Lemon scions may also increase the susceptibility of the root stock to gummosis

The nucleotide sequence of the putative DNA-binding domain of SlARF6A was amplified, fused to that of the glutathione S-transferase (GST) tag in the pGEX-4T-1 bacterial expression vector, and expressed in Escherichia coli strain BM Rosetta. Isopropyl-β-D-thiogalactopyranoside was used to induce recombinant protein expression, and a GST-Tagged Protein Purification Kit was used to purify the protein. Purified recombinant proteins and biotin-labeled fragments of the target promoters were used to conduct EMSA with a LightShift Chemiluminescent EMSA kit, following the method described in detail by Han et al. The Pierce Biotin 3' End DNA Labeling Kit was employed to label the probe containing the TGTCTC sequence with biotin, and the same unlabeled sequence was used in the assay as a competitor. To generate the mutant probe, the TGTCTC fragment was changed to AAAAAA. Biotin-labeled DNA was detected with a ChemiDoc™ MP Imaging System according to the manufacturer's procedures. All primers used for the EMSA are listed in Supplementary Table S3.

Many varieties of Citrus grown commercially will reproduce reasonably true to type from seed through a phenomenon known as nucellar embryony, as described by Frost and Soost. These include sweet orange, Rough lemon, Cleopatra mandarin, Troyer citrange, and others. Other varieties, such as the pummelos, citrons, Algerian tangerine, and Temple orange, are monoembryonic and will not reproduce true to type from seed, since they produce only hybrid progeny. Still others, such as certain lemons, Mexican lime, nansho-daidai, and yuzu, have relatively low polyembryony. On the other hand, some varieties such as the Washington navel orange, Bearss lime, Satsuma mandarin, and Pixie mandarin are seedless, or nearly so, and it would be impossible to obtain seed in adequate quantities.

Seedling trees of most varieties are vigorous upright growers, extremely thorny, and late in coming into bearing. Many citrus species and varieties also do not inherently possess adequate hardiness to cold, resistance to soil-borne diseases, tolerance to salinity or high water tables, and other desirable qualities that would enable them to survive long in their planted environment as seedlings. Consequently, some are occasionally propagated by vegetative means such as cuttings, layers, or marcots, but generally by grafting or budding onto a root stock of some closely related variety, species, or even genus, or hybrids thereof, to take advantage of the root stock's influence. The latter method is the one generally used in the propagation of citrus varieties, and the general technique is described by Platt and Opetz. The root stock and the scion interact with each other to produce stionic effects which may have a three-way influence. First, there is the influence of the scion upon the stock. For example, the scion may increase the sodium uptake of the root stock, and the depth of root penetration or the extent and configuration of the root system may vary on a given root stock, depending upon the scion variety budded upon it. Second, the root stock may greatly alter the scion. It may dwarf or invigorate it; yields may be increased or decreased; fruit size may be altered; fruit quality can be affected; hardiness of the scion may also be influenced; and maturity and precociousness of the scion are other considerations. Third, the union of a stock and a scion may give rise to a combination which may be affected by an external factor which by itself affects neither the stock nor the scion individually. Such a situation exists when the virus disease tristeza is present. The sweet orange by itself is not measurably affected, nor is the sour orange by itself. However, when sweet orange is budded upon sour orange and tristeza virus is present, the combination will decline or die from the disease. If sour orange is budded upon sweet orange, there is no expression of the disease, even though the virus is present.

While it has long been recognized that the stock and scion have a reciprocal influence on each other, there must also be a certain affinity or congenial relationship between them for healthy development of the composite plant. Different root stocks vary in their adaptability to different soils and climatic conditions, as well as to different scion varieties. All of these factors will be discussed individually in depth at a more appropriate place in this monograph. It is unrealistic to assume that any one root stock will have the general qualities to meet each grower's needs, yet each individual grower's specific root stock need is a critical choice for the success of his orchard. The successful choice of a root stock is important because it is to be a permanent part of that orchard and cannot be changed at will like a cultural practice, a fertilizer program, an irrigation schedule, or pest control procedures. A considerable fund of information has accumulated in recent years throughout the citrus-producing regions of the world concerning root stock reactions under different conditions. A recent comprehensive, but concise, review of much of this body of knowledge has been made by Wutscher. The last previous detailed treatment of the subject was by Webber and Batchelor and Rounds. The purpose of the present monograph, therefore, is to emphasize the importance of careful root stock selection for different commercial citrus varieties. It is an attempt to evaluate the response of the root stock to the influence of its total environment; describe the difficulties commonly encountered in choosing desirable root stocks; stress the need to consider not only the results of scientifically planned field experiments, but also the findings and observations of discriminating growers; and review critically the results of root stock and related experiments. In addition, much original data and many observations on trials conducted by the writer during a 40-year period at the Citrus Research Center of the University of California, Riverside are presented.

These are based on nearly 500 root stocks and 100,000 trees grown by the author and supplemented by numerous observations in commercial orchards throughout the world. It is just as important to use carefully selected root stocks of superior performance as it is to use selected superior fruit varieties. The selection of a root stock should be for the purpose of enhancing the merits of a scion variety, or adapting it to its total environment, rather than merely to follow local custom. The selection of improved fruit varieties has been in progress for centuries, but the choice of the best root stocks to use received little attention until about a hundred years ago, and most of it has come in the last fifty years. Although propagation by budding and grafting was understood many centuries ago in China and elsewhere, it was mostly considered a curiosity rather than a practical measure, and most commercial citrus trees throughout the world were grown as seedlings. Schenk states that citrus was budded in China before the time of Christ. The author finds this acceptable but extremely difficult to document. In Han Yen-chih's "Chü Lu," written in 1178 A.D. and translated into English in 1923, he does describe the grafting process. He also indicates that the method for grafting trees will be found in the work called "Ssu Shih Tsuan Yao," of which I have not been able to find a record. Greek and Roman references are numerous. According to Condit, the grafting and budding of fruit trees were common practices in the time of Theophrastus, who said, "The ingrafted part uses the other as an ordinary plant uses the ground. Whenever they have split the trunk, they insert the scion which they have fashioned to a wedge-shape; then with a mallet they drive it in to fit as snugly as possible." Virgil in his Georgics and Eclogues, according to Condit, provided an explicit as well as a poetic account of grafting. Briefly it is as follows: "Nor is there one sole way to graft and bud, for where young eyes from the tree's bark swell forth, bursting their slender sheaths, a slit is made just at the knot; and here they fasten in the shoot from a stranger tree and bid it thrive in the moist sapwood. Or, smooth trunks are gashed and wedges through the solid timber driven. Then fruit scions set; in no long time the tall tree skyward lifts its laden boughs and sees with wonder what strange leaves it bears and fruitage not its own." According to Gallesio, Palladius, who is thought to have written at some time in the 5th century, states, "They graft the citron in April in warm districts and in May in colder latitudes, placing the graft not upon the bark, but opening the stem or trunk near the ground." Also, woodcuts from Ferrari's "Hesperides" clearly show the grafting technique being practiced in an ancient herbarium. In the early history of citrus culture, especially in the western world, consumption of the fruit was principally restricted to the area in which it was produced. As faster and more convenient transportation developed, new consumer markets developed, which in turn stimulated new plantings. New plantings created new problems.

Probably the general use of grafted or budded trees first became an accepted practice as a result of the outbreaks of "foot-rot" in the middle of the 19th century; as pointed out by Fawcett, the disease occurred in the Azores in 1834. It was observed in the Mediterranean area that the sour orange was resistant to the disease and could be successfully used as a root stock in place of the susceptible sweet orange varieties. The introduction of seedless commercial varieties such as the Valencia orange, Pera, or Marsh grapefruit also gave impetus to propagation by budding. Since the middle of the 20th century, considerable attention has been given to the choice of the best root stocks for the different varieties and soil conditions. However, for the most part, reliance has been placed on the results obtained in a given area with one stock, or at best with only two or three others for comparison. For nearly half a century in California, root stock choice was based on the adage, "Use sweet orange on the light soils and sour orange on the heavy soils." If a root stock gave commercially successful results, it was generally considered satisfactory, with little incentive to search for a better one. This attitude still prevails in many citrus-producing countries today. The greater part of the information available in any locality with reference to successful stocks was based on the experience of the growers, and often this is the most reliable information available. It must be recognized, however, that such local experiences are inadequate, as they do not include replicated trials with a sufficient number of stocks or scion sources over a long enough period of time to supply valid comparisons. This is a situation which has prevailed in nearly every citrus-growing area, but since the 1940s growing emphasis has been given to systematic trials, especially in the United States and Brazil. Probably the one single factor which has given more impetus than any other to the recent emphasis on citrus root stock trials around the world is the occurrence of tristeza, and the intricate response of various stock-scion combinations to its presence. While the disease has primarily affected the sweet orange and certain other scions on sour orange root stock, many other root stocks and combinations can be affected; see the section on tristeza elsewhere in this monograph. Studies on the etiology of tristeza focused greater attention on other diseases caused by transmissible agents, which are often influenced in their reaction by certain root stock-scion combinations or the root stock itself. The inroads of disease usually necessitate growing citrus where other citrus has previously grown, and the complex replant problem becomes a serious consideration. Rising production costs demand greater production per acre, either in tons of fruit or pounds of soluble solids. Greater production necessitates new markets, and the resulting competition increases demands for better-quality fruit, both for fresh fruit and for concentrate or other purposes. In order to solve these problems, it becomes increasingly necessary to obtain the greatest production at the least possible cost.

The HPT-Ds element described in the present study is a novel Ds whose HPT gene has a dual function

A total of 26 stably transformed callus lines were obtained. Without DEX treatment, five calli were randomly selected from each callus line and GUS-assayed to detect transposant cells. Transposant cells were detected in 84.6% of the callus lines, but mosaic GUS patterns occurred at low frequency compared with the GUS patterns of untreated pJJ86 calli. GUS assays were also carried out on 14 pHPT-Ds1-transformed plantlets; 57.1% of the plantlets contained transposant cells, and these were rarely distributed in the tissue. The results from the pJJ86 and pHPT-Ds1 transformants indicated that there was background transposition activity in the rice calli and plantlets selected on hygromycin media, and that the growth of rice cells containing HPT-Ds transposition events was partially suppressed by hygromycin counter-selection. To characterize the HPT-Ds excision events, rice genomic DNA of eight GUS-positive pHPT-Ds1 transformants was extracted and examined by nested polymerase chain reaction (PCR) using Ubi- and GUS-specific primers. The reconstructed Ubi:GUS sequence containing the HPT-Ds empty donor site was confirmed by sequencing the 657-bp PCR product. These results suggested that HPT-Ds elements in the pHPT-Ds1 transformants had excised from the T-DNA. To obtain more information about the background transposition in the GVG-inducible AcTPase system, we constructed pHPT-Ds3 and pHPT-Ds4 by removing the 35S:GVG from pHPT-Ds1 and pHPT-Ds2, respectively. According to the GUS assay results for pHPT-Ds3-transformed callus lines, 57.1% of the callus lines showed somatic transposition. The mosaic GUS patterns of pHPT-Ds3 transformants were similar to those of pHPT-Ds1 transformants, and the transposition frequency was slightly lower than the 84.6% observed for pHPT-Ds1 calli. Our explanation for the results of pHPT-Ds1 and pHPT-Ds3 is that the background transposition in the GVG-inducible Ac-Ds system was primarily due to low-level leaky expression of 4xUAS:AcTPase.
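The frequencies above are simple proportions of GUS-positive lines. A minimal sketch of how such counts could be tabulated and compared is given below; it is purely illustrative, the counts are back-calculated from the reported percentages (22 of 26 callus lines, 8 of 14 plantlets for pHPT-Ds1), and the statistical comparison is our own addition rather than part of the original analysis.

```python
# Illustrative only: recomputing somatic transposition frequencies from GUS
# assay counts and comparing two groups with Fisher's exact test.
from scipy.stats import fisher_exact

def transposition_frequency(gus_positive, total):
    """Fraction of lines or plantlets with at least one GUS-positive (transposant) sector."""
    return gus_positive / total

callus_freq = transposition_frequency(22, 26)   # ~84.6% for pHPT-Ds1 calli
plantlet_freq = transposition_frequency(8, 14)  # ~57.1% for pHPT-Ds1 plantlets

# Hypothetical comparison of the two groups (not performed in the study):
# rows are groups, columns are [GUS-positive, GUS-negative] counts.
odds_ratio, p_value = fisher_exact([[22, 26 - 22], [8, 14 - 8]])
print(f"calli {callus_freq:.1%}, plantlets {plantlet_freq:.1%}, Fisher p = {p_value:.3f}")
```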

The bacterial Cre-lox site-specific recombination system has been shown to be a useful tool for generating chromosomal rearrangements in plants. To stabilize transposed HPT-Ds, we used the Cre-lox system to delete AcTPase after HPT-Ds transposition. pHPT-Ds5 and pHPT-Ds6 carry AcTPase flanked by two lox sites and a Cre gene that is separated from the upstream Ubi promoter by HPT-Ds. To control AcTPase, the two vectors were designed in such a way that HPT-Ds can transpose in the rice genome; excision of HPT-Ds reconstructs Ubi:Cre, and Cre recombinase then mediates lox-lox recombination, thereby deleting AcTPase. For examination of rice cells containing deletion events, GUS and Bar were used in pHPT-Ds5 and pHPT-Ds6, respectively. We transformed pHPT-Ds5 into the rice cultivars Taipei 309 and Nipponbare. Three of four Taipei 309 transformants and 23 of 30 Nipponbare transformants showed transposition, as indicated by mosaic GUS patterns. Genomic DNA of the Taipei 309 transformants was examined by nested PCR using Ubi- and Cre-specific primers. The reconstructed Ubi:Cre sequence containing the HPT-Ds empty donor site was confirmed by sequencing the 0.6-kb PCR product. A 4.3-kb fragment containing the HPT-Ds full donor site in the T-DNA was also amplified by PCR from the transformants. Additionally, the pHPT-Ds6 vector was transformed into Nipponbare, and the Ubi:Cre sequence was detected in genomic DNA of five of the eight pHPT-Ds6 transformants. Using adaptor-ligation PCR, we successfully cloned the rice genomic sequences flanking the HPT-Ds terminus from one pHPT-Ds5 transformant and one pHPT-Ds6 transformant. BLAST analysis showed that the flanking sequences were from rice chromosomes 6 and 4, respectively, thereby confirming the reinsertion of excised HPT-Ds in the rice genome.

In analysis of T1 populations of pHPT-Ds5 plants, four of six T1 families showed transposition based on spotted GUS staining of the leaf tissues. These results indicated that the HPT-Ds element in pHPT-Ds5 and pHPT-Ds6 transformants transposed in rice and that the transposition restored Cre expression and induced deletion of AcTPase.

During plant transformation and selection, HPT expression relies on the upstream Ubi promoter to confer resistance to hygromycin in selection media. In the case of transposition, the HPT gene may be inactive because the 5' flanking sequence of HPT-Ds at a new genomic site may not be able to provide promoter activity. It is conceivable that most transposant cells become sensitive to hygromycin. Therefore, the counter-selection nature of the HPT gene in HPT-Ds can be used to diminish transposant cells in newly transformed rice calli on hygromycin media. In testing pHPT-Ds1 and pHPT-Ds3, we observed that early transposition events in transformed calli and plantlets were suppressed by hygromycin. The few transposant cells in the calli and plantlets that were able to grow under hygromycin selection pressure might represent escapes from selection or might benefit from promoter activity of the 5' transposon-flanking sequence. Because transposition requires transposase, an important theme in transposon tagging research is how to efficiently control transposase activity. It was reported that AcTPase driven by strong promoters mediated high-frequency Ds excision in several dicot plants. Strong double enhancers of the CaMV 35S promoter adjacent to a wild-type Ac element induced high-frequency Ac excision in rice transformation. In the present study, we used the GVG-inducible promoter to control AcTPase expression, and transposition was induced to high levels by DEX treatment of pJJ86-transformed callus.

However, we also observed leaky expression of AcTPase in the GVG-inducible Ac system in the transformants of pJJ86, pHPT-Ds1, and pHPT-Ds7, based on GUS assay results. Our explanation is that the transposition background was primarily due to a low level of leaky expression of 4xUAS:AcTPase. Consistently, in the pHPT-Ds3 and pHPT-Ds5 vectors that lack 35S:GVG, 57.1% of the pHPT-Ds3 transformants and 76.6% of the pHPT-Ds5 transformants still showed transposition in somatic cells. Although the wild-type Ac element has a weak promoter that supports only 0.2% of the expression of the CaMV 35S promoter, wild-type Ac itself can transpose in rice with relatively low activity for three successive generations. This indicates that even weak expression of AcTPase can cause transposition events. In Southern blot analysis of genomic DNA of pHPT-Ds7 and pHPT-Ds8 transformants, the 5.4-kb hybridizing band represented HPT-Ds at the full donor site in the T-DNA. Some of the hybridizing bands larger or smaller than 5.4 kb might be from transposed HPT-Ds; the pHPT-Ds7 transformants showed transposition in somatic cells, as suggested by GUS assay results. Because the rice genomic DNA for Southern hybridization was extracted from only a few leaves of a transformant, transposition in other leaves might not have been detected. Also, since a rice transformant may have more than one T-DNA copy and may contain rearranged T-DNA, the hybridizing bands larger or smaller than 5.4 kb might possibly result from transgene rearrangement. Nevertheless, the efficacy of the HPT-Ds element when combined with the GVG-inducible AcTPase and the Cre-lox recombination system in pHPT-Ds7 and pHPT-Ds8 was confirmed by GUS assay and Southern blot analysis. For inducible Ac-Ds systems, it was reported that in Arabidopsis AcTPase controlled by a heat-shock promoter transactivated Ds upon heat-shock treatment of flowering plants, and the transposition was subsequently stabilized by withdrawal of the heat-shock treatment. The heat-shock method used in Arabidopsis seems impractical for rice because of the difficulty of heat-shock treatment of a large number of rice plants. For the GVG-inducible Ac-Ds system, however, transgenic rice plants can be treated with DEX by hydroponics or by spray to induce transposition at higher frequency, provided that the treatment conditions are optimized. Because the Cre-lox-based strategy will help delete AcTPase and thereby stabilize transposed HPT-Ds elements, we will be able to use GVG-inducible AcTPase to induce higher levels of transposition while using the Cre-lox system to stabilize transposition. The pHPT-Ds7 and pHPT-Ds8 vectors contain both the GVG-inducible AcTPase and the Cre-lox systems and therefore provide a good solution to major drawbacks of the Ac-Ds system. Further work needs to be done with the pHPT-Ds8 vector to determine how to enhance transposition by DEX induction and how to use the Bar gene to select Basta-resistant transposant progeny. In summary, we have constructed a series of Ac-Ds transposon tagging vectors and tested individual approaches to control AcTPase expression and transposition in transgenic rice. The pJJ86 and pDs-Ac-GVG vectors were made for testing GVG-inducible AcTPase; the pHPT-Ds1 vector was for testing both GVG-inducible AcTPase and HPT-Ds, which contains a dual-functional HPT gene; and the pHPT-Ds5 and pHPT-Ds6 vectors were for testing the deletion of AcTPase via Cre-lox recombination.

The pHPT-Ds7 and pHPT-Ds8 vectors contain all the features of GVG-inducible AcTPase, HPT-Ds, and Cre-lox recombination and were tested for comprehensive control of AcTPase and HPT-Ds. The Ac-Ds transposon tagging vectors described in the present paper are publicly available and provide useful resources for the functional genomics of a wide range of plants, especially monocots.

Introgressions from wild species are important resources for broadening the genetic base of cultivated species, particularly for traits where little variability currently exists. This is certainly the case for cultivated tomato, an economically important vegetable crop species with limited genetic variability. The genetic diversity of tomato has been augmented through introgression of alleles from several closely related wild species. One of these species, Solanum habrochaites, has been an important source of favorable alleles for horticultural traits such as yield, fruit size, and fruit quality. This wild species also contains genes for resistance to major tomato diseases such as late blight, bacterial canker, gray mold, and early blight. In cultivated tomato, genetic diversity is particularly lacking for resistance to late blight disease caused by Phytophthora infestans. Late blight is an economically important and devastating disease of both tomato and potato, resulting in approximately $5 billion in annual crop losses and chemical control costs. S. habrochaites has genetic resistance to P. infestans. QTL for quantitative resistance to P. infestans from S. habrochaites have been mapped on each of tomato's 12 chromosomes. Three of these QTL were then fine-mapped by Brouwer and St. Clair using near-isogenic lines (NILs). QTL affecting horticultural traits including plant height, plant shape, maturity, yield, and fruit size were co-located and/or linked with each of these resistance QTL, suggesting the potential for linkage drag in crosses between S. lycopersicum and S. habrochaites. Subsequently, we mapped the QTL on S. habrochaites chromosome 11 at higher resolution using sub-NILs and detected multiple closely linked QTL controlling both foliar and stem resistance to P. infestans within a 9.4-cM region. To gain a better understanding of the genetic basis of QTL controlling horticultural traits and their linkage relationships with QTL for resistance to P. infestans, we used this same set of sub-NILs in the present study to map loci controlling horticultural traits and to determine linkage relationships among them and with P. infestans resistance QTL. We also sought to identify useful breeding material with improved late blight resistance in this set of sub-NILs. In the present study, we further investigated the P. infestans resistance QTL lb11 region identified by Brouwer and St. Clair, conferred by an S. habrochaites introgression on tomato chromosome 11, as a potential source of useful quantitative resistance to late blight disease of tomato. Specifically, our goals in this study were to: assess the effects and extent of linkage drag of QTL controlling horticultural traits with P. infestans resistance QTL on S. habrochaites chromosome 11; identify markers closely linked to P. infestans resistance QTL and to positive alleles at horticultural QTL to facilitate MAS breeding; and identify potentially useful breeding lines for future breeding of tomato cultivars with improved quantitative resistance to late blight disease.

We developed a set of sub-near-isogenic lines (sub-NILs) in S. lycopersicum for a chromosome 11 introgression containing resistance QTL from P. infestans-resistant S. habrochaites accession LA2099 via marker-assisted selection during backcrossing and selfing generations, as described by Johnson et al. Methods used for genomic DNA extractions, genotyping with chromosome 11 PCR-based markers, primer sequences, enzymatic reaction conditions, and restriction enzymes used for each marker were described by Johnson et al. We genotyped 1902 BC6S1 progeny to identify recombinant sub-NIL progeny for the chromosome 11 introgression from S. habrochaites; of these, a subset of 852 progeny was used to construct a linkage map for the introgressed region.
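As an illustration of the kind of marker-based screen described above, the sketch below flags recombinant progeny from ordered marker calls across the introgression. The data layout, allele coding, and sample names are hypothetical; this is not the authors' actual genotyping pipeline, only a minimal example of the logic.

```python
# Minimal sketch (assumed data layout): a progeny is scored at ordered
# chromosome 11 markers as 'H' (S. habrochaites allele) or 'L'
# (S. lycopersicum allele); a switch between adjacent markers implies a
# crossover within the introgression, i.e., a recombinant sub-NIL candidate.
def is_recombinant(genotype_calls):
    """Return True if the ordered marker calls switch parental class at least once."""
    calls = [c for c in genotype_calls if c in ("H", "L")]  # drop missing data
    return any(a != b for a, b in zip(calls, calls[1:]))

progeny = {
    "BC6S1-0001": "HHHHLLLL",   # crossover between markers 4 and 5
    "BC6S1-0002": "HHHHHHHH",   # non-recombinant across the introgression
}
recombinants = [name for name, calls in progeny.items() if is_recombinant(calls)]
print(recombinants)  # ['BC6S1-0001']
```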

All these results demonstrate that NRG2 is an important nitrate regulator

CPK10, CPK30, and CPK32 are subgroup III Ca2+-sensor protein kinases. The activity of these CPKs is enhanced in response to nitrate within 10 min, and all three have been identified as master regulators that orchestrate primary nitrate responses. Analysis of the single cpk10, cpk30, and cpk32 mutants has shown that they only slightly affect nitrate-responsive genes. However, in the double mutants cpk10 cpk30, cpk30 cpk32, and cpk10 cpk32 and the triple mutant cpk10 cpk30 cpk32, the induction of nitrate-responsive marker genes was reduced. Transcriptomic analysis showed that CPK10, CPK30, and CPK32 modulate various key cellular and metabolic functions immediately activated by nitrate. Furthermore, CPK10, CPK30, and CPK32 can phosphorylate NLP7 at Ser205 in vivo in the presence of nitrate and trigger the nitrate-CPK-NLP signaling network.

Recently, three other nitrate regulatory genes, NRG2, CPSF30-L, and FIP1, were identified using a forward genetics approach. Two independent NRG2 T-DNA insertion lines showed reduced induction of nitrate-responsive sentinel genes, indicating that NRG2 plays an essential role in nitrate signaling. At the physiological level, NRG2 affects the accumulation of nitrate in plants. Further investigation revealed that it regulates nitrate uptake by roots and the translocation of nitrate within plants. These effects might be achieved through modulation of NRT1.1 and NRT1.8, as the expression of both genes was altered in the mutants. Genetic and molecular data suggest that NRG2 regulates the expression of, and works upstream of, NRT1.1, but functions independently of NLP7 in regulating nitrate signaling. In addition, transcriptomic analysis showed that four clusters of differentially expressed genes in the nrg2 mutant were involved in the regulation of nitrate transport and response, confirming that NRG2 plays essential roles in nitrate regulation.

Interestingly, NRG2 can directly interact with NLP7 in vitro and in vivo, as revealed by yeast two-hybrid and BiFC experiments. In addition, the Arabidopsis genome harbors 15 members that are homologous to the NRG2 protein. All members of the NRG2 family contain two conserved domains of unknown function: DUF630 and DUF632. Whether and which other members of the NRG2 family are involved in nitrate signaling, and what functions the two domains play, are interesting and pertinent directions for future research. The CPSF30 gene encodes 28-kD and 65-kD proteins. The 28-kD protein was identified as a cleavage and polyadenylation specificity factor; it contains three characteristic CCCH zinc finger motifs and functions as both an endonuclease and an RNA-binding protein. The 65-kD protein contains an additional YTH domain along with the three zinc finger motifs. A mutant allele of CPSF30, cpsf30-2, with a G-to-A mutation in the first exon of CPSF30, was identified by genetic screening and used to explore the functions of CPSF30. The induction of nitrate-responsive genes in response to nitrate is reduced in cpsf30-2 compared to the wild type and restored to wild-type levels in a complemented CPSF30-L/cpsf30-2 line, indicating that CPSF30-L is involved in nitrate signaling. CPSF30-L regulates nitrate accumulation and assimilation at the physiological level. Transcriptomic analysis showed that genes involved in six nitrogen-related clusters, including nitrate transport and assimilation, were differentially expressed in the cpsf30-2 mutant. Further study revealed that CPSF30 could work upstream of NRT1.1 and independently of NLP7. CPSF30 can also affect NRT1.1 mRNA 3' UTR alternative polyadenylation. All these results demonstrate that CPSF30 plays an important role in the primary nitrate response. FIP1, a factor interacting with poly(A) polymerase 1, was identified as a positive nitrate regulatory gene using the fip1 mutant and a FIP1/fip1 complemented line.

Nitrate-induced expression of NIA1, NiR, and NRT2.1 is repressed in the fip1 mutant and can be restored to wild-type levels in the FIP1/fip1 line. Furthermore, FIP1 can affect nitrate accumulation through regulating the expression of NRT1.8 and nitrate assimilation genes. Further research found that FIP1 could interact with CPSF30 and that both genes can regulate the expression of CIPK8 and CIPK23. In addition, FIP1 can affect the 3' UTR polyadenylation of NRT1.1, a function similar to that of CPSF30. CPSF30, FIP1, and some other components such as CPSF100 can form a complex involved in poly(A) processing. Together, these findings suggest that the complex composed of CPSF30 and FIP1 may play important roles in nitrate signaling. In the extant literature, key molecular components involved in primary nitrate responses, covering nitrate sensors, transcription factors, protein kinases, and polyadenylation specificity factors, have been identified. Methodologically, this has been achieved by using forward and reverse genetics as well as systems biology approaches. In summary, in the presence of both ammonium and nitrate, NRT1.1 functions as a sensor. NLP7, NRG2, and CPSF30 have been shown to work upstream of NRT1.1. NRG2 can interact with NLP7, whilst NLP7 can interact with, and be phosphorylated by, CPK10. In addition, NLP7 binds to the promoter of NRT1.1, as revealed by ChIP and EMSA assays. NRT1.1 works upstream of, and regulates, TGA1/TGA4. Furthermore, CIPK23 interacts with and phosphorylates NRT1.1. CPSF30 can interact with FIP1 and regulate the expression of both CIPK8 and CIPK23. NIGT1.1 can suppress NLP7-activated NRT2.1. In the presence of nitrate but absence of ammonium, NRT1.1 works only as a nitrate transporter, not as a nitrate regulator. The other nitrate regulatory genes, including NRG2, NLP7, CPSF30, FIP1, LBD37/38/39, SPL9, NIGT1s, CIPK8, and CIPK23, still play important roles in nitrate signaling.

Serving as an important molecular signal, nitrate also regulates plant growth and development; this has been particularly well studied in the context of root system architecture. Root system architecture controls the absorption and utilization of nutrients and affects the growth and biomass of plants. Lateral root growth is dually regulated by nitrate availability, including local induction by NO3− and systemic repression by high NO3−. Several key genes and miRNAs functioning in nitrate-regulated root architecture have been characterized. The ANR1 gene, encoding a member of the MADS-box family of transcription factors, was the first gene to be identified as an essential component in nitrate-regulated root growth. Nitrate can inhibit the growth of lateral roots when seedlings are grown on media with higher nitrate concentrations compared to lower nitrate concentrations. However, ANR1-downregulated lines obtained by antisense or co-suppression exhibited reduced lateral root length when grown on media with various nitrate concentrations, indicating enhanced sensitivity of lateral root growth to nitrate inhibition in those lines. Overexpression of ANR1 in roots resulted in increased lateral root growth, and this phenotype was strongly dependent on the presence of nitrate, suggesting post-translational control of ANR1 activity by nitrate. Interestingly, the expression of ANR1 in nrt1.1 mutants was dramatically diminished, and these mutants exhibited reduced root elongation in nitrate-rich patches, similar to what was observed with the ANR1-repressed lines. This suggests that NRT1.1 works upstream of ANR1 in local nitrate-induced lateral root growth. Recently, the auxin transport role of NRT1.1 was characterized in lateral root primordia when seedlings were grown on media without nitrate or with low nitrate concentrations; under these conditions, NRT1.1 represses the growth of pre-emerged LR primordia and young LRs by inhibiting the accumulation of auxin. Subsequently, Gojon's lab revealed that the NRT1.1-mediated regulation of LR growth depends on the phosphorylation of NRT1.1 and that the non-phosphorylated form of NRT1.1 can transport auxin in the absence of nitrate or at low nitrate concentrations. Further investigation indicated that in the presence of nitrate, the promoter activity of NRT1.1 was stimulated and mRNA stability was increased, while protein accumulation and auxin transport activity were repressed in LRPs, resulting in accelerated lateral root growth. Altogether, NRT1.1 offers a link between nitrate and auxin signaling during lateral root development. However, the mechanisms by which nitrate induces the expression of NRT1.1 while repressing NRT1.1 protein accumulation and auxin transport activity in LRPs remain unclear. Previous reports have also documented that several genes involved in hormone biosynthesis or response regulate the root system architecture response to changes in nitrate availability. NRT2.1, a high-affinity nitrate transport gene, is induced by nitrate and sugar. Wild-type seedlings grown on media with high carbon/nitrogen ratios exhibited significantly repressed lateral root initiation compared to a standard growth medium.

However, the repression of lateral root initiation was diminished in nrt2.1 mutants under high C/N ratios, and this phenotype was not dependent on nitrate uptake. These results demonstrate that NRT2.1 plays an important role in lateral root initiation under high C/N ratios. In addition, nrt2.1 mutants exhibited significantly reduced shoot-to-root ratios compared to wild-type and nrt2.2 mutant seedlings when grown in common hydroponic conditions. The reductions in shoot-to-root ratios were even greater for nrt2.1 nrt2.2, suggesting that both genes are involved in regulating plant growth, with NRT2.1 playing the more important role. Moreover, nrt2.1 mutants exhibited reduced LR growth on media with limited nitrogen, and this reduction was more severe in nrt2.1 nrt2.2 double mutant plants, indicating that both genes are important regulators of lateral root growth. Recently, Gutierrez's lab determined that the induction of NRT2.1 and NRT2.2 is directly regulated by TGA1/TGA4 in response to nitrate treatment. Further investigation showed that tga1 tga4 plants and nrt2.1 nrt2.2 plants exhibited similarly decreased LR initiation compared with wild-type plants, indicating that NRT2.1 and NRT2.2 work downstream of TGA1/TGA4 to modulate LR initiation in response to nitrate. Lateral root emergence was also affected in tga1 tga4 and nrt2.1 nrt2.2 mutants, and tga1 tga4 mutants displayed larger reductions in LR emergence than nrt2.1 nrt2.2 mutants, revealing that TGA1/TGA4 control LR emergence through additional pathways besides NRT2.1 and NRT2.2. Moreover, primary roots in tga1 tga4 mutants were shorter than in wild-type and nrt2.1 nrt2.2 plants, suggesting that the modulation of primary root growth by TGA1/TGA4 is independent of NRT2.1 and NRT2.2. The protein kinase CIPK8 is involved not only in the primary nitrate response, but also in long-term nitrate regulation of root growth. In the presence of nitrate, cipk8 mutants exhibited longer primary roots compared to the wild type, indicating that CIPK8 modulates primary root growth in a nitrate-dependent pathway.

Furthermore, the key nitrate regulator NLP7 has also been found to control root growth under both N-limited and N-rich conditions, in addition to its essential roles in the primary nitrate response. nlp7 mutants developed longer primary roots and higher LR density on N-rich media. Interestingly, transgenic lines overexpressing NLP7 also exhibited increased primary root length and lateral root density under 1, 3, and 10 mM nitrate conditions. The mechanisms underlying these similar root phenotypes in the mutant and overexpression lines are still unknown. These findings indicate that NLP7 plays an important role in nitrate-regulated root development. Recently, it has been shown that the Ca2+-sensor protein kinases CPK10, CPK30, and CPK32 are also involved in nitrate-specific control of root development. In response to nitrate, icpk mutants had reduced lateral root primordia density and reduced lateral root elongation compared to the wild type. In the last few years, microRNAs have emerged as important regulators involved in nitrate-regulated root growth. It has been reported that miR167 targets and controls expression of the auxin response factor ARF8, and both miR167 and ARF8 are expressed in the pericycle and lateral root cap. Levels of miR167 were repressed under nitrogen treatment, leading to accumulation of ARF8 in the pericycle. In contrast to wild-type plants, which displayed increased ratios of initiating vs. emerging lateral roots in response to nitrogen treatment, the miR167a overexpression lines and arf8 mutants were insensitive to nitrogen in terms of lateral root emergence. These results indicate that the miR167/ARF8 regulatory module plays an important role in controlling lateral root growth in response to nitrogen. In addition, miR393 was induced by nitrate treatment, specifically cleaved the auxin receptor AFB3 transcript, and modulated the accumulation of AFB3 mRNA in roots under nitrate treatment. The primary root of the wild type was shorter when treated with KNO3 compared to KCl; however, the primary roots of the miR393-overexpression line and the afb3 mutant were insensitive to nitrate treatment. miR393/AFB3 also controls lateral root growth as well as primary root growth. The miR393 overexpression line and afb3 mutant showed diminished densities of initiating and emerging lateral roots compared to the wild type, which exhibited increased growth of lateral roots in response to nitrate treatments.

Plant growth and yield in natural environments depend on a plethora of interactions with bacteria and fungi

To avoid mortality in the greenhouse, plants received water every three to four days, which differs greatly from natural rainfall patterns, even in the wet season. During the dry season, soil moisture is often between 30% and 80% of sample weight for marsh and low-ecotone soil cores; in the high-ecotone and upland locations, soil moisture accounts for 0% to 30% of sample weight. We have no data showing whether we achieved similar conditions with potting soil in the greenhouse, and regardless, we would expect more rapid drying in pots than for in situ field soil. Thus, as for all greenhouse studies, results presented here should be used with caution when predicting performance in the field. To expand on these results, the greenhouse experiment should be repeated using native marsh soil as the substrate and including higher salinity treatments. Response to treatment in marsh soil should provide a more accurate prediction of response to field conditions. Surprisingly, measured differences in water potential did not translate to differences in plant performance. Neither growth nor survival was visibly affected by watering treatment, even in the potentially stressful low-volume / high-salinity treatments. Existing literature suggests that halophytes concentrate solutes to generate low tissue water potential, allowing continued passive uptake of water. In this case, low tissue water potential is not detrimental, since it prevents or reduces water deficits that can impair growth. Another possible reason for the lack of effect on growth was the timing of the experiment. We began the experiment in June, when most individuals were beginning to reproduce. Beyond this point, energy is less likely to be allocated to vegetative growth and more likely to be allocated towards reproduction or survival strategies, such as salt management. In contrast, younger plants allocate the majority of their energy to vegetative growth.
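To put the watering treatments and the measured tissue water potentials on a common scale, the osmotic potential imposed by a seawater dilution can be approximated with the van't Hoff relation. The sketch below is our own back-of-the-envelope estimate, assuming full-strength seawater at roughly 1.1 osmol per liter and a greenhouse temperature near 25 °C; it is not a value reported in the study.

```python
# Approximate osmotic potential (MPa) of diluted seawater via van't Hoff:
# psi_s = -C * R * T, with C the osmolar concentration of the dilution.
R = 0.008314                 # L MPa mol^-1 K^-1
T = 298.0                    # K, assumed greenhouse temperature (~25 C)
SEAWATER_OSMOLARITY = 1.1    # osmol L^-1, approximate value for full-strength seawater

def osmotic_potential(fraction_seawater):
    """Approximate osmotic potential (MPa) of a given seawater dilution."""
    return -(fraction_seawater * SEAWATER_OSMOLARITY) * R * T

for f in (0.2, 0.6, 1.0):
    print(f"{f:.0%} seawater ~ {osmotic_potential(f):.2f} MPa")
# 60% seawater comes out near -1.6 MPa under these assumptions.
```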

Adaptations such as salt glands or specialized vacuoles are energetically expensive and require energy normally allocated to growth. Additionally, decreasing water potential has been shown to inhibit cell expansion, which would disproportionately affect young plants, since the rate of cell expansion in mature plants is already reduced. Therefore, by better aligning the experimental period with the natural growth period, and by focusing on young plants, treatment effects on growth might become more apparent. D. spicata displayed the greatest variability in tissue water potential, and this variability may have been influenced by factors other than watering treatment. D. spicata was grown in shallower, wider pots in a sandier potting medium. In both volume treatments, water drained quickly through the pots, leading to uneven soil saturation that likely affected treatment efficacy and made it difficult to draw definitive conclusions regarding the large range in water potential. However, low water potential values are not uncommon for D. spicata; other authors have observed sustained, highly negative water potentials used to compensate for soil salinity. The highest D. spicata mortality in our experiment occurred in the drought treatments, with three out of four deaths in the 60% seawater drought treatment. Increased drainage and evaporation rates likely also contributed to mortality for this species. E. californica was affected by both the drought and salinity treatments, which lowered water potential and had a slight negative effect on growth. Interestingly, our results contrast with those from another study. Jong measured E. californica net dry weight when irrigated with a saline Hoagland solution in sandy soil, using artificial sea salt instead of seawater. The water potential of their maximum salinity treatment was similar to that of our 60% seawater treatment, but the authors found that dry weight of E. californica decreased significantly as salinity increased.

This experiment used young E. californica seedlings: the first tissue harvest occurred when seedlings were one month old and continued every 8 days until all plants were harvested, with the authors noting a difference in dry weight between treatments. Since we did not observe a difference in aboveground biomass, the contrasting results may be due to the misalignment of the experiment start time with the natural growth period. F. salina did not show an effect of salinity and drought stress on total plant growth, since biomass was maintained across treatments. In contrast, Barbour and Davis's results showed a decrease in F. salina's growth as salinity increased, with total mortality at approximately 89% seawater Hoagland solution. Plants in their non-saline control showed the most growth, measured by the length of the main and lateral shoots. The majority of our plants remained constant in size. The high mortality rate across treatments was driven by aphid infestation, despite attempts to control aphids with Botanigard. The highest mortality occurred in the drought, 60% seawater treatment, suggesting that stringent growing conditions may have made plants more susceptible to aphid-induced mortality. J. carnosa was the only species that added biomass between the first and final surveys. However, growth did not differ across treatments. Other studies have found mixed effects of salinity and drought treatments on growth. One study found that J. carnosa grew best in non-saline or minimally saline environments, using recently germinated individuals with stalks that extended 1-10 cm above the growing substrate. In contrast, two other studies found that J. carnosa can tolerate salinities twice as concentrated as seawater, but that moderate salinity conditions were ideal. St. Omer and Schlesinger used Hoagland solution in a greenhouse experiment to determine that maximum J. carnosa growth, measured by total dry weight, occurred at about 30%-60% NaCl, with growth decreasing above 60% salinity.

They did not record plant age. Plant age likely contributed to the differences in growth among studies, given the difference in energy allocation between mature and immature plants, which would have been exacerbated at higher salinities. Barbour and Davis used younger plants, which may have been more sensitive to treatment effects than those in the St. Omer and Schlesinger experiment and the results reported here. Our experimental results align more closely with those of St. Omer and Schlesinger, even though our experimental design was more similar to that of Barbour and Davis. The experiment should also be repeated with younger plants to determine whether age has any effect on salinity and drought tolerance. Other experiments that used younger plants observed a decrease in growth or total biomass as salinity levels increased, contrasting with our finding that plants were largely unaffected by salinity. Seedlings are more desirable for revegetation operations due to the reduced propagation cost and transplant effort, so it is important to determine the range of conditions young plants can tolerate. Our experiment addressed a knowledge gap regarding halophyte salinity and drought tolerance that could inform the design of future restoration projects and experiments in Pacific coast salt marshes. Revegetation efforts often have low success rates due to the stringent abiotic conditions within the ecotone, which disproportionately affect seedlings. Furthermore, the different natural distributions of halophytes within the ecotone suggest that salinity and drought tolerance could vary among species. In our experiment, treatments had negligible effects on growth or survival; only water potential was affected. These results imply that these five species could survive anywhere within the ecotone by employing different physiological adaptations, such as succulence and salt glands, to withstand stressful conditions. However, our results are likely not representative of plant performance in the field due to a variety of factors. The timing of our experiment did not align with the natural growth period of the plants, causing us to use mature plants rather than young seedlings. Additionally, our use of 60% seawater is not representative of the tidal inundation that some of the species may experience in the field. Therefore, future experiments will examine how these factors influence outcomes, using lessons learned during this effort. Taken together, findings from this set of experiments will allow us to 1) identify zones within the ecotone maximizing survival and establishment on a by-species basis, or 2) demonstrate that species are flexible enough to compensate for conditions across the ecotone, making careful placement of species unnecessary. In either case, these experiments will provide valuable insight for restoration practitioners. Ultimately, we hope that this work will support rapid and robust strategies to recreate thriving salt marsh systems.

The microbial community associated with roots has been proposed to be assembled in two steps: first, the rhizosphere is colonized by a subset of the bulk soil community and, second, the rhizoplane and the endosphere are colonized by a subset of the rhizosphere community. Intriguingly, a set of recurring plant-associated microbes has emerged. This review focuses on how plants shape their rhizobiome. On the one hand, common factors among plants likely lead to the assembly of the core microbiome.
On the other hand, factors specific to certain plants result in an association with microbes that are not members of the core microbiome. Here, we discuss evidence that plant genetic factors, specifically root morphology and root exudation, shape rhizobiomes.

Initial evidence for an influence of plant genotype on rhizobiome composition was that similar rhizobiomes assembled in association with arabidopsis and barley grown under the same experimental conditions, although they displayed different relative abundances and some specific taxonomic groups. A correlation between phylogenetic host distance and rhizobiome clustering was described for Poaceae species, distant relatives of arabidopsis, rice varieties, and maize lines, but not for closely related arabidopsis species and ecotypes. Distinct rhizobiomes were also described for domesticated plants, such as barley, maize, agave, beet, and lettuce, compared with their respective wild relatives. Interestingly, not all plants have a rhizobiome distinct from bulk soil: some species, such as maize and lotus, assemble a distinct rhizobiome, whereas other species, such as arabidopsis and rice, assemble a rhizobiome similar to bulk soil. The former species display a strong, and the latter a weak, rhizosphere effect. The cause of this phenomenon is currently unknown. The strength of the rhizosphere effect varies with the developmental stage of the plant. Similarly, root exudation and microbial communities were found to change with the age of the plant. Furthermore, distinct rhizobiomes were associated with different developmental stages of arabidopsis, rice, and Avena fatua grown during two consecutive seasons. Pioneering studies demonstrated the ability of microbes to alter plant development. Overall, it appears evident that host genotype, domestication, and plant development influence the composition of rhizobiomes. As an alternative to plant developmental stage, residence time of plants in soil has been discussed as a hypothesis for successive microbiomes. These contrasting results might be partially explained by differing environmental influences, host plants, or soils, and additional work is needed to resolve these questions. In this review, we discuss root morphology and root exudates as two genetic factors shaping plant–microbiome interactions, and we examine the following aspects: how root morphology and border cells affect rhizobiomes; how plant exudates shape the rhizobiome; and possible plant transport proteins involved in exudation. Figure 1 provides a general overview of exometabolite networks in the rhizosphere, and Box 1 illustrates the interplay between root exudates, border cells, and rhizobiomes in phytoremediation. We conclude by integrating these ideas into a possible scenario of rhizobiome assembly.

Rhizobiomes are influenced by their spatial orientation towards roots in two ways. First, the radial proximity of microbial communities to roots defines community complexity and composition, as described in recent publications and as outlined by the two-step model of microbial root colonization mentioned above. Second, the lateral position of microbes along a root shapes the community, as exemplified by early studies. Importantly, recent microbiome studies take into consideration the former, but not the latter, aspect. In this section, we discuss specific microbial associations with various root regions, and the role of spatially distinct root exudation. Root tips are the first tissues that make contact with bulk soil: root tips are associated with the highest numbers of active bacteria compared with other root tissues, and likely select microbes in an active manner.
The root elongation zone is specifically colonized by Bacillus subtilis, which suggests a particular role of this zone in plant–microbe interactions. Mature root zones feature a microbial community distinct from that of root tips. Their community includes decomposers, which could be involved in the degradation of dead cells shed from old root parts.

The potential for artifact introduction should be recognized

In addition, coatings, either on pristine ENMs or acquired in the test media or environment, may alter toxicity. The ENM amounts and forms that elicit biological impacts should be understood and related to the administered dose to inform environmental risk assessment. This is the essence of dosimetry in ENM ecotoxicology. As with other exposure concerns related to hazard assessment, appropriate dose measurement depends on receptor and ENM characteristics, which are scenario-dependent. For example, mammalian cells are harmed by ENMs that become internalized, yet uptake pathways depend on ENM characteristics. By contrast, bacterial receptors that affect ecosystem-level processes may be impacted by externally associated ENMs at the cell membrane, or even in the surrounding environment. In those cases, dosimetry relies on understanding ENM behavior in the complex media in which bacteria reside, which is scenario-driven. End point observations of ENM damage will also depend on ENM processing in cells. During hazard assessments, understanding the history of biological interactions with internalized, or otherwise associated, ENMs may not be feasible. Yet efforts should be made to measure and spatially associate ENM bio-burden within biological receptors, and to examine the relationships of applied ENMs to apparent effective dose and to effects. Overall, it is not recommended to categorically exclude select conditions, environmental compartments, protocols, receptors, or end points, since any may be environmentally relevant. Rather, careful experimental designs around well-conceived, plausible exposure scenarios should be emphasized; also, ENM characteristics that influence biological responses under the dynamic conditions that occur in the environment and in biota should be characterized and quantified. One could imagine identifying key material-environment-system determinants that could be systematically varied to provide test results across relevant determinant ranges. Such ideas are not specific to ENM ecotoxicology, but could establish defensible practices for making progress in hazard assessment while the ENM industry rapidly advances.
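As one concrete illustration of dose metrics beyond administered mass concentration, the sketch below converts a mass dose of monodisperse spherical ENMs into particle-number and surface-area doses. The particle size and density are hypothetical example values, and real suspensions are polydisperse and transform in media, so this is a simplification for orientation, not a recommended dosimetry protocol.

```python
# Illustrative dose-metric conversion for idealized spherical, monodisperse ENMs.
import math

def dose_metrics(mass_conc_mg_per_L, diameter_nm, density_g_per_cm3):
    """Return (particles per L, surface area in cm^2 per L) for a given mass dose."""
    r_cm = (diameter_nm * 1e-7) / 2.0                                   # nm -> cm
    particle_mass_mg = density_g_per_cm3 * (4.0 / 3.0) * math.pi * r_cm**3 * 1e3  # g -> mg
    n_per_L = mass_conc_mg_per_L / particle_mass_mg                     # number dose
    area_per_L = n_per_L * 4.0 * math.pi * r_cm**2                      # surface-area dose
    return n_per_L, area_per_L

# Hypothetical example: 1 mg/L of 20-nm particles with an assumed density of 4.2 g/cm^3
print(dose_metrics(1.0, 20.0, 4.2))
```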

Mesocosms are "enclosed experimental systems [that] are intended to serve as miniaturized worlds for studying ecological processes." While the distinctions between mesocosms and other experimental systems are not well delineated, mesocosms are generally larger experimental units and inherently more complex than benchtop microcosms or more simplified laboratory experiments. Mesocosms for ENM ecotoxicology are intended to increase the complexity of experimental systems, such that more realistic ENM physical compartmentalization, speciation, and uptake into biota can be achieved alongside biotic effects. Also, the intent is to realistically characterize ENM fates and interactions with environmental system components, and to reveal fluxes among compartments of the ecosystem responsive to internal system influences that are unconstrained by investigator interventions. Mesocosms have been used for testing relative biotic effects of ENM variants, and for discerning ENM effects separately from effects of dissolution products. Mesocosm testing may occur following individual organism and microcosm studies. For example, to study how ENMs impact crops, one could first establish the potential for hydroponic plant population impacts, use soil microcosms to understand ENM bio-availability via observing soil microbial community shifts, and then scale up to greenhouse mesocosms of soil-grown crops. This sequence could provide an understanding of plant–microbe interactions, ENM transformation and uptake in plants, and effects on food nutritional quality. Still, there are relatively few published studies using mesocosms to assess ENM ecological hazards, and the design and operating variables of existing mesocosm studies are wide-ranging. By contrast, wastewater-associated ENMs, and their transformations, effects, and fates in wastewater treatment plants (WWTPs), along with the potential for ENMs to impact WWTP processes, have been more extensively studied. Since sewage contains ENMs, WWTPs are inherent forms of mesocosms.

Studies at entire WWTP scales elucidate ENM fates during wastewater treatment, including significant association with the biological treatment biomass that becomes bio-solids. However, only 50% of bio-solids produced in the U.S. are land-applied, and these bio-solids are used on less than 1% of agricultural land in the U.S. Bio-solids are land-applied even less in the European Union. Thus, knowledge of ENM fates in WWTPs, and of how final residues are disposed of regionally, is needed to develop plausible exposure scenarios. Concerns with mesocosms include factors that can be difficult to control and the possibility that mesocosms respond to artifacts, including “wall” or “bottle” effects. Further, mesocosms can conflate direct and indirect toxicant effects, typically do not have a full complement of control conditions, and can deliver inconclusive results. Biological communities in mesocosms also lack realistic ecological interconnections, interactions, and energy flows. Nevertheless, outcomes can be improved by using carefully designed mesocosms and associated experiments. For example, combined with analyzing mesocosm samples, performing practical “functional assays,” such as for heteroaggregation, allows for anticipating phenomena and later interpreting ENM transformation and compartmentalization in mesocosms. Similarly, batch physical association experiments, if conducted using realistic components and over time frames that allow for quantifiable mass transfer, can assess ENM biomass association and readily suggest ENM fates in WWTPs. Still, hydrodynamic conditions are different in simplified tests versus mesocosms, which in turn differ from those in the natural environment. Hydrodynamic conditions will impact ENM fate and transport and thus exposure concentrations at receptor boundaries. The inability to capture real environmental hydrodynamic conditions at any experimental scale is a general shortcoming for both ecotoxicology and transport studies. Although mesocosms do not fully simulate real environments, they are useful and should be employed, albeit judiciously due to their resource intensity, within a strategy. Recommendations regarding the use of mesocosms for assessing ENM environmental hazards are provided in Table 2. Mesocosm studies must be designed and conducted around well-conceived questions related to plausible exposure scenarios; they should use select end points, potentially including sensitive omics measurements, to answer questions or test hypotheses. Internal process and constituent characterization should be thorough and equally responsive to well-conceived, realistic scenarios.
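To make the batch “functional assay” idea above concrete, the sketch below shows one common way such data are reduced: fitting an apparent first-order rate constant to the decline of ENM remaining in suspension during a batch biomass-association test. This is a minimal illustration in Python; the sampling times, concentrations, and the assumption of first-order kinetics are hypothetical rather than taken from any study cited here.

```python
# Illustrative fit of an apparent first-order ENM-biomass association rate
# from hypothetical batch supernatant concentrations (a "functional assay").
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])            # h, sampling times (assumed)
c = np.array([1.00, 0.78, 0.63, 0.42, 0.20, 0.06])       # normalized supernatant conc. (assumed)

def first_order(t, k):
    """Fraction of ENM remaining in suspension under first-order removal."""
    return np.exp(-k * t)

k_fit, _ = curve_fit(first_order, t, c, p0=[0.5])
print(f"apparent association rate constant: {k_fit[0]:.2f} 1/h")
```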

Functional assays, that is, “intermediary, semi-empirical measures of processes or functions within a specified system that bridge the gap between nanomaterial properties and potential outcomes in complex systems,” should precede mesocosm designs and experiments, and should aid in interpreting mesocosm results. Mesocosm artifacts are avoidable by following best practices for design and operation, although possible interferences of particulate material testing with assays must be evaluated. As for other hazard assessments, ENMs should be tested across the product life cycle, within a motivating exposure scenario. Similarly, suitable material controls should be used to test hypotheses regarding ENM-specific effects. The recommendations made regarding exposure conditions in assessing ENM hazard potentials for model organisms should be followed for mesocosm studies. Additionally, mesocosm designs should incorporate exposure durations that are sufficiently long to address population growth, reproduction, bio-accumulation, trophic transfer, and possibly transgenerational effects. Sufficient measurements of ENM concentrations and time-dependent properties must be made for clear interpretations. Key to successfully interpreting mesocosm studies is using validated methods for measuring ENMs in complex media. Measurements should include the size distribution, concentration, and chemical composition of ENMs in the test system, including biological tissues, over time. In some cases, transformation products are inventoried thoroughly during long-term, field-relevant exposures. Detection schemes require sample preparation to assess in situ exposures before quantitative analyses, or drying and embedding before visual confirmation by electron microscopy. Recovery methods continue to develop, such as cloud point extraction for concentrating ENMs from aqueous matrices. Depending on the exposure scenario, in situ aging may be a study objective. However, it is important to define what “aging” really means and the specific application domain, since “aging” is a wide-ranging term and can be used in different contexts, making comparisons impossible. At a minimum, studies should be undertaken over sufficiently long time frames, which may include repeated ENM applications, such that appropriate aging, that is, time-dependent transformation under realistic conditions, can occur. Alternatively, preaged ENMs could be used. However, preaging protocols are not yet standardized and, while some convention could allow for comparing across studies, the appropriate aging protocol would depend on the envisioned exposure scenario. Cocontaminants should be considered and potentially introduced into mesocosms, since some ENMs sorb, concentrate, and increase exposure to other contaminants. Select end points should account for ENMs acting as chemosensitizers. Also, mesocosm study designs should anticipate and plan for measuring secondary effects. In summary, while few mesocosms have been used in assessing ENM ecotoxicity, and mesocosms are also rare in conventional chemical testing, such systems potentially offer greater realism. Still, mesocosm exposure and design considerations should derive from immediate environmental applicability.
The value of mesocosms to ENM ecotoxicology can increase by following recommendations including: addressing context-dependent questions while using relevant end points; considering and minimizing artifacts; using realistic exposure durations; quantifying ENMs and their products; and considering ENM aging, cocontaminants, and secondary biological effects. Further, it should be acknowledged that mesocosms do not fully recreate natural environmental complexity. For example, aquatic mesocosms do not recreate actual environmental hydrodynamic or temperature-cycling conditions. Hydrodynamics can significantly impact ENM aggregation or heteroaggregation, and fate and transport. Therefore, potential impacts on the resulting concentrations at the receptor boundaries should be considered. ENM environmental exposure conditions herein refer to where, how much, and in what forms ENMs may occur in the environment.

These are central issues for the ecotoxicology of ENMs because they suggest test exposure scenarios in which ENMs could impact biological receptors within environmental compartments influenced by various factors. These issues also influence outcomes of key regulatory concern: persistence, bio-accumulation, and toxicity. Discharges underpin exposure scenarios, are initiated by situational contaminant releases, and are referred to as source terms. Mass-balance-based multimedia simulations mathematically account for released contaminants as they are transported and exchanged between environmental media, where contaminants may be transformed and may ultimately concentrate, potentially with altered compositions and structures. Far-field exposure modeling approaches vary by question, the modeling purpose, the required spatial resolution, the temporal conditions, and the predictive accuracy required. Material Flow Analysis (MFA), which is a type of life-cycle inventory analysis, has been advanced to track ENM flows through various use patterns into volumes released into broad environmental compartments, scaled to regional ENM concentrations that release via WWTPs to water, air, landfills, and soil. Such models estimate exposure concentrations in part via engineering assumptions and in part via heuristics. Also, such material flow analysis models depend on underlying data that are not readily available, making it difficult to validate model results and potentially leading to inaccurate estimates. Multimedia models for ENMs can predict environmental concentrations based on sources of continuous, time-dependent, or episodic releases, and are similar to multimedia models that predict environmental concentrations of organic chemicals and particle-associated organic chemicals. For ENMs, predicting the particle size distribution as affected by particle dissolution, agglomeration, and settling is desired for various spatial and temporal end points. For one integrated MFA and multimedia model, user-defined inputs are flexible around product use and ENM release throughout material life cycles. It is noted that although validation of multimedia models is a formidable task, various components of such models have been validated, as have model predictions for particle-bound pollutants. Most far-field models of ENMs face major challenges. First, the quantities and types of ENMs being manufactured are unknown to the general public due to issues surrounding confidential business information, leading to a reliance on market research. The resulting public uncertainty will persist while nanotechnology continues a course of rapid innovation, as is typical of new industries. The rates of product use and ENM releases at all life-cycle stages are also not defined. There are challenges associated with modeling transport processes through specific media and across media, highly divergent time scales of processes, the lack of required input parameters, and the need for validation of results. Several multimedia models developed for conventional chemicals could be adapted for ENMs, but few account for fate processes specific to nanoparticles. In addition, various transport models for a single medium and for the multimedia environment could be adapted for far-field analysis of ENMs, but few account for fate processes distinctive to ENMs. Moreover, their validation, which would require ENM monitoring data, is a major challenge.
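As an illustration of the mass-balance bookkeeping such multimedia models perform, the sketch below integrates a two-compartment (water column and sediment) balance for a hypothetical ENM subject to continuous release, dissolution, heteroaggregation/settling, resuspension, and hydraulic outflow. All rate constants and the release rate are invented for illustration; a real far-field model would resolve more compartments, size classes, and transformation processes.

```python
# Minimal two-compartment mass-balance sketch for an ENM (illustrative only).
# A constant release to the water column is partitioned by first-order
# processes: dissolution, heteroaggregation + settling, resuspension, and
# hydraulic outflow. All parameters are hypothetical.
from scipy.integrate import solve_ivp

RELEASE = 10.0       # g/day released to the water column (assumed)
K_DISS = 0.01        # 1/day dissolution (assumed)
K_SETTLE = 0.05      # 1/day heteroaggregation + settling to sediment (assumed)
K_OUT = 0.02         # 1/day hydraulic outflow (assumed)
K_RESUSP = 0.001     # 1/day resuspension from sediment (assumed)

def dmdt(t, m):
    water, sediment = m
    d_water = RELEASE - (K_DISS + K_SETTLE + K_OUT) * water + K_RESUSP * sediment
    d_sediment = K_SETTLE * water - K_RESUSP * sediment
    return [d_water, d_sediment]

sol = solve_ivp(dmdt, (0, 365), [0.0, 0.0])
water_1y, sediment_1y = sol.y[:, -1]
print(f"After 1 year: {water_1y:.1f} g in the water column, {sediment_1y:.1f} g in sediment")
```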
The lack of understanding of many fundamental ENM behaviors under environmental conditions propagates into broad uncertainties, for example in predicting ENM removal to solids or aqueous fractions in WWTPs. ENM surface chemistries fundamentally affect ENM agglomeration or dispersion and likely affect bio-availability. Some species on ENM surfaces may degrade in the environment, while other adsorbates can be acquired. Carbonaceous ENMs may be transformed or degraded by environmental processes such as photo-, enzymatic, chemical, and bio-degradation. Redox and other environmental conditions will affect nanomaterial surfaces, which for nano-Ag includes the formation of sulfide that inhibits dissolution. Surface chemistry also affects the transformation rates of primary particles and aggregates.
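A toy calculation can illustrate how a surface transformation of the kind just described shifts persistence: if sulfidation is treated as progressively passivating the particle surface, the cumulative dissolved fraction drops well below what untransformed particles would release. The sketch below is purely illustrative; the first-order treatment and every rate constant are assumptions, not measured values.

```python
# Toy kinetics sketch (illustrative only): dissolution of a nano-Ag particle
# slows as its surface is progressively sulfidized, modeled by scaling the
# dissolution rate with the remaining untransformed surface fraction.
import numpy as np

k_diss = 0.10        # 1/day, dissolution rate of a pristine surface (assumed)
k_sulf = 0.30        # 1/day, sulfidation rate of the surface (assumed)
dt, days = 0.1, 30

mass, fresh_fraction, dissolved = 1.0, 1.0, 0.0
for _ in np.arange(0, days, dt):
    release = k_diss * fresh_fraction * mass * dt
    mass -= release
    dissolved += release
    fresh_fraction *= (1.0 - k_sulf * dt)      # untransformed surface decays away

no_sulf = 1 - np.exp(-k_diss * days)            # dissolved fraction without sulfidation
print(f"dissolved after {days} d: {dissolved:.2f} (vs {no_sulf:.2f} without sulfidation)")
```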

Nitrate–nitrogen leaching accounted for only 15% of the applied nitrogen

As the season continued and plant uptake declined, excess water further mobilised nitrate–nitrogen out of the root zone, as is evident from 27/04/07 onwards. At the end of the crop season, little nitrogen remained in the soil system, and what did remain was well beyond the reach of the plants. This nitrogen is expected to continue leaching downwards over time and become a potential source of nitrate–nitrogen loading to the groundwater. Additionally, peak NO3–N concentrations in the soil profile and in drained water were significantly higher than the Australian environmental standards for the protection of 80% and of 95% of species. The NO3–N concentrations in the soil solution also occasionally exceeded the Australian drinking water quality standard for nitrate. High levels of nitrate–nitrogen below the crop root zone are undesirable, as some recharge to groundwater aquifers can occur, in addition to flow into downstream rivers, which are used for drinking water and irrigation. These findings are consistent with other studies, in which high nitrate concentrations in drainage water under drip and furrow fertigated irrigation systems have been reported. The seasonal water balance was computed from cumulative fluxes calculated by HYDRUS-2D. Estimated water balance components above and below the soil surface under a mandarin tree are presented in Table 4. It can be seen that, even in a highly precise drip irrigation system with applications based on the estimated crop evapotranspiration (ETc), a large amount of applied water drained out of the root zone. This drainage corresponded to 33.5% of the applied water, and occurred because highly permeable, light-textured soils, such as those found in this study, are prone to deep drainage whenever the water application exceeds ETc.

The drainage amount in our study falls within the range of recharge fluxes to groundwater reported by Kurtzman et al. under citrus orchards in a semiarid Mediterranean climate. Mandarin root water uptake amounted to 307.3 mm, which constitutes about 49% of the applied water. Root water uptake increased slightly when the model was run without considering solute stress, which is not a significant difference. This further substantiates the results obtained for seasonal ECsw in Fig. 6, where salinity remained below the threshold over the season. Evaporation accounted for 17.7% of the total water applied through irrigation and rainfall. The modelling study overestimated the sink components of the water balance by 4.79 mm. There were major differences between water input and output from January 2007 onwards. During this period, irrigation and precipitation significantly exceeded tree water uptake, which eventually resulted in deep drainage from March 2007 onwards. Therefore, the current irrigation scheduling requires adjustment during this period. This illustrates how the simulations were helpful in evaluating the overall water dynamics in the soil under the mandarin tree. The nitrogen balance is presented in Table 5. The nitrogen fertilizer was applied either in the form of NH4+ or NO3−, but NH4+ transforms quickly to NO3− through the process of nitrification. Model simulations showed that nitrification of NH4+ was very rapid and that most of the NH4+–N converted to NO3− before it moved to a depth of 20 cm; no traces of NH4+ were observed below this depth. It is apparent that the nitrification of NH4+ took place in the upper soil layer, which contains the organic matter and moisture that support microorganisms, facilitating the nitrification of NH4+. Though NH4+ was initially nitrified to NO2− and subsequently to NO3−, NO2− was short-lived in the soil and decayed to NO3− quickly. Therefore, the simulated plant NH4+–N uptake was only 0.71 kg ha−1. Hence, the NO3–N form was responsible for most of the plant uptake, corresponding to about 85% of the applied nitrogen.
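As a quick consistency check, the water-balance fractions quoted above can be reassembled from the reported root water uptake. The short calculation below is back-of-envelope arithmetic only, and the derived totals are approximations rather than values read from Table 4.

```python
# Back-of-envelope closure check of the seasonal water balance, using only the
# fractions and the uptake depth quoted in the text. Derived numbers are
# approximate; they are not taken from Table 4.
uptake_mm = 307.3          # mandarin root water uptake (reported)
uptake_frac = 0.49         # ~49% of applied water (reported)
drain_frac = 0.335         # 33.5% of applied water (reported)
evap_frac = 0.177          # 17.7% of applied water (reported)

applied_mm = uptake_mm / uptake_frac            # ~627 mm of irrigation + rain
drainage_mm = drain_frac * applied_mm           # ~210 mm of deep drainage
evaporation_mm = evap_frac * applied_mm         # ~111 mm of soil evaporation
residual = applied_mm - (uptake_mm + drainage_mm + evaporation_mm)

# The residual is only a few millimetres, i.e., the balance closes to within
# rounding of the reported percentages.
print(f"applied ~ {applied_mm:.0f} mm, residual ~ {residual:.0f} mm")
```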

The monthly N applications were slightly higher than plant uptake during the flowering and fruit growth periods. However, the monthly uptake was slightly higher than the N application between these periods. The high frequency of N applications in small doses resulted in a nitrogen uptake efficiency in citrus similar to that in other studies. Similarly, Scholberg et al. reported a doubling of nitrogen use efficiency as a result of frequent application of N in a dilute solution. Slightly higher uptake was recorded when fertigation was applied in the second-to-last hour of an irrigation event, as compared to when it was applied early in the irrigation event. Hence, it can be concluded that the timing of fertigation does not have a major impact in a normal fertigation schedule with small and frequent N doses within an irrigation event in light-textured soils. Similar results were also obtained in our earlier study in a lysimeter planted with an orange tree, which revealed that the timing of fertilizer N applications in small doses in an irrigation event with a low emitter rate had little impact on the nitrogen uptake efficiency. The monthly N balance revealed that most of the N leaching happened between March 2007 and August 2007, which was correlated with the extent of deep drainage occurring during this period. NO3–N losses ranging from 2% to 15% were reported by Paramasivam et al. and Alva et al., attributable in part to improved management of N, which could also be a contributing factor in the current estimates. In our study, it is evident that there were significant deep drainage and nitrate–nitrogen leaching losses, which could be reduced by appropriate management. Hence, different simulations involving the reduction of irrigation and fertigation applications during the whole or part of the crop season were conducted, to optimize water and nitrogen uptake and to reduce their losses from the soil. Increasing the irrigation frequency with shorter irrigation events, while maintaining the same irrigation volume, had no impact on deep drainage and N leaching.

However, the seasonal salinity increased by 11% compared to the standard practice. This confirms that the current irrigation schedule, with respect to irrigation frequency, appears to be optimal under the experimental conditions. In S2, Dr_W and Dr_N were reduced by 14.4% and 19%, respectively, but salinity increased by 11%. However, a sustained reduction in irrigation by 20% eventually reduced Dr_W and Dr_N by 28.1% and 38.3%, respectively, at the expense of a 4.9% decline in plant water uptake, but with a 4% increase in N uptake. However, salinity increased by 25.8% compared to the normal practice, which would likely have a significant impact on plant growth. Scenarios S4 and S5 were based on decreasing the nitrogen application by 10% and 20%, resulting in a decrease in N leaching by 7.4% and 14.8%, respectively, along with a much larger reduction in plant N uptake, suggesting that reducing the fertilizer application alone is not a viable option for controlling N leaching under standard conditions. A combined reduction in irrigation and fertigation by 10% further reduced N leaching by 5.5%, compared to reducing irrigation alone, but at the same time plant N uptake was reduced by 5% more than in S2. Similarly, reducing irrigation and N application by 20% produced a pronounced reduction in N leaching and water drainage, but it also resulted in a decrease in plant N uptake by 15.8% and in water uptake by 4.8%, compared to normal practice. At the same time, salinity increased by 25.8%, which is similar to S3. The reduction in plant water and N uptake would have a major impact on plant growth and yield, and would adversely impact the sustainability of this expensive irrigation system. Hence, reducing fertilizer applications does not seem to be a good proposition under the current experimental conditions, as it results in an appreciable decline in plant N uptake. However, Kurtzman et al. reported that a 25% reduction in the application of N fertilizer is a suitable agro-hydrological strategy to lower the nitrate flux to groundwater by 50% under different environmental conditions. Rather, reducing irrigation alone seems to be a better option to control the deep drainage and N leaching losses under the conditions encountered at the experimental site. Additionally, it is worth noting that in S3 and S7 the salinity during the period between October and December at a depth of 25 cm, and during December at a depth of 50 cm, increased considerably and was higher than the threshold level, confirming that a sustained reduction in irrigation and fertigation is not a viable agro-hydrological option for controlling water and N leaching under the mandarin orchard.

However, it seems unnecessary to reduce irrigation applications uniformly across the season, as suggested by Lidón et al. Rather, irrigation could more profitably be reduced only during the particular period when excess water was applied. The water and N balance data in our study revealed that an imbalance between water applications and uptake occurred during the second half of the crop season, i.e., from January until August 2007, resulting in maximum drainage and N leaching, coinciding with the fruit maturation and harvesting stage. Hence, there is a need to reschedule irrigation within this period, rather than reducing water applications throughout the entire season. Keeping this in mind, the following five scenarios were executed, in which irrigation was reduced during the second half of the crop season, i.e., between January and August, by 10%, 20%, 30%, 40%, and 50%, respectively. Scenarios S10, S11, and S12 showed an enormous potential for reducing water and N losses. In S10, Dr_W and Dr_N were reduced by 8% and 4% more than in S7, N uptake was increased by 6.9%, and salinity was also 4% lower than in S7, which seems quite promising. On the other hand, in S11 and S12, Dr_W and Dr_N were reduced to a greater extent than in S10, but soil salinity increased substantially due to a considerable reduction in the leaching fraction. This is also evident in Fig. 12, which shows that the monthly soil solution salinity in S11 and S12 at the 25 and 50 cm soil depths increased dramatically between January and August. Although ECsw remained below the threshold level, except at a 50 cm depth in S12 during March 2007, there is a significant likelihood of it increasing further in subsequent seasons, which would ultimately impact the growth and yield of the mandarin trees. Hence, under the current conditions, Scenario S10 represents the best option to control excessive water and N losses and high salinity, and to increase the water and N efficiency for mandarin trees. Other permutations and combinations, involving fertilizer reductions along with S10, did not provide further improvements in controlling water and N leaching. It is concluded that simulations of irrigation and fertilizer applications, using HYDRUS, can be helpful in identifying strategies to improve the water and N efficiency of drip irrigation systems for perennial horticultural crops. Climate change, continuing population growth, and urbanization are exerting unprecedented pressure on the fresh water supply, mandating the use of nontraditional water resources, such as treated municipal wastewater, for agricultural irrigation. Treated wastewater irrigation, however, poses potential risks because a multitude of trace contaminants, including numerous pharmaceutical and personal care products (PPCPs), are incompletely removed during wastewater treatment and may enter the soil-plant continuum. Land application of bio-solids and animal manure constitutes yet another route for such trace contaminants to enter agroecosystems. Once in agroecosystems, PPCPs may be translocated into edible plant parts and thus enter terrestrial food chains, including the human diet. Consequently, plant accumulation of PPCPs is raising widespread concerns due to potentially deleterious effects on the environment and human health. Studies over the past decade show that various PPCPs can be taken up from soil by plants. For instance, Wu et al. detected 16 PPCPs in edible tissues of eight common vegetables grown with treated wastewater irrigation.
Additional studies have shown that some PPCPs can exhibit significant phytotoxicity, leading to inhibition of plant growth. For example, carbamazepine, an antiepileptic drug, displayed phytotoxic effects toward Cucurbita pepo at concentrations above 1 mg kg−1 in soil.

We are interested in the determination of protein structures in solution for several reasons

By analysing the intensities of many cross peaks in the spectrum, and paying particular attention to those at the site of the gap in the backbone, we have determined that the backbone is relatively undistorted, as compared to standard “B DNA,” even at the site opposite the gap. There does appear to be a slight overwinding of the DNA at this site, giving a slightly greater than average local helix twist angle. We have also used the temperature dependence of the NMR spectrum to examine melting in these sequences. We find that, as expected, melting occurs first as a separation of the dimer into monomers, followed at higher temperature by opening of the hairpin loop. For sequences with four GC base pairs in the overlap region and six in the stem of the hairpin loop, these melting events are separated by about 15°C, with the first transition having slower exchange kinetics than the latter. We have previously carried out NMR studies of the binding of the antibiotic distamycin-A to a specific DNA dodecamer. This drug shows a strong preference for binding at AT-rich sequences, AATT in the cases which we have studied in detail. From the NMR data it was possible to position the drug relative to the DNA, and to qualitatively evaluate the degree of distortion of the DNA by the drug. We have now supplemented these measurements by carrying out energy minimization calculations with the AMBER program from UCSF. A general problem with such calculations is that the minimization algorithm cannot find a global minimum of energy in a reasonable time. We find that, starting with the drug relatively far from the position determined from NMR, convergence is obtained, but to a drug binding position which is far from that experimentally determined. However, when the calculation is started near the experimentally determined structure, the energy obtained upon convergence is lower.

The coordinates obtained through such experimentally guided modelling are better than what can be determined from the NMR data alone. Presently we are trying to extend the measurements to other DNA sequences, and to analyse the differences in binding constant and kinetics of drug binding in light of the detailed structural information we now have about the bound state. First, we expect to be able to use cystines at specific residues in proteins to predictably fold the remainder of the sequence. The expectations are based upon the observed folding in several naturally occurring peptides which have common cystine positions. While we have previously assigned the resonances in two of these peptides, and qualitatively modeled their secondary structure, it is important to carry out a more quantitative analysis. To do this we have collected 2D NOE spectra and integrated cross peak intensities to obtain interproton distances for apamin, a small neurotoxic peptide from honey bee venom. With a relatively large number of such distances we have carried out distance geometry calculations. This approach takes the distance estimates, including estimates of the experimental precision, and computes from these the coordinates of structures which are consistent with all of the input data. From the range of structures obtained from repeated calculations we can analyse the precision with which the structure is determined, e.g. its effective “resolution.” In a similar fashion we have begun an analysis of the relationship between structure, function and immunogenicity of protein toxins isolated from sea anemones. These again are related through having common cystine positions, and several other conserved residues. However, different toxins from this family are active against different receptors, and do not all show antibody cross-reactivity.
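For readers unfamiliar with how cross-peak intensities become the distance estimates used in such distance geometry calculations, the usual starting point is the isolated spin-pair approximation, in which the NOE intensity scales as the inverse sixth power of the interproton distance and is calibrated against a pair of known separation. The snippet below illustrates that relation with hypothetical intensities; it is not the calibration actually used in the work described here.

```python
# Isolated spin-pair approximation: NOE cross-peak intensity I ~ r**-6, so an
# unknown distance is estimated by calibrating against a reference proton pair
# of known separation. Intensities below are hypothetical.
def noe_distance(i_ij, i_ref, r_ref):
    """Estimate an interproton distance (angstroms) from cross-peak intensities."""
    return r_ref * (i_ref / i_ij) ** (1.0 / 6.0)

# Example: calibrate on a fixed pair at ~2.45 A and estimate a weaker contact.
print(f"{noe_distance(i_ij=0.35, i_ref=1.0, r_ref=2.45):.2f} A")   # ~2.9 A
```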

We have now fully assigned the resonances of the Radianthus paumotensis toxin II, and have established that its only regular secondary structure is beta sheet, with strands connected by a variety of loops and turns. We will now begin structure calculations for this protein, and at the same time have begun assigning a related protein, Rp III. We have just obtained the sequence of Rp III through a collaboration with Prof. Ken Walsh at the University of Washington, and have already established that there is a high degree of homology in the structures. From the refined structures for proteins in a related family, such as these, we should be in a good position to look for the common structural features which give rise to the similar folding of the peptide chain, and yet be able to see the differences which lead to different activities and immunogenicity. Phosphorus-31 NMR spectroscopy is evolving into an important means for determining the in vivo concentrations of phosphorylated metabolites and has definite clinical implications and applications. Our previous contributions to this field demonstrated the feasibility of employing implanted radio frequency coils around organs of laboratory animals to permit eliciting the NMR spectra over long periods to establish baseline spectra. Using these devices and techniques we have determined phosphorus exchange reactions in rat hearts and kidneys, in situ, and have demonstrated that there are pools of metabolic intermediates that are not directly visible in the NMR spectra. Comparison of the results from NMR spectroscopy with those obtained from radiolabeling studies on chick embryo fibroblasts also showed that there are significant pools of phosphorus not visible in the conventional P-31 NMR spectrum. Both sets of studies suggest that compartmentation occurs. The invisibility of these pools is assumed to arise because of immobilization of the molecules by cellular macromolecules or organelles. The methods of solid-state NMR spectroscopy are being applied to render these solid-like species visible. In particular, we use the technique of magic-angle sample spinning, together with cross polarization for signal enhancement.

Application of these methods to a large number of biological phosphorylated molecules, for which crystal structure data are available, has permitted us to correlate the values of the chemical shielding tensor elements with details of chemical bonding within the phosphate moieties. Upon application to lyophilized tissue, we observe phosphorus signals attributable to phospholipid head groups. The proton spectra of lyophilized tissue, elicited with these techniques, are surprisingly rich and exhibit narrow features reminiscent of solution spectra. These narrow features are assigned to the hydrocarbon chains of the membrane phospholipids of the tissue. Further support for this interpretation is provided by the C-13 spectrum of these samples, whose features are completely compatible with those of lipid chains. We interpret these surprising findings to result from the fact that, at the temperature of observation (ca. room temperature), the membrane phospholipids of the tissue are in the liquid crystal state characteristic of their molecular composition. Normal functioning of the cellular membrane, as exemplified by the fluid-mosaic model, is assumed to require a high degree of dynamic mobility. That we observe such high resolution proton spectra in lyophilized tissue is indeed dramatic support for such a model. The Chemical Bio-dynamics Division of LBL has inaugurated tritium NMR spectroscopy in conjunction with the establishment of the National Tritium Labeling Facility. The potential applications of TMR to problems in structural biology and biophysics are very great. They promise to extend the molecular weight range of molecules that can be profitably studied with NMR by severalfold, and will permit the study of interactions between enzymes and bound substrates, between receptors and effectors, and between proteins and nucleic acids. This potential derives from the facts that the intrinsic sensitivity of the triton is some 7% greater than that of the proton, that there will be zero interfering background signals, and that the tritium spectrum will be sparse, arising only from those tritons at the sites specifically labeled. Importantly, the abundant protons can be decoupled from the tritons, thus reducing their contribution to resonance broadening. We are able to work at the millicurie level of activity. In a first application of TMR to a biological problem, we have observed the conversion of glucose, tritiated at the C-1 position, to lactate upon incubation with human erythrocytes. During this metabolism, additional resonances appear transiently, indicative of metabolic intermediates.

The most remarkable aspect of these spectra is the ability to observe the tritiated hydroxyl species at micromolar levels in the presence of 55 molar water without interfering background. During June of 1986 a new 300 MHz NMR spectrometer, specifically configured for optimum utility with TMR, was installed in the laboratory. Our objective is to develop a molecular model for chemical mutagenesis from in vitro and in vivo studies of replication and transcription of chemically modified DNA templates. Many carcinogenic as well as chemotherapeutic agents cause covalent linkages between complementary strands of DNA. Cross-linked DNA is a block to DNA replication which, if unrepaired, constitutes a lethal lesion to the cell. While the subject of DNA cross-link repair has been an area of intensive study, the molecular events of this process have not been well characterized. Genetic studies of E. coli have demonstrated that ABC excision nuclease, coded for by the three unlinked genes uvrA, uvrB, and uvrC, plays a crucial role in DNA crosslink repair. To study the molecular events of ABC excision nuclease mediated DNA crosslink repair, we have engineered a DNA fragment with a psoralen-DNA interstrand crosslink at a defined position, digested this substrate with the pure enzyme, and analyzed the reaction products on DNA sequencing gels. We find that the excision nuclease a) cuts only one of the two strands involved in the crosslink, b) cuts the crosslink by hydrolyzing the ninth phosphodiester bond 5′ and the third phosphodiester bond 3′ to the cross-linked furan-side thymine, and c) does not produce double-strand breaks at any significant level. We have constructed a substrate for the ABC excision nuclease from 6 oligomers which were ligated together to form a 40 bp DNA fragment containing the central 8-mer, TCGT*AGCT, in which the internal thymine is modified on the 3′ side by a psoralen derivative, 4′-hydroxymethyl-4,5′,8-trimethylpsoralen. Both pyrone and furan monoadducts have been isolated, the latter of which reacts with the thymine in the opposite DNA strand to form a crosslink. The cross-linked 40 bp DNA fragment was then purified on a denaturing polyacrylamide gel. The cross-linked DNA was terminally labeled at the 5′, the 3′, or both termini, digested with ABC excision nuclease, and the reaction products were analyzed on a DNA sequencing gel. The genome of human cells contains approximately 3 × 10^9 nucleotide pairs organized into a particular sequence. The faithful replication of this amount of information into each daughter cell is obviously a formidable task. A clue as to how the genome sequence is normally maintained in such a highly organized state during DNA replication may come from learning about the factors that destabilize the replication of the genome. Our research program centers its activity on the hypothesis that much of the control of genome replication takes place at the level of initiation of DNA replication within sections of the genome. We are interested in understanding what cellular factors regulate this initiation, how this regulation breaks down in various disease states, and how external environmental stresses can lead to aberrant initiation of DNA replication resulting in increased gene copy number. Because the human genome is so large, and the mechanisms regulating its replication are likely to be so complex, we have attempted to develop model systems which can help us understand these processes.
One approach we have taken is to study the control of oncogenes thought to be involved in regulating the commitment of cells to DNA synthesis. The second line of work addresses the aberrant initiation of DNA synthesis that results during gene amplification, and finally we have been investigating the effect of chemical carcinogens on DNA replication and mutation. We are investigating the involvement of protooncogene sequences in the regulation of initiation of DNA synthesis in cells growing in culture. Our hypothesis is that these sequences code for components the cells need to traverse the cell cycle and initiate DNA synthesis. We have focused our attention on a member of the myc family of oncogenes, N-myc. The N-myc oncogene is amplified and/or expressed at a high level in many cell lines derived from neuronal tumors; non-neuronal cells apparently do not express this gene.

We represent this as the Lifecycle Performance Assurance Framework

The inoculation treatments were a control, indigenous mycorrhiza, G. mosseae, G. etunicatum, G. intraradices, G. caledonium, G. fasciculatum, and a mix of these species. The seedlings were grown in a greenhouse for 32 days before being transferred to the main field plots. The experimental plots were randomized, with three replicates. Each crop species was tested in a separate experiment. Seedling survival, yield, and nutrient uptake were measured. Fruits were collected several times, and leaf and root samples were analyzed for nutrient content at flowering. Roots were stained and examined for the presence and degree of mycorrhizal infection according to Giovannetti and Mosse. This document provides best practice guidance and energy efficiency recommendations for the design, construction, and operation of high-performance office buildings in India. Through a discussion of learnings from exemplary projects and inputs from experts, it provides recommendations that can potentially help achieve enhanced working environments, economic construction/faster payback, reduced operating costs, and reduced greenhouse gas emissions. It also provides ambitious energy performance benchmarks, both as adopted targets during building modeling and during measurement and verification. These benchmarks have been derived from a set of representative best-in-class office buildings in India. The best practice strategies presented in this guide would ideally help in delivering high performance in terms of a triad of energy efficiency, cost efficiency, and occupant comfort and well-being. These best practice strategies and metrics should be normalized, that is, corrected to account for building characteristics, diversity of operations, weather, and materials and construction methods. Best practices should start by using early design principles at the whole-building level.

Optimal energy efficiency can be achieved through an integrated design process, with stakeholder buy-in from the beginning, at the conceptual design phase. Early in the project, the focus of the stakeholder group should be on maximizing the energy efficiency of the building as a whole, and not just on the efficiency of an individual building component or system. Through multi-disciplinary interactions, the design team should explore synergies between systems, such as mutually reinforcing strategies, or sweet spots between conflicting strategies. Buildings are most energy efficient when designers and operators ensure that systems throughout the building are both efficient themselves and work efficiently together. Systems integration and operational monitoring at the whole-building level can help push the envelope for building energy efficiency and performance to unprecedented levels. Whole-building systems integration throughout the building's design, construction, and operation can assure high performance, both in terms of energy efficiency and comfort/service levels. A Lifecycle Performance Assurance Framework emphasizes the critical integration between the building's physical systems and the building information technologies. The building physical systems include the envelope, HVAC, plug, lighting, and comfort technology systems, whereas the building information technologies provide information on the design and functioning of the building physical systems. This can be done as follows: first, by performing building energy simulation and modeling at the design phase, one can estimate the building's energy performance and code compliance; second, by integrating controls and sensors for communications, one can track real-time performance at the building phase, relative to the original design intent; and third, by conducting monitoring-based commissioning and benchmarking during operations, one can ascertain building performance compared to peers and provide feedback loops. The next step should be assessing best practices at the systems and components level along four intersecting building physical systems: mechanical systems for heating, ventilation, and air conditioning (HVAC), plug loads, lighting, and envelope/passive systems. The qualitative best practices described in this guide offer opportunities for building designers, owners, and operators to improve energy efficiency in commercial office buildings.

Although the practices are presented individually, they should not be thought of as an “a la carte” menu of options. Rather, building systems must be integrated to realize the maximum energy and cost benefits. Also, designers and engineers, and developers and tenants, need to work together to capitalize on the synergies between systems. Last but not least, this guide provides tangible quantitative best performance metrics, ready to be adopted by buildings in India. These metrics are concrete targets that stakeholder groups can work together to enable by providing localized and customized solutions for each building, class, and occupant group. Having targets early in the design process also translates to more efficient design lead times. The potential benefits of adopting these metrics include efficient operations, first-cost and life cycle cost efficiencies, and occupant comfort and well-being. The best practice strategies, if used thoughtfully, provide an approach towards enabling office buildings that deliver, throughout their entire life cycle, a flexible optimization of energy consumption, productivity, safety, comfort, and healthfulness. The adoption of the qualitative and quantitative goals would provide an impetus to scale up and to drive market transformation toward energy-efficient processes, resources, and products, in addition to generating positive outcomes for global warming and societal benefits. Buildings in India were traditionally built with high thermal mass and used natural ventilation as their principal ventilation and cooling strategy. However, contemporary office buildings are energy-intensive, increasingly being designed as aluminum and glass mid- to high-rise towers. Their construction uses resource-intensive materials, and their processes and operations require a high level of fossil fuel use. A large share of existing and upcoming Indian office space caters to high occupant density and multiple-shift operations. Whereas the average for U.S. government offices is 20 m2/occupant and for U.S. private-sector offices is 30 m2/occupant, Indian offices have a typical density of 5–10 m2/occupant. Business process outsourcing office spaces have three-shift “hot seats,” a situation that, while conserving space because of its multiple usage, also leads to considerably higher EPI levels.

Moreover, with the increased demand for commercial office space from multinationals and IT hubs, and the current privileges being accorded to Special Economic Zones, the trend is toward larger buildings with international standards of conditioned spaces, dramatically increasing the energy footprint of Indian offices. Building energy consumption in India has increased from 14% of total energy consumption in the 1970s to nearly 33% in 2004-2005. The gross built-up area added to commercial and residential spaces was about 40.8 million square meters in 2004-05, which is about 1% of the annual average constructed floor area around the world, and the trends show sustained growth of 10% over the coming years, highlighting the pace at which energy demand in the building sector is expected to rise in India. In 2004–2005, the total commercial stock floor space was ~516 million m2 and the average EPI across the entire commercial building stock was ~61 kWh/m2/year. Compare this to just five years later, in 2010, when the total commercial stock floor space was ~660 million m2 and the average EPI across the entire commercial building stock had more than tripled to 202 kWh/m2/year. Energy use in the commercial sector is indeed exploding, not just due to the burgeoning of the Indian commercial sector (India is expected to triple its building stock by 2030), but also through the increase in service-level requirements and the intensity of energy use. Thus there are two intertwined effects: an increase in total building area and an increase in the EPI. According to India's Bureau of Energy Efficiency, electricity consumption in the commercial sector is rising at double the average electricity growth rate of 5%–6% in the economy. To deliver a sustained rate of 8% to 9% through 2031-32 and to meet the lifetime energy needs of all citizens, India would need to increase its primary energy supply by 3 to 4 times and its electricity generation capacity by about 6 times. According to UNEP, approximately 80%–90% of the energy a building uses during its entire life cycle is consumed for heating, cooling, lighting, and other appliances. The remaining 10%–20% is consumed during the construction, material manufacturing, and demolition phases. To manage and conserve the nation's energy, it is imperative to aggressively manage building energy efficiency in each commercial building being designed and operated in India. By increasing energy efficiency in buildings and in other sectors such as agriculture, transportation, and appliances, it is estimated that total Indian power demand can be reduced by as much as 25% by 2030.
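A back-of-envelope multiplication of the stock and intensity figures quoted above makes the compounding of the two effects explicit; the result is indicative only, since the underlying estimates carry large uncertainties.

```python
# Floor area (million m2) x EPI (kWh/m2/yr) -> total commercial use (TWh/yr),
# using the approximate figures quoted in the text.
area_2005, epi_2005 = 516, 61
area_2010, epi_2010 = 660, 202

use_2005 = area_2005 * epi_2005 / 1000   # ~31 TWh/yr
use_2010 = area_2010 * epi_2010 / 1000   # ~133 TWh/yr
print(f"{use_2005:.0f} -> {use_2010:.0f} TWh/yr, a {use_2010 / use_2005:.1f}x increase in five years")
```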

To this end, the best practices outlined below identify processes and strategies to boost energy efficiency in buildings, while also focusing on cost efficiency and occupant comfort. Just as no two buildings are identical, no two owners will undertake the same energy management program. It is also impractical to include all the listed best practices in one building, since some of them conflict with each other. The practices are presented individually; however, they should not be thought of as an “a la carte” menu of options. Rather, designers and engineers, developers, and tenants need to work together to capitalize on the synergies between systems. From the demand side, this means implementing a suite of measures that reduce internal loads as well as external heat gains. Once the demand load is reduced, improve systems efficiency. Finally, improve plant design. This is illustrated through the Best Practice strategies and Data Points in this guide. The supply side can then add value through the provision of renewables, waste heat sources, and other measures that are beyond this guide's scope. Whole-building systems integration throughout the building's design, construction, and operation can potentially assure high performance, both in terms of energy efficiency and comfort/service levels. This Lifecycle Performance Assurance Framework was conceptualized by Lawrence Berkeley National Laboratory, USA, and the Center for Environmental Planning and Technology, India, through U.S. and Indian stakeholder engagements during the U.S.-India Joint Center for Building Energy Research and Development proposal to the U.S. Department of Energy and the Government of India, 2011. At each stage of the life cycle, it is critical to ensure integration between the building's physical systems and the building information technologies. The building physical systems include the envelope, HVAC, plug, lighting, and comfort technology systems, whereas the building information technologies provide information on the design and functioning of the building physical systems. First, by performing building energy simulation and modeling at the design phase, one can estimate the building's energy performance and code compliance. This is especially relevant for certain energy conservation measures that may not be immediately attractive, but may become so through further analysis. Second, by building in controls and sensors for communications, one can track real-time performance at the building phase, relative to the original design intent. Third, by conducting monitoring-based commissioning and benchmarking during operations, one can ascertain building performance, compare it to peer buildings, and provide feedback loops. Thus the use of building IT creates metrics at all three stages of the life cycle to help predict, commission, and measure the performance of the building and its systems and components. To design and operate an energy-efficient building, focus on the energy performance based on modeled or monitored data, analyze which end uses are causing the largest consumption/waste, and apply a whole-building process to tackle the waste. For instance, peak demand in high-end commercial buildings is typically dominated by energy for air conditioning. However, for IT operations, the consumption pattern is different: cooling and equipment plug loads are almost equally dominant.
The equipment plug load is composed mostly of the uninterrupted power supply load from IT services and computers, with a smaller load from raw power for elevators and miscellaneous equipment. Figure 8 shows typical energy consumption end-use breakdowns; energy conservation measures need to specifically target these end uses. By doing so, one can tap into a huge potential for financial savings through strategic energy management. However, a utility bill does not provide enough information to mine this potential: metering and monitoring at an end-use level are necessary to understand and interpret the data at the necessary level of granularity. Energy represents 30% of operating expenses in a typical office building; this is the single largest and most manageable operating expense in offices. As a data point, in the United States, a 30% reduction in energy consumption can lower operating costs by $25,000 per year for every 5,000 square meters of office space. Another study of a national sample of U.S. buildings revealed that buildings with a “green rating” command on average a 3% higher rent and a 16% higher selling price.

Using the equation-based modeling paradigm leads to multiple advantages

However, the various uses of water are managed through separate processes, and management objectives for one use can result in sub-optimal practices for another, a problem that will be exacerbated by predictions of greater year-to-year climate variability. Without a coordinated analysis capability, the ability to predict the effectiveness of climate mitigation or adaptation measures, or to set the value of water and energy, is severely limited. In this LDRD, we will develop a computational tool and analysis framework for linked climate-water-energy co-simulation. The LDRD's resulting research will lay the foundation for an overall regional-scale integrated assessment capability. We will develop analysis tools and software to estimate the cost of consuming water to produce energy, and the cost of consuming energy to produce water, at regional spatial scales and at decade and multi-decade temporal scales; develop analytical tools to specify the performance requirements of climate models for the aforementioned water-energy capability; develop uncertainty analysis algorithms to map the trade space between model unknowns; and demonstrate the resulting tools and software by analyzing the effects of climate uncertainty on water-energy management for the American River basin and the Sacramento urban region of California. Direct chemical imaging of elemental content and impurities with extreme spatial and depth resolution and specificity is required to understand, predict, and minimize processes that adversely affect the macro-scale properties of solar and other energy systems. A fundamental lack of key analytical techniques capable of providing this information leaves a pressing need for the development of next-generation nanoscale chemical imaging tools.

The objective of this project is to develop a novel ultrafast laser spectroscopy technique, based on a scheme with two near-field nanoprobes, which will overcome current limitations and meet the requirements of a versatile chemical imaging system for detecting and chemically mapping defects in solar energy systems and other energy materials. This project aims to develop a sensitive femtosecond laser chemical imaging system in which both material excitation and signal detection occur in the optical near-field vicinity. This chemical imaging system will enable a fundamental understanding of the properties and functionality of new solar material systems at spatio-temporal scales that were previously unattainable. In the second year of the project, both ultraviolet and visible femtosecond laser pulses were coupled to the near-field excitation probe to obtain chemical signatures of different material systems, including nanoparticles, crystalline materials, and amorphous materials. We demonstrated near-field visible-range fluorescence originating from ultraviolet femtosecond laser excitation in the optical near-field. Second-order diffraction was also observed in the same spectral range, enabling simultaneous femtosecond Rayleigh and femtosecond laser-induced fluorescence signal detection in the near-field vicinity with the dual-probe near-field system. We further optimized the near-field excitation and detection processes as a way to improve sensitivity and resolution, and compared the signals from near-field excitation/far-field detection to near-field excitation/near-field detection signals from the same material system. Significant improvements in the signal-to-noise ratio were observed in the near-field/near-field configuration, despite the significantly smaller size of the excited surface area. Finally, the potential of generating surface plasmon polaritons from a “femtosecond-laser point source” was explored in the near-field/near-field configuration at a Au/glass interface, and the signal intensity was studied as a function of inter-probe distance using visible femtosecond laser irradiation.

These results underline the importance of detecting near-field signals in the near-field vicinity as a way to achieve high-sensitivity, high-resolution chemical imaging at small spatio-temporal scales. The purpose of this research is to build, and apply to test problems, a computational platform for the design, retrofit, and operation of urban energy grids that include electrical systems, district heating and cooling systems, and centralized and distributed energy storage. The need for this research arises because integration of renewable energy beyond 30% poses dynamic challenges for the generation, storage, and transmission of energy that are not well understood. Such a platform is also needed to assess the economic benefits of integrating co-generation plants that provide combined heating, cooling, and power at the district level in order to decrease the carbon footprint of energy generation. To address this need, this project will create a flexible computational R&D platform that allows expanding energy and policy analysis from buildings to district energy systems. Questions that this platform makes it possible to address include where to place energy generation and storage, how to set the price structure, how to trade off incentives for energy efficiency versus incentives to add generation or storage capacity at buildings, how to integrate waste heat utilization to reduce the carbon footprint of district energy systems, and how to upgrade the electricity grid to integrate an increasing fraction of renewable energy while ensuring grid reliability and power quality. Significant accomplishments have been made in the development of multi-physics models that describe the interaction between buildings and the electrical grid. Regarding multi-physics modeling, we completed the development of more than fifty models for analyzing building-to-electrical-grid integration. The models are now part of the Modelica Buildings library, an equation-based, object-oriented library for modeling dynamic building energy systems.
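The flavor of such building-to-grid models can be conveyed with a deliberately simplified example: a single-zone resistance-capacitance thermal model with an ideal cooling controller, a PV generation profile, and the resulting net exchange with the grid. The sketch below is written in Python rather than Modelica so that it is self-contained, and every parameter (envelope resistance, zone capacitance, COP, weather, and PV profile) is hypothetical; it is not a model taken from the Modelica Buildings library.

```python
# Toy building-to-grid sketch: single-zone RC thermal model, ideal cooling to
# a setpoint, a PV output profile, and the resulting net grid exchange.
# All parameters are hypothetical and chosen only for illustration.
import numpy as np

dt = 3600.0                                               # s, one-hour steps
hours = np.arange(24)
t_out = 30 + 6 * np.sin(2 * np.pi * (hours - 15) / 24)    # degC, outdoor temperature
pv = np.clip(5 * np.sin(np.pi * (hours - 6) / 12), 0, None)  # kW, PV output

R = 0.005     # K/W, envelope thermal resistance (assumed)
C = 2.0e7     # J/K, zone thermal capacitance (assumed)
t_set = 24.0  # degC, cooling setpoint
cop = 3.0     # chiller coefficient of performance (assumed)

t_zone = 26.0
net_grid = []
for h in hours:
    q_gain = (t_out[h] - t_zone) / R                          # W, conduction gain
    q_cool = max(q_gain + C * (t_zone - t_set) / dt, 0.0)     # W, ideal cooling to setpoint
    t_zone += dt * (q_gain - q_cool) / C                      # zone temperature update
    p_elec = q_cool / cop / 1000.0                            # kW, HVAC electric demand
    net_grid.append(p_elec - pv[h])                           # kW, + import / - export

print(f"peak import: {max(net_grid):.1f} kW, peak export: {-min(net_grid):.1f} kW")
```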

The models can represent DC and AC systems under different assumptions, such as quasi-stationary or dynamic-phasor representations. The electrical models can be connected to thermal models of buildings in order to evaluate the impact of electrical and thermal storage, as well as of building controls, on the distribution grid. The models have been validated against standard IEEE procedures defined for testing the correctness of electrical network simulation software. The models, the results of the validation, and a few examples showing the ability to perform building-to-grid simulation studies were presented at the 2014 BauSIM conference in Aachen. The paper, titled “A Modelica package for building-to-electrical grid integration,” won the best paper award. Equation-based languages of this kind allow components of cyber-physical systems that advance in time based on continuous-time dynamics, discrete-time dynamics, or event-driven dynamics to be connected graphically, in order to study building-to-grid integration. These languages also allow access to the mathematical structure of the entire model. Such information has been used for co-simulation and for solving optimal control problems. For example, we demonstrated how simulation models can be reused to solve optimal control problems by means of computer algebra and numerical methods. The problem investigated was to determine the optimal charge profile of a battery in a small district with multiple buildings and photovoltaic systems that minimizes energy subject to voltage constraints. The increasing availability of complete genomic sequences and whole-genome analysis tools has moved the construction of industrial hosts towards rational design by metabolic engineering and systems biology. The current genetic manipulation toolkits available for industrial hosts, however, are desperately sparse and unpolished in comparison to the array of tools available for E. coli. The goal of this project is to develop a high-throughput genome editing tool to facilitate the engineering of novel applications not only in E. coli, but also in underexploited industrial producers such as Streptomyces coelicolor and Corynebacterium glutamicum. The original goal of this proposal was to create a secure industrial bacterium by converting all 484 TGA termination codons to TAA in the C. glutamicum genome and then reassigning TGA to encode an unnatural amino acid. In our phase I work, we discovered that the recombineering approach alone could not achieve the frequency of allelic replacement needed to complete codon depletion in a reasonable time frame. We concluded that a more efficient genome editing tool would be needed for this project. Recent work on the Clustered Regularly Interspaced Short Palindromic Repeat (CRISPR) adaptive immune system of prokaryotes has led to the identification of a DNA endonuclease called Cas9 whose target sequence specificity is programmed by small spacer RNAs in the CRISPR loci. By editing spacer sequences we can direct Cas9 to cut endogenous DNA targets, thereby forcing cells to repair themselves in a predictably mutagenic manner. Such Cas9-mediated cleavage in vivo is more efficient, effective, and potentially multiplexable than any other tools available for genomic engineering. Our most significant accomplishment has been to develop a reproducible and efficient protocol for engineering E. coli DNA in vivo. Our method uses the Streptococcus pyogenes CRISPR-Cas9 system in combination with λRed recombineering proteins in E. coli.
We have created a mobile plasmid with both Cas9 and λRed activities and used it successfully to perform genome editing in all of the E. coli strains in hand. This protocol has been used to modify gene loci in living E. coli cells within a three-week time frame. The developed Cas9 toolkit and protocol have already been used in several bio-energy research projects.
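
The targeting rule such S. pyogenes Cas9 constructs rely on, a 20-nt protospacer immediately 5' of an NGG PAM, can be illustrated with a minimal sketch. This is not the project's toolkit; it is only a generic enumeration of candidate spacers in a target region, and the example sequence is made up.

```python
import re

# Lookahead so overlapping sites are all reported: 20-nt protospacer + NGG PAM.
SPACER_RE = re.compile(r"(?=([ACGT]{20})([ACGT]GG))")

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def candidate_spacers(region):
    """Enumerate candidate SpCas9 spacers on both strands of a target region."""
    region = region.upper()
    hits = []
    for strand, seq in (("+", region), ("-", revcomp(region))):
        for m in SPACER_RE.finditer(seq):
            spacer, pam = m.group(1), m.group(2)
            gc = (spacer.count("G") + spacer.count("C")) / 20.0
            hits.append({"strand": strand, "offset": m.start(),
                         "spacer": spacer, "pam": pam, "gc": gc})
    return hits

# Example with a made-up sequence; real use would scan the locus to be edited.
for hit in candidate_spacers("ATGGCGTACGTTACCGGATCACGTGCAGGTTTACCGGCATGCAAGCTTGG"):
    print(hit)
```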

We have also received requests for the toolkit and protocol and have begun disseminating them to the general scientific community. In addition, we have developed informatics tools to aid in the design of CRISPR spacer constructs for a targeted range of genomic sequences; this tool will be useful for designing Cas9 genome editing at scale. As we had predicted, our approach provides a significantly faster turnaround time for modifying genetic codes than any available tool. We are hopeful that this method will be generally applicable to non-E. coli hosts, which will greatly aid our future goal of modifying the genetic codes of industrial microbes.

The purpose of this project is to develop sensitive and selective biosensors for a diverse set of target chemicals as a way to provide a high-throughput functional screening method for molecule production in microbial cells. Advances in DNA synthesis and combinatorial DNA assembly allow the construction of thousands of pathway variants by varying both the gene content and the expression levels of the pathway components, a technique commonly referred to as pathway refactoring. However, a lack of sufficiently sensitive, selective, and scalable technologies to measure chemical production presents a major bottleneck that limits our ability to fully exploit large-scale synthesis efforts. We will develop and deploy novel biosensor systems based on both protein and RNA molecules that have previously been shown to respond to the presence of small-molecule ligands. In the case of protein-based sensors, we will use synthetic biology approaches to modify the ligand specificity of a known transcription factor (TF). We will screen for ligand-dependent TF function by placing TF binding sites in front of GFP, such that GFP activation should only be observed in the presence of a ligand. We will test the affinity and response of the TF mutant library to a variety of relevant ligands through several rounds of selection using fluorescence-activated cell sorting. Samples collected after each round of selection will be sequenced using next-generation sequencing methods, and we will seek to understand the relationship between TF ligand affinity and sequence evolution, as this will facilitate more rational engineering approaches. In the case of the nucleotide sensor, we will develop a system in which cell survival is linked to ligand production by coupling the switch to a chemical selection system used during cell growth. We will then deploy this system to screen a library of 20,000 pathway variants to select and further characterize E. coli strains that produce the target molecule at high levels. Selected strains will be sequenced, and we will use modeling approaches to identify the key variables and bottlenecks associated with molecule production.

Over the course of this LDRD funding, we have successfully developed proof of principle for an end-to-end system to screen for gene regulatory sequences in an unbiased manner. This work has been published in Nature Methods, and an additional small project resulting from this work has been reported in Biology Open. Briefly, we have shown that we can clone hundreds to thousands of random sequences into a precise location in the mouse genome that is linked to a reporter gene, which is activated when a sequence behaves as an enhancer. The targeted cells can be flow sorted to isolate those cells that are actively expressing the reporter gene, and the sequences responsible for this reporter expression can be identified through DNA sequencing.
To date, we have used this method to test the embryonic stem cell enhancer activity of more than 0.5 megabases of mouse or human genomic sequence in 1-kilobase increments. To apply this method to a broader range of cell types, a major aim of this proposal, we have coupled the ES cell reporter assays we developed with in vitro differentiation and have shown that we can accurately identify enhancers active in cardiac and neuronal cell populations.
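
Identifying which cloned fragments drive reporter expression ultimately comes down to comparing read counts between the sorted, reporter-positive population and the input library. The sketch below shows one generic way to compute such an enrichment score; the file names, column layout and pseudocount are hypothetical and are not taken from the published pipeline.

```python
import csv
import math

PSEUDOCOUNT = 0.5  # avoids division by zero for fragments absent from one pool

def load_counts(path):
    """Read 'fragment_id,reads' rows into a dict (hypothetical file format)."""
    with open(path, newline="") as fh:
        return {row[0]: int(row[1]) for row in csv.reader(fh)}

def enrichment(input_counts, sorted_counts):
    """log2 enrichment of each fragment in sorted cells relative to the input."""
    n_in = sum(input_counts.values())
    n_out = sum(sorted_counts.values())
    scores = {}
    for frag in set(input_counts) | set(sorted_counts):
        f_in = (input_counts.get(frag, 0) + PSEUDOCOUNT) / n_in
        f_out = (sorted_counts.get(frag, 0) + PSEUDOCOUNT) / n_out
        scores[frag] = math.log2(f_out / f_in)
    return scores

# Example usage with hypothetical count tables:
# scores = enrichment(load_counts("input_library.csv"),
#                     load_counts("reporter_positive.csv"))
# candidates = [f for f, s in sorted(scores.items(), key=lambda x: -x[1]) if s > 1]
```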

Understanding and controlling these processes remains a fundamental science challenge.

Deposition of multilayer coatings on sawtooth substrates will enable a new kind of x-ray grating, the multilayer-coated blazed grating, which will be the basis for a new generation of high-resolution, high-throughput x-ray instrumentation.

The flow of energy and electric charge in molecules is central to both natural and synthetic molecular systems that convert sunlight into fuels, and it evolves over a multitude of timescales. We address this challenge by probing chemically complex systems in the gas phase, combining the precise time information of ultrafast spectroscopy techniques with the chemical sensitivity characteristic of synchrotron radiation. An ultrafast pulse pair combined with VUV or soft x-ray photons from the synchrotron is used to make measurements with atomic-site specificity. With access to photons spanning the terahertz to hard x-ray range provided by a synchrotron, coupled with the rich spectroscopy available in the UV-VIS-IR region provided by table-top ultrafast lasers, a multi-dimensional tool to probe dynamics is enabled. We have developed a portable transient absorption apparatus to perform time-resolved analysis of two-color laser excitation schemes applicable to a variety of gaseous systems. This setup is currently deployed at the soft x-ray Beamline 6.0.2 at the Advanced Light Source, where we are interrogating the excited-state spectroscopy and dynamics of nitrophenols: one ultrafast pulse excites o-nitrophenol while a second ultrafast infrared pulse promotes the system to a nearby vibronic state after a suitable time delay. The transmitted IR light is detected by a photodiode and a high-sensitivity photon spectrometer to determine the absorption as a function of IR wavelength and time delay.
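
Reducing such pump-probe data typically amounts to forming a differential absorbance map from probe spectra recorded with and without the excitation pulse at each delay. The short sketch below shows this standard reduction step; the array shapes and variable names are hypothetical and do not represent the beamline's actual acquisition software.

```python
import numpy as np

def delta_absorbance(i_pump_on, i_pump_off, floor=1e-12):
    """Differential absorbance dA(delay, wavelength) from transmitted probe intensities.

    i_pump_on, i_pump_off: arrays of shape (n_delays, n_wavelengths) holding the
    transmitted probe intensity with the excitation pulse present / blocked.
    """
    on = np.clip(np.asarray(i_pump_on, float), floor, None)
    off = np.clip(np.asarray(i_pump_off, float), floor, None)
    return -np.log10(on / off)  # positive dA = increased absorption after excitation

# Hypothetical example: 200 delay steps, 512 spectrometer pixels.
rng = np.random.default_rng(1)
i_off = rng.normal(1.0, 0.01, size=(200, 512))
i_on = i_off * (1.0 - 0.02 * rng.random((200, 512)))  # weak pump-induced absorption
dA = delta_absorbance(i_on, i_off)
print(dA.shape, dA.mean())
```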

These experiments will be performed in parallel with laser-synchrotron experiments as a complementary diagnostic tool, allowing the precise control of the electronic states in model chromophores that is crucial for developing ultrafast laser-synchrotron multicolor spectroscopy. We have measured ion momentum images of o-nitrophenol following photoexcitation and photoionization from its electronic ground state by soft x-rays tuned near the core-level resonances of oxygen and nitrogen, at Beamline 6.0.2 at the Advanced Light Source. Ultraviolet pulses, produced as the third harmonic of a Ti:sapphire laser system that is synchronized to the ALS storage ring and the 4 kHz repetition rate of the soft x-ray beamline, were also employed in these experiments in an effort to measure the products of laser photodissociation by core-level ion momentum spectroscopy. Our subsequent improvements to the reliability of the laser systems have increased the laser pulse energies from a few hundred nanojoules to above 10 microjoules for each of the UV and IR laser beams that will be used for the three-color experiments. With all the hardware and staff in place, experiments are underway to probe the dynamics of evolving excited states in gas-phase systems.

In parallel, we have developed a dual-catalyst system to homologate alpha-olefins to tertiary amines by sequential hydroformylation and reductive amination. Hydroformylation occurs in the organic phase of the reaction medium and is catalyzed by the combination of Rh2 and BISBI, a ligand developed by Eastman Kodak for hydroformylation with high selectivity for linear aldehydes. The aldehyde intermediate condenses with secondary amine reagents to form an iminium ion, which reacts with a metal hydride to afford the tertiary amine product. Reductive amination occurs in the aqueous phase of the reaction medium and is catalyzed by the combination of Cp*Ir3 and a water-soluble diphosphine ligand. Finally, we have prepared artificial enzymes by two methods. In the first, we prepared noble metal-porphyrin active sites in myoglobin. Based on prior reconstitution of myoglobin with both abiotic protoporphyrins and [M]-salen complexes, we incorporated new Ir-, Rh-, Co-, and Ru-based cofactors into myoglobin mutants in which the axial ligand and secondary coordination sphere are varied.

In the past year, we developed a new, highly efficient method for generating artificially metallated myoglobins based on the direct expression and purification of apo-myoglobins. Using these new myoglobin-based catalysts, we have shown for the first time that an artificially metallated PPIX-binding protein can catalyze organic reactions that cannot be catalyzed by the same protein binding its native Fe-PPIX cofactor. In particular, Ir-PPIX-myo catalyzes cyclopropanation of internal olefins and carbene insertion into C-H bonds, while Co-salen-myo catalyzes intramolecular hydroamination of unbiased substrates. In the second approach, we developed artificial metalloenzymes for transformations for which there are no known metal catalysts. We are doing so by a bottom-up approach in which we identify, by high-throughput screening of unrestricted metal-ligand combinations, a model reaction using reagents and conditions compatible with proteins. We then conjugate this catalytic site into a protein host, using covalent or non-covalent interactions; the catalytic properties of the conjugates are then evaluated, and the activity of the enzyme is fine-tuned by modification of the ligand used. Following this proposed methodology, we identified a metalloenzyme for regioselective halogenation of aromatic substrates: a cobalt cofactor covalently bound to nitrobindin catalyzes the halogenation of a simple, water-soluble arene.

There are two synergistic purposes to this project. The first objective is to improve our ability to understand the physical factors that are responsible for intermolecular interactions. Electronic structure calculations are nowadays capable of calculating intermolecular interactions nearly as accurately as they can be measured. However, such calculations by themselves do not provide any understanding of why the interactions have the magnitudes that they do. Methods for this purpose are called energy decomposition analyses (EDAs). It is an important open challenge to design improved EDAs, a problem that is best attacked by deepening our understanding of the factors controlling intermolecular interactions. The second objective of the project is to develop new, more efficient numerical methods for solving the equations of electronic structure theory for molecular clusters.
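
For orientation, most EDA schemes express the total intermolecular interaction energy as a sum of physically interpretable terms. The partition below is one common form and is illustrative only, since the precise definitions of the individual terms differ between specific EDA methods:

\[
\Delta E_{\mathrm{int}} = \Delta E_{\mathrm{elst}} + \Delta E_{\mathrm{Pauli}} + \Delta E_{\mathrm{pol}} + \Delta E_{\mathrm{CT}} + \Delta E_{\mathrm{disp}}
\]

Here the terms denote, respectively, permanent electrostatics, Pauli (exchange) repulsion, polarization, charge transfer, and dispersion.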

There should be natural connections between new EDA tools and the problem of computing those interactions more efficiently than has hitherto been possible. We believe the combination of improved EDAs for analysis together with lower-scaling algorithms for calculating the interactions will be a potentially significant step forward in quantum chemistry. The electron-electron correlation energy is negative, and attractive dispersion interactions are entirely a correlation effect, so the contribution of correlation to intermolecular binding is commonly assumed to be negative, or binding in nature. However, we have discovered that there are many cases where the long-range correlation binding energy is positive, and therefore antibinding, with certain geometries of the water dimer as a prominent example. We have also uncovered the origin of this effect, which is the systematic overestimation of dipole moments by mean-field theory, leading to reduced electrostatic attraction upon inclusion of correlation. Thus, EDAs that include correlation but do not correct mean-field electrostatics are sub-optimal, especially those that describe all of the correlation energy as dispersion. This result has major implications for the correct design of new EDAs, which we look forward to taking up in future post-LDRD work. Our second major activity has been exploring new ways of using the natural separation of energy scales between intramolecular and intermolecular interactions to improve the efficiency of electronic structure theory calculations. Specifically, we have explored whether coupled cluster (CC) calculations can be accurately approximated from a starting point where the CC calculation is performed on only the intramolecular excitations, or on the intramolecular plus dispersive intermolecular excitations. The remaining contributions are then evaluated approximately by perturbation theory (PT). The question is whether this approach can improve the often-questionable accuracy of PT without the prohibitive computational cost of a full CC calculation on a molecular cluster. Our results indicate that PT based on the linear model does not significantly improve upon direct use of PT, while the quadratic model does yield significant gains in accuracy. Work is presently underway to explore whether this result can be improved by using orbitals relaxed in the cluster environment, and how to obtain such orbitals more efficiently than by brute-force solution as if the cluster were a supermolecule.

The purpose of this project is to develop a powerful theoretical framework capable of discovering general design rules, based on nanoscale properties of molecule shape and size, charge distributions, ionic strength, and concentration, to influence the mechanism, percolation, morphology, and rates of assembly over mesoscale time and length scales. The ability to control the structure and dynamics of the assembly process is a fundamental problem that, if solved, will broadly impact basic energy science efforts in nanoscale patterning over mesoscale assemblies of block copolymer materials, polyelectrolyte organization at solid or liquid interfaces, forces governing multi-phasic soft colloids, and growth of quantum dots in polydisperse colloidal media. Fundamental design rules applied to complex and heterogeneous materials are important to DOE mission science and will enable next-generation fuel cells, photovoltaics, and light-emitting device technologies.
At present, designing and controlling complex catalytic activity by coupling simpler modular systems into a network that executes novel reactive outcomes is an unsolved problem. And yet, highly complex catalytic processes in nature are organized as networks of proteins or nucleic acids that exploit spatial proximity, feedback loops, and dynamical congruence of reaction events to optimize and fine-tune targeted biochemical functions.

The primary intellectual activity of bio-mimetic scaffolding, the design of spatial organizations of modular bio-catalysts, is to restore their catalytic power in these new chemical organizations after they have lost the catalytic functions optimized in their original biological context. That is our goal. Some inspiration for our approach to catalytic network design is derived from another highly successful bio-mimetic approach, laboratory directed evolution (LDE), an experimental strategy based on the principle of natural selection. The goal is to alter a protein through multiple rounds of mutagenesis and selection to isolate the few new sequences that exhibit enhanced catalytic performance, selectivity, or protein stability, or to develop new functional properties not found in nature in the creation of new bio-catalysts. Given the limitations of our understanding of the structure-function relationship, LDE provides an attractive alternative to rational design approaches and is highly flexible in its application to different bio-catalysis reactions. However, there are still outstanding problems when transferring LDE into new optimization strategies for new bio-catalysts. First, the finite size and composition of LDE libraries may be limiting for the optimization of enzymes that act on, for example, solid substrates, and there has been little effort devoted to developing LDE libraries for optimizing bio-catalytic activity in the context of chemical networks. Furthermore, although often highly successful, LDE is an opaque process because it offers no rationale as to why the mutations were successful, and it therefore stands outside our ability to systematically reach novel catalysis outcomes. This proposal is a theoretical study that offers new rational design strategies for building an artificial chemical network of bio-catalytic reactions that executes complex but non-biological catalytic functions using computational directed evolution (CDE). Traditionally, enzyme optimization has focused on the energetics of active-site organization, but correspondingly little effort has been directed toward optimizing the entropic or dynamical effects that are equally relevant for improvements in catalytic activity. We therefore propose a new CDE design strategy that considers not only energetics but also novel physical and theoretical concepts.

Recent studies report evidence that some organic aerosols might exist in the atmosphere not as well-mixed liquids (the traditional description, and their general state when they are formed) but rather as highly viscous, glassy materials with extremely slow internal reaction-diffusion times and low evaporation rates. These observations suggest that the characteristics of organic aerosols currently used in regional and global climate models are fundamentally incorrect: viscosity affects reactivity, and indeed the models consistently under-predict the quantity of aerosol in the atmosphere by factors of 5 to 10. We are addressing this gap by developing a quantitative and predictive description of how initially liquid aerosols are transformed into glassy ones, in particular by gas-phase oxidizers. Reaction-diffusion models that are chemically accurate and fully validated by experimental data have not previously been used in this field, and they hold promise for improving parameters for atmospheric models.
Model simulations are performed using stochastic methods, which are well suited to the large dynamic range of conditions involved and which capture the fluctuations and rare events that are key to liquid-solid transitions.
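
As a minimal illustration of the kind of stochastic kinetics involved (not the project's actual model), the sketch below runs a Gillespie-style simulation of a toy two-layer particle in which an organic species is oxidized at the surface and exchanges with the bulk; all species, counts, and rate constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-compartment model: oxidant reacts with organic A at the particle
# surface (A_s -> P_s), and A exchanges between surface and bulk layers.
k_rxn, k_in, k_out = 1e-3, 0.05, 0.02   # reaction, bulk->surface, surface->bulk rates
state = {"A_s": 200, "A_b": 2000, "P_s": 0}
n_oxidant = 50                            # assume a fixed near-surface oxidant population

def propensities(s):
    return np.array([
        k_rxn * s["A_s"] * n_oxidant,     # surface oxidation
        k_in * s["A_b"],                  # bulk -> surface transport step
        k_out * s["A_s"],                 # surface -> bulk transport step
    ])

t, t_end, traj = 0.0, 50.0, []
while t < t_end:
    a = propensities(state)
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)        # waiting time to the next event
    event = rng.choice(3, p=a / a0)       # which event fires
    if event == 0:
        state["A_s"] -= 1; state["P_s"] += 1
    elif event == 1:
        state["A_b"] -= 1; state["A_s"] += 1
    else:
        state["A_s"] -= 1; state["A_b"] += 1
    traj.append((t, state["A_s"], state["A_b"], state["P_s"]))

print("final state:", state)
```

A chemically accurate version would track many more layers and species, but the event-driven structure, where slow bulk diffusion naturally throttles surface reaction as the particle vitrifies, is the same.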