Most blueberry cultivars are highly to moderately susceptible to AFR

Although the inclusion criteria were defined inclusively, so that patients with individual clinical criteria of rapid disease progression could have been included, all patients fulfilled the Mayo classification criterion as the primary reason for inclusion. Short-term KDIs did not show an acute effect on TKV in either arm. Indeed, dietary interventions in small animals are expected to result in earlier responses than in humans. Interestingly, one patient in the KD group showed a TKV reduction of 8.4% after KD, with a return to baseline at the final study visit. This patient was consistently ketogenic throughout the KD and reached the peak acetone values of the KD group. A recent study on intermittent fasting and caloric restriction in obese ADPKD patients showed positive effects, with reduction of body weight and reduction of adipose lipid stores correlating with slowed kidney growth. However, that study did not measure ketone bodies, and, considering the type of intervention, efficient ketosis is not expected: CR without limiting CHO intake hardly induces ketosis, and in intermittent fasting a single daily CHO-containing meal interrupts ketogenesis. In studies examining non-ADPKD patients, similar dietary regimens only intermittently led to very low levels of ketosis. Nevertheless, it is possible that the low ketone body levels potentially reached may have contributed to their findings. Whether longer-lasting KDIs have beneficial effects on TKV should be further investigated in larger studies.

A randomized controlled clinical trial on this topic is currently ongoing, and another study has been announced. TLV measurements showed a significant decrease in 8/10 patients. However, after returning to a CHO-rich diet, there was a clear-cut, prompt rebound. Glucose restriction, as in ketosis, results in a depletion of liver glycogen stores. Considering that we only found significant changes in the non-cystic liver parenchyma, this is likely to be the reason for the reversible TLV changes. In non-ADPKD patients, reductions in liver volume due to low-calorie diets have been widely described and are commonly exploited in bariatric surgery. However, a final conclusion on this topic and on the effects of KDIs on cystic and non-cystic liver parenchyma in ADPKD will require analyses of larger cohorts and longer-term interventions in patients with severe polycystic liver disease, also considering the fact that our study included a high proportion of patients with a very low liver cyst fraction. Consequently, it is worth further investment in this regard, taking into account the complete lack of efficient therapeutic options for PLD. Rebounds of TLV have also been described after discontinuation of disease-modifying drug therapy with somatostatin analogues in PLD.

Hunger occurred significantly more often in the WF group, while an increased feeling of fullness was occasionally reported in the KD group. Two patients reported self-limited palpitations. Ketogenic diets can lead to a prolonged QT interval with an increased risk of cardiac arrhythmias. Under ketosis, regular electrocardiogram checks should be considered for patients at risk. Apart from this, no safety-relevant physical complaints, in particular no gout attacks, kidney stones, or hypoglycemia, were observed.

We observed a statistically significant increase in total cholesterol and LDL-C in the KD group and an almost statistically significant increase of LDL-C in the WF group. It is known that KDs and WF can lead to at least transient increases in LDL-C and total cholesterol, most likely through depletion of adipose lipid stores and, for KD, additionally through increased intake of fatty acids. While cholesterol levels normalize after cessation of fasting, ketogenic diets have historically shown inconsistent effects on cholesterol and LDL-C levels. However, potential increases in total cholesterol and LDL-C may normalize on longer-term ketogenic diets. KDs are known to have several beneficial effects on cardiovascular disease risk, such as improvements in body weight, insulin resistance, blood pressure, HbA1c levels or inflammatory markers. Moreover, the increase in LDL-C is mainly due to large LDL particles, not the more atherogenic small dense LDL particles. However, elevated LDL-C levels are a clearly defined cardiovascular risk factor in clinical practice, regardless of their sub-typing, and chronic kidney disease is a state of increased cardiovascular risk in general. Consequently, prospective long-term studies are needed to draw a definitive conclusion on the effects of a prolonged ketogenic diet on cardiovascular risk in ADPKD patients. Furthermore, there was a significant increase in uric acid, resulting in hyperuricemia in both groups after the KDIs. Increases in uric acid under ketogenic metabolism and fasting have already been described multiple times. One of the reasons for hyperuricemia is competition between BHB and uric acid for the same kidney transport sites. Uric acid levels returned to baseline after resumption of a CHO-rich diet in both groups, and no gout attacks or kidney stones were observed. Whether the increase in uric acid is clinically meaningful will require larger, longer-term trials. Patients at risk should be monitored during KDIs, and appropriate measures, e.g. prescription of citrate, may be considered. We also detected a significant increase in serum bilirubin levels in our WF group.

Such increases upon fasting are known and considered not to be clinically relevant. The KDIs led to a significant weight loss and a reduction in body fat that persisted even after returning to a CHO-rich diet. Beneficial effects of KDIs, such as improved body weight and anti-inflammatory effects, have been suggested to outweigh the possible adverse effects on CVD risk associated with cholesterol increases and to have a protective role in NAFLD. Regarding ADPKD, a recent study indicated that weight reduction in overweight patients may slow the rate of kidney growth compared with historical data, and obesity has been shown to be associated with disease progression. Previously reported effects of KDIs on blood pressure were not observed in our trial, which may be a consequence of the short intervention period. However, blood pressure medication had to be stopped in one patient due to a symptomatic blood pressure decrease upon starting KD. In total, 80% of all patients reached the combined feasibility and metabolic endpoint. This is in line with recent studies indicating good feasibility of KDIs in ADPKD patients. In addition, Hopp et al. recently reported good feasibility in their 1-year weight-loss trial in ADPKD patients. Some side effects of KDs, like the “keto flu,” occur mainly in the beginning. Therefore, the feasibility of KDs might be even better with longer intake and adaptation to the diet. Taken together, there appears to be generally good acceptance of dietary interventions among ADPKD patients.

This study has several limitations. Most importantly, the small number of participants needs to be considered when interpreting the results of statistical testing. Second, there was a gender imbalance, with 80% of participants being male, which limits the comparability of our data. Third, the KDIs were of short duration; whether longer-term KDIs have a more pronounced effect, e.g. on TKV, remains unclear. Finally, the BHB and acetone cutoffs were based on a limited amount of data. The aim of the present study was to investigate dietary interventions that are accessible to a wide community, which would not be possible if aiming for deep ketosis, while staying clearly in the ketogenic range. Most investigators agree that normal BHB values are between 0.1 and 0.5 mmol/L. Since we were not aiming for deep ketosis, we defined the ketosis range from a BHB value of 0.8 mmol/L, which is significantly above these values and should roughly correspond to an acetone level of 10 p.p.m. Also, the manufacturer recommendations indicate a target ketosis range between 10 and 40 p.p.m., with the cutoff to ketosis indicated as low as 5 p.p.m. acetone in breath. In conclusion, in our proof-of-principle trial, short-term KDIs in ADPKD were safe and feasible and triggered ketosis effectively, but did not show an acute impact on TKV. Larger studies are required to further investigate the potential beneficial effects of KDIs in ADPKD.

Anthracnose fruit rot (AFR), caused by the fungal pathogen Colletotrichum fioriniae Marcelino & Gouli, is among the most destructive and widespread fruit diseases of blueberry. Infection by C. fioriniae impacts fruit quality and can result in a complete loss of post-harvest yield. Colletotrichum species have been reported to infect numerous other high-value fruit crops, including apple, citrus, and strawberry. Infections occur as early as fruit set but remain latent until the fruit ripens, complicating detection and control of the disease.
Initially, sunken areas develop on the fruit surface, followed by the exudation of salmon-colored spore masses. Fungicides remain the primary method to mitigate AFR infection in cultivated blueberry. However, they are often expensive and not a favorable option for growers. Moreover, some of these fungicides are suspected carcinogens, whereas others are prone to the development of fungicide resistance. Often, fungicide sprays are applied more frequently than necessary because of the difficulty of optimizing spray timing given the long latency period and the variable weather conditions influencing the pathogen life cycle. Therefore, the development of AFR-resistant cultivars is highly desired by the blueberry industry. Several highly resistant cultivars have been identified, including the northern highbush blueberry Vaccinium corymbosum L. ‘Draper’, which displays strong resistance in the field and in laboratory inoculation studies.

The genome of ‘Draper’ was previously sequenced for three primary reasons: it is a commonly utilized parent in breeding programs, it is widely cultivated worldwide as an early mid-season ripening variety, and it is highly resistant to AFR. However, to our knowledge, no cultivars exhibit complete resistance. In these studies, C. fioriniae showed differential infection strategies and infection rates in resistant versus susceptible cultivars. Furthermore, Miles and Hancock recently reported that resistance to AFR infection is highly heritable and argued that there are likely only a few loci involved in resistance. However, the underlying genetic mechanism of resistance to AFR remains poorly understood in blueberry and other fruit crops. Blueberry fruits contain high concentrations of many phytochemicals, including compounds with known anti-fungal properties. One potential component of resistance to AFR could involve specialized metabolites. For example, quercetin 3-O-rhamnoside is a flavonol glycoside synthesized from the amino acid precursor L-phenylalanine via the phenylpropanoid pathway, and its antimicrobial activity has been demonstrated against C. fioriniae, Pseudomonas maltophilia, and Enterobacter cloacae. In fact, treating susceptible blueberry fruits with a 4% solution of extract from resistant fruit containing quercetin 3-O-rhamnoside, among other anthocyanins and non-anthocyanin flavonoids, decreased C. fioriniae infection by 88%. Quercetin and its glycosides have been studied in other systems, but the dynamics of these compounds remain poorly understood in blueberry. Quercetin glycosides may be deglycosylated, leaving the bioactive core, quercetin. Structural analysis of plant-derived flavonoids revealed that quercetin contains numerous structural components important in bioactivity against certain pathogens, including methicillin-resistant Staphylococcus aureus, vancomycin-resistant enterococci, and Burkholderia cepacia. Furthermore, quercetin may be oxidized to form quinones, antifungal compounds previously shown to be effective against certain Colletotrichum species. However, previous studies have also proposed that AFR resistance in ripe blueberries may be due to an interaction between simple phenolic compounds and organic acids and not necessarily to individual fungitoxic compounds. Here, we used a genetic mapping approach to identify genomic loci associated with resistance to AFR infection in northern highbush blueberry. We generated an RNA-seq dataset to identify which genes are differentially expressed during infection in ‘Draper’ mature fruits. Finally, we performed metabolite profiling in mature fruits and identified a metabolite with properties consistent with a quercetin rhamnoside whose abundance is positively correlated with AFR resistance.

AFR is a top disease priority for the blueberry industry, as it can result in up to 100% post-harvest yield loss. Thus, growers have largely relied on fungicides to maximize yields. Both the infection and resistance mechanisms of AFR are highly variable among and within crops. Resistance may arise from passive mechanisms such as physiological fruit characteristics and pre-existing compounds with anti-fungal properties. Immature fruits often exhibit many features that lend themselves to resistance to anthracnose, such as firmness, pH, and antimicrobial compounds. However, these resistance factors tend to diminish as fruit matures.
Further, the accumulation of soluble sugars in conjunction with ascorbic acid was previously associated with anthracnose resistance in guava. Work in blueberry indicates a connection between sugar content and anthracnose resistance, but some moderately susceptible cultivars have high sugar concentrations. This suggests that sugar content may be only one piece of a multi-factor resistance mechanism. Additionally, the abundance of certain fruit volatiles, including hex-2-enal, has been linked to fruit rot resistance in strawberry. While some of these volatiles are also found in blueberry, their presence and quantity are not correlated with resistance.

Eight quadrats at each plot were utilized to record understory plants and tree seedling densities

A global overview of climate-induced forest mortality provides a detailed assessment of events driven by climatic water/heat stress since 1970; few of these documented dieback events provide the opportunity to examine vegetation changes that occur over a longer time frame. Yellow-cedar, a species distributed from the northern Klamath Mountains of California to Prince William Sound in Alaska, has been dying in southeast Alaska since the late 1800s, with intensifying rates observed in the 1970s and 1980s. Recent research reveals a complex "tree injury pathway" where climate change plays a key role in a web of interactions leading to widespread yellow-cedar mortality, referred to as yellow-cedar decline. Prominent factors in this injury pathway include the cold tolerance of roots, the timing of dehardening, and regional trends of reduced snowpack at low elevations. Early springtime thaws trigger dehardening and reduce the snow cover that insulates soil and shallow fine roots from periodic extreme cold events; this can lead to injury of yellow-cedar roots that initiates tree mortality, which is predominantly limited to lower elevations. Despite the extent of research on the mechanisms of decline, overstory and understory dynamics in declining stands are not well understood. The direct loss of yellow-cedar has important ecological, economic, and cultural implications; however, other changes that emerge in response to decline are also relevant in these forests. Researchers are just beginning to understand the influence of dead cedars on watershed nutrient export. Economically and culturally, yellow-cedar trees are important because they provide valuable products for Alaska Native communities and the forest industry. These coastal forests also provide forage for the Sitka black-tailed deer, an important game animal throughout the region.

Since the 1980s, much forest-related research in southeast Alaska has addressed the implications of various active forest management regimes for the habitat of this commonly hunted species and for biodiversity; aspects of this research centered on old-growth habitat and the effects of land use practices, such as clearcutting or partial cutting, on forage. To date, researchers have not addressed the effects of yellow-cedar decline on the availability of key forage species. Death of yellow-cedar and the shifts in plant community dynamics in forests affected by decline can have cascading effects on the human-natural system by affecting the ecosystem services these forests provide. We studied the process of forest development using a chronosequence to compare forests unaffected by widespread mortality with those affected at different time points over approximately one century. Considering size classes from seedlings to large trees across the chronosequence, our analysis of the conifer species populations at various life history stages, including death, documented changes occurring in forests affected by decline and extended a view of forest composition and structure into the future. We hypothesized that: (1) western hemlock and other conifers increase in importance as the contribution of yellow-cedar to the conifer community structure is reduced over time; (2) seedling and sapling regeneration increases as yellow-cedars die and the canopy opens; (3) the community composition of understory plants changes over time such that shrubs increase in abundance; and (4) the volume of key forage species for the Sitka black-tailed deer increases in forests affected by decline. Our study illustrates the long-term consequences for many plant species when a single tree species suffers from climate-induced mortality.

The modern climate in the southeast region of Alaska is mild and hypermaritime, with year-round precipitation, an absence of prolonged dry periods, and comparatively milder seasonal conditions than continental climates at similar latitudes. Mean annual rainfall in Sitka and Gustavus, the two towns closest to the remote, outer-coast study area, measures 2200 and 1700 mm, respectively. The high rainfall that occurs throughout the Alexander Archipelago, combined with its unique island geography, geologic history, and absence of fires, maintains some of the most expansive old-growth forests found in North America. Five common conifer species occur in the northern range of the Archipelago: western hemlock, mountain hemlock, yellow-cedar, Sitka spruce, and shore pine. These coastal forests are simple in composition yet often complex in age and tree structure. Yellow-cedar occurs across a soil-drainage gradient from poorly drained bogs to well-drained soils on steeper slopes that often support more productive stands. This study occurs in the northern portion of the yellow-cedar population distribution and at the current latitudinal limits of forests affected by decline. We centered our investigation on protected lands in four inlets in the Alexander Archipelago on the outer coast of the West Chichagof-Yakobi Wilderness on Chichagof Island in the Tongass National Forest and Glacier Bay National Park and Preserve (GLBA). Aerial surveys were conducted in 2010 and 2011 to assess the presence of affected forests and to identify the edge of yellow-cedar dieback that occurs south of GLBA on Chichagof Island. Aside from a brief history of small-scale gold mining that occurred in several areas on Chichagof Island between 1906 and 1942, there is little evidence of human impact on these lands, making them ideal for studying ecological dynamics.

Drawing upon previous studies that estimated the time-since-death for five classes of standing dead yellow-cedar trees at various stages of deterioration, our plot selection consisted of sequential steps, in the field, to sample forests representative of a range of time-since-death. Not all yellow-cedar trees in a forest affected by mortality die at once; mortality is progressive in forests experiencing dieback. Highly resistant to decay, these trees remain standing for up to a century after their death. As a result, they offer the opportunity to date disturbance, approximately, and to create a long-term chronosequence. First, we stratified the study area coastline into visually distinguishable categories of "cedar decline status" by conducting boat surveys and assessing cedar decline status across 121.1 km of coastline in June 2011 and 2012. We traveled the coastline and made visual observations of live and dead yellow-cedar trees and their snag classes. We assigned cedar decline status to coastal forests at 100 m increments using a GPS (Garmin 60 CSx). Next, using ArcGIS 10.2 Geographic Information System software, we randomly generated plot locations in forests categorized during the coastline survey as follows: live, unaffected by mortality; recent mortality; mid-range mortality; and old mortality. Lastly, we controlled for basal area and key biophysical factors, including elevation and aspect, via the methods described. Plots were restricted to elevations less than 150 m, excluding northeast-facing plots, to sample from low-elevation plots representative of conditions where yellow-cedar decline commonly occurs at this latitude. Plots were randomly located between 0.1 and 0.5 km of the mean high tide to avoid sampling within the beach fringe area, and on slopes <72% to limit the risk of mass movement. We excluded plots with a total basal area <35 m2/ha to avoid sampling below the optimal niche of yellow-cedar. This control was performed in the field by point sampling to estimate basal area using a prism with a basal area factor of 2.5. Plots dominated by the presence of a creek bed or other biophysical disturbance were eliminated from plot selection due to the confounding influence of disturbance on the number of trees standing and species abundance. A minimum distance of 300 m was maintained between all plot centers. By restricting our sampling to these controls, our study was designed to examine the process of forest development post-decline in low-elevation coastal forests with plot conditions typical for yellow-cedar mortality, excluding bog wetlands, where yellow-cedar may co-occur sparsely with shore pine. After controlling for biophysical factors, 20 plots were sampled in live forests and 10 plots in each of the affected cedar status categories, for a total of 50 plots across the study area.

Data were collected in fixed, circular nested plots to capture a wide range of tree diameters and in quadrats within each plot to account for spatial variability in understory vegetation. Forty plots were established and measured during the 2011 field season and 10 plots during the 2012 field season, through the seasonal window of mid-June to mid-August. Nested circular plots were used to sample trees and saplings as follows: a 10.3 m fixed-radius plot for trees ≥25.0 cm diameter at breast height (dbh), and a 6.0 m fixed-radius plot for saplings (<2.5 cm dbh and ≥1 m height) and trees 2.5–24.9 cm dbh.
We counted live saplings of each species to analyze the population dynamics of individuals that survive to this size class. For each tree, we recorded species, dbh to the nearest 0.1 cm, height to the nearest 0.01 m, live or dead status, and, for dead trees, snag class (I–V).
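As an illustration of the basal-area screening described above, point sampling with a prism estimates stand basal area as the basal area factor multiplied by the number of tallied "in" trees. A minimal sketch in R (the tally value is a hypothetical example, not a measurement from this study):

    # Point (prism) sampling: basal area (m^2/ha) = BAF * number of tallied "in" trees
    baf   <- 2.5   # basal area factor used for plot screening (m^2/ha per tallied tree)
    tally <- 16    # hypothetical count of "in" trees sighted from a candidate plot center

    basal_area <- baf * tally   # 40 m^2/ha
    basal_area >= 35            # TRUE: candidate plot meets the >=35 m^2/ha inclusion threshold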

To provide an additional long-term view of species changes, we recorded counts for smaller conifer seedlings, identifying western hemlock and mountain hemlock to genus and other conifers to species. We noted the presence/absence of each conifer species 10–99 cm tall but did not sample this size class for individual counts. We recorded the maximum height and percentage cover of each plant species observed according to the Daubenmire method on a continuous scale. In unique cases where consistent identification to species was difficult (Vaccinium ovalifolium Sm. and V. alaskaense Howell), we combined observations but noted both species' presence for total richness across the study area. Blueberries, V. ovalifolium and V. alaskaense, are similar in appearance and often synonymized. Mosses and liverworts were recorded together as bryophytes within the quadrat. Sedges were recorded together but distinguished from true grasses.

The changes observed across the chronosequence provide strong evidence that this species dieback associated with climate change can result in a temporally dynamic forest community distinguished by the diminished importance of yellow-cedar, an increase in graminoid abundance in the early stages of stand development, and a significant increase in shrub abundance and volume over time. The timing and intensity of tree mortality, as characterized by our stratified sampling of cedar decline status, played an important role in determining the understory community composition and the overstory processes of stand re-initiation and development. Our results highlight the ways in which widespread mortality of one species can create opportunities for other species and underscore the importance of considering long-term temporal variation when evaluating the effects of a species dieback associated with climate change. Methods for predicting future changes in species distributions, such as the climate envelope approach, rely upon statistical correlations between existing species distributions and environmental variables to define a species' tolerance; however, a number of critiques point to many factors other than climate that play an important role in predicting the dynamics of species' distributions. Given the different ecological traits among species, climate change will probably not cause entire plant communities to shift en masse to favorable habitat. Although rapid climatic change or extreme climatic events can alter community composition, a more likely scenario is that new assemblages will appear. As vulnerable species drop out of existing ecosystems, resident species will become more competitive and new species may arrive through migrations. Individual species traits may also help explain the process of forest development in forests affected by widespread mortality, as the most abundant species may be those with traits that make them well adapted to changing biotic and abiotic conditions. We were unable to evaluate the independent effect of soil saturation on canopy openness, but the fact that canopy openness was a significant predictor of shore pine and mountain hemlock sapling occurrence suggests the important roles of soil conditions and light in determining which species are more likely to regenerate. Both species are known to prefer wet soils and scrubby open forests, and canopy openness in forests affected by decline has two driving components: soil saturation and crown deterioration caused by yellow-cedar death.
Young mountain hemlock seedlings, for example, grow best in partial shade, likely explaining why this species regenerated relatively well as saplings in recent-mortality stands before canopy openness increased further. In contrast, western hemlock is known to tolerate a wide range of soil and light conditions for establishment and growth and seeds prolifically, as does Sitka spruce. Species can also respond to varying light conditions with differential growth responses. Western hemlock reached its maximum growth rate when exposed experimentally to relatively high light intensities, whereas bunchberry responded most strongly to relatively low light intensities.

Water on Mars has always been of interest for physical and chemical reasons

The amino acid α-carbon provides for two mirror-image configurations based on the relative orientation of the side group. Terrestrial biological amino acids consist of only one configuration (the L-enantiomer); however, there is no reason why proteins in extraterrestrial life would need to be based on L-amino acids as on Earth. Proteins as catalytically active as their natural biological L-amino acid counterparts have been synthesized entirely from D-amino acids; thus, it is assumed that life elsewhere could be based on either L- or D-amino acids. The amino acid homochirality associated with extant terrestrial life changes over time after a bacterial community dies, owing to racemization. While the community is living, protein turnover is rapid enough to preserve the homochiral protein composition; after death, however, the amino acids interconvert from the biological L-enantiomer to the abiological D-enantiomer. This interconversion is a natural process that becomes significant over geological timescales and continues until the two enantiomers are present in equal abundance, that is, a D/L ratio equal to 1. The D/L enantiomer ratio, along with known rates of racemization, has been useful in determining the geological age of terrestrial samples up to hundreds of millions of years old. Although racemization compromises the microbial signature of terrestrial proteins over geological timescales, the determination of amino acid chirality still offers a powerful biosignature for the presence of microbial life. The detection of amino acids alone is not unequivocal evidence of life; rather, a homochiral signature is necessary to confirm a biological amino acid source.
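The age relation referred to above follows from reversible first-order kinetics. For an amino acid with a single chiral center and, as is commonly assumed, equal forward and reverse rate constants k, the standard expression is

\[
\ln\left(\frac{1 + D/L}{1 - D/L}\right) - \ln\left(\frac{1 + D/L}{1 - D/L}\right)_{t=0} = 2kt ,
\]

which, for material that starts from a purely L (biological) composition, reduces to D/L = tanh(kt): the ratio rises from 0 toward the racemic value of 1 as t increases.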

Although sufficiently old biological samples may show racemic signatures similar to those derived from abiotic syntheses, well-preserved amino acids from extinct bacterial communities at extremely cold temperatures would still show good chirality preservation for hundreds of millions of years. In future life detection experiments, the chirality of amino acids should easily discriminate between biological amino acids and those that formed abiotically or derived from meteoritic influx.

Known abiotic pathways for the formation of amino acids, such as spark-discharge experiments and laboratory hydrothermal syntheses, all produce equal amounts of D- and L-amino acids in low concentrations. This marked difference between homochiral biological and racemic abiotic compositions permits discrimination of the source of detected amino acids by resolving their enantiomeric abundances. Also important is that abiotic amino acid syntheses tend to form a relatively small suite of amino acids compared with those utilized in bacterial proteins. The suite of protein amino acids utilized in the bacterium E. coli is evaluated in Chapter II and compared to previous empirical studies. If detected amino acids are too old or degraded for any chiral signature to be deduced, the distribution can be used to definitively decide whether the amino acids are microbially or abiotically derived. A variety of amino acids have been detected in meteorites as well, but these are interpreted as having formed during parent-body processes. The fact that amino acids within meteorites are racemic, and that they show a suite of amino acids similar to those formed in abiotic syntheses, makes them easy to distinguish from biologically sourced amino acids, again based on chirality or distribution. Any preferential dominance of L-amino acids detected in meteorites is assumed to be due to terrestrial contamination. Certain amino acids within the large suite detected in meteorites are not components of terrestrial proteins; rather, they are known to be indigenous because they are unique to meteorites and reflect formation during parent-body processes.

The two most abundant extraterrestrial amino acids are isovaline and α-aminoisobutyric acid; however, there are a variety of others that are recognized as indicative of an extraterrestrial signature. The presence of these amino acids in geological samples is suggested to reflect deposition during a period of high meteoritic influx.

The most relevant amino acid biomarkers depend on their relative abundances in bacterial proteins and the stability of the individual residues. There are two major amino acid diagenetic pathways, degradation and racemization. The rates associated with degradation are slower than those of racemization by at least a factor of 100 in most cases. The most stable protein amino acids will persist through geological time and allow for the quantification of long-extinct bacterial communities. Amino acids degrade primarily by decarboxylation or deamination, but other processes such as dehydration and aldol cleavage can also be significant. The most commonly occurring amino acids in ancient and degraded microbial communities are glycine and alanine, a finding corroborated by analyses of natural samples of anoxic sediments. Glycine and alanine are both present in bacterial communities in very high abundances; however, only alanine shows degradation rates among the slowest of the amino acids. This implies that glycine may be better preserved in geological settings or that diagenetic pathways lead to its secondary formation from other compounds. Regardless, both alanine and glycine remain among the most important amino acids to assay for in geological samples, along with aspartic acid, glutamic acid, and serine. Valine, present in lower abundances than these other amino acids, shows slow racemization and degradation kinetics and should show good preservation in environmental samples despite composing only ~5% of total bacterial protein. The plots in Figure 1.6 show the evolution of aspartic acid concentration and D/L ratio versus time. Aqueous rates of aspartic acid degradation and racemization were used in these models and therefore represent the fastest rates of these reactions.

Racemization is a much faster process, with a racemization half-life of ~2,200 years, whereas the half-life of aspartic acid degradation is ~10,000 years. Environmental samples consistently show slower degradation and racemization in colder and drier conditions. Aspartic acid racemization rates in dry environmental conditions have been reported to be as slow as 1.20 x 10^-6 yr^-1 and 6.93 x 10^-7 yr^-1, equivalent to half-lives of ~600,000 and ~1,000,000 years, and therefore must be evaluated carefully for each geological sample for the purposes of amino acid racemization age dating. Likewise, degradation reactions are equally dependent on the mineralogy of the environmental sample and may be accelerated by the presence of metal-ion catalysts. Racemization age dating has been suggested to be applicable to amino acids from hundreds of thousands to millions of years old at low temperatures, but this range can be extended to older samples under colder conditions. Likewise, amino acids from hundreds of millions to billions of years old could be well preserved under the appropriate environmental conditions.

Target biomolecules in the search for evidence of life on Mars must be stable enough to persist over geological timescales so that evidence of life on Mars does not go undetected. The fate of amino acids includes racemization, degradation, and bacterial uptake. In the absence of biological processing, racemization is faster than degradation by at least a factor of 100. Racemization involves a planar carbanion intermediate formed by the loss of a proton from the α-carbon and subsequent attack on the top or bottom face by another proton. The reactions for the destruction of amino acids include decarboxylation to amine compounds or deamination. These rates are highly matrix- and temperature-dependent and therefore must be evaluated for the specific environmental conditions. Although the prevailing cold and dry conditions on Mars drastically slow organic degradation and amino acid racemization, there are other effects that must be considered. For instance, the surrounding mineral matrix can catalyze amino acid diagenetic reactions, especially degradation in the presence of metallic ions. Therefore, the preservation of organic material will be a strong function of the chemical environment.

If intact amino acids are detected and show an abundance of one enantiomer over the other, this would unequivocally show that the source of these amino acids was biological. If these biomarkers from extinct life on Mars have been degraded over geological timescales, there are certain classes of compounds that we would expect to be diagenetic end products or intermediates. Compounds such as humic acids and kerogen are products of the diagenesis of organic matter over time; the slow degradation of amino acids may also generate other diagenetic products that indicate what might be favored on Mars. For instance, decarboxylation is known to be the primary degradation reaction for amino acids such as glycine, alanine, and valine, forming their corresponding amine degradation products.
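To make the quoted rates concrete, a first-order rate constant k converts to a half-life via t1/2 = ln 2 / k, and the D/L evolution follows the reversible first-order form noted earlier. A minimal R sketch (the activation energy used in the temperature scaling is an assumed placeholder, not a value reported in this work):

    # Half-life from a first-order rate constant: t1/2 = ln(2) / k
    half_life <- function(k) log(2) / k
    half_life(1.20e-6)   # ~5.8e5 yr  (dry-environment aspartic acid racemization)
    half_life(6.93e-7)   # ~1.0e6 yr

    # D/L ratio over time for reversible first-order racemization (L <-> D, equal k),
    # starting from a purely L (biological) composition: D/L = tanh(k * t)
    t    <- seq(0, 5e6, length.out = 500)   # years
    DL   <- tanh(1.20e-6 * t)               # approaches 1 (fully racemic)
    frac <- exp(-6.93e-7 * t)               # surviving fraction under first-order degradation

    # Illustrative Arrhenius scaling of a rate constant to a colder temperature;
    # Ea (J/mol) is an assumed placeholder, not a value from this dissertation.
    arrhenius_scale <- function(k_ref, T_ref, T_new, Ea = 120e3, R = 8.314) {
      k_ref * exp(-(Ea / R) * (1 / T_new - 1 / T_ref))
    }
    arrhenius_scale(1.20e-6, T_ref = 298, T_new = 210)   # far slower at ~210 K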

The study of organic inclusion in terrestrial Martian analogs allows environments on Earth similar to those detected on Mars to be characterized, so that we can understand some of the processes that might be important on Mars. The study of organics in Mars analog minerals can offer an idea of the sequestration potential and stability of such deposits on Earth. Indeed, if Mars really experienced a warm and wet climate early in its history, it may have been more similar to Earth than we realize and may have much in common with many of the proposed Mars analog locations. Determining stability within terrestrial Mars analog minerals can help approximate the biochemical stability that might be expected on Mars. Low levels of amino acid degradation products indicative of diagenetic processes can often be used to determine the stability or diagenetic state of the included amino acids.

Figure 1.9 shows the general geological history of Mars, dominated by an early wet era in which clays were formed by water alteration; a catastrophic climate change occurred around ~3.5 billion years ago. Water is a medium for interesting chemistry, provides a setting for the origin of life, and its past abundance explains many of the erosive features on Mars. Preservation therefore becomes the key issue in finding evidence of life on Mars. Evidence of an extinct Martian biota might derive from a biological community billions of years old and must show good preservation over the history of the samples. The idea of using chirality as a biosignature to search for evidence of life on Mars was first proposed by Halpern. This idea has resurfaced in current life-detection strategies, which have recognized for over 30 years the strength of amino acid chirality as a biosignature and as a means of discrimination from abiotic amino acid signatures. On Mars, racemization kinetics are expected to be extremely slow because of the cold, dry conditions, and any chiral signature of extinct life should be preserved for billions of years. The harsh surface conditions on Mars may limit the survival of some organics within the host regolith. Because amino acid diagenesis is so intimately linked with matrix effects, the study of amino acid preservation and diagenesis in terrestrial Mars analogs is necessary to make predictions about the best locations to search for biosignatures on Mars. Extrapolation of these diagenetic reaction rates to Mars' surface temperatures can allow for estimates of amino acid stability and rates of diagenetic reactions on Mars.

This dissertation covers my investigations of organic inclusion and sequestration within various Mars analog minerals. Throughout these studies, amino acids are investigated for their applicability as biomarkers for the detection of extinct or extant microbial communities on Mars. A variety of environments that have been suggested as analogous to Mars for mineralogical or climatic conditions are profiled, and in some cases rate data are gleaned from coupled amino acid degradation reactions and extrapolated to predicted rates on Mars. The inclusion of amino acids in analog minerals essentially sequesters them and offers some degree of protection from harsh surface conditions, allowing enhanced preservation in some cases.
Specifically, these studies investigate amino acid diagenetic reactions, including racemization and degradation, to predict the degree of survival of these biosignatures over geological timescales on the surface of Mars. Chapter 2 characterizes the amino acid composition of bacteria and verifies the analytical methods used in these studies. The amino acid distributions and concentrations are so markedly different from those of any abiotic formation process that discrimination between these sources should be possible even over very long timescales. Chapter 3 introduces a new chemical chronometer based on the detection of amino acid degradation products within ancient geological samples.

The biological mechanism behind this winter recovery has been studied but is not fully resolved

Infections that occur during spring lead to chronic disease; however, infections that occur during late summer and fall may cause disease symptoms in the current year, but a high proportion of vines lack symptoms of X. fastidiosa infection in the following year. Nonetheless, models that incorporate low temperatures have substantial explanatory power in predicting rates of winter curing of X. fastidiosa infections in grapevine. Infections that occur early in the season may have a longer period during which X. fastidiosa can colonize and reach high infection levels, which may increase the likelihood of the infection surviving over the winter. Following this rationale, if most late-season infections remain in the distal ends of shoots and have lower infection levels, removing the symptomatic portion of the vine might eliminate X. fastidiosa. In other words, the efficacy of pruning infected grapevine tissue could depend both on the time of year in which the plant was infected and on winter temperature. A potential benefit of severe pruning versus replanting is that pruning leaves a mature rootstock in place, which is likely to support more vigorous regrowth compared with the developing rootstock of a young transplant. Recent attempts to increase vine productivity by planting vines with better-developed root systems are based on this presumption. However, even if severe pruning can clear vines of infection, it removes a substantial portion of the aboveground biomass of the vine. Thus, a method for encouraging rapid regrowth of the scion after aggressive pruning is needed. We studied the efficacy of pruning infected vines immediately above the rootstock graft union, the most aggressive pruning method, for clearing grapevines of infection by X. fastidiosa.

We reasoned that if such severe pruning was ineffective at clearing vines of infection, less severe pruning would not be warranted; if severe pruning showed promise, less severe pruning could then be tested. We use the term “severe pruning” to refer to a special case of strategic pruning for disease management, analogous to the use of “remedial surgery” for trunk diseases. To test the efficacy of clearing vines of X. fastidiosa infection, we followed the disease status of severely pruned versus conventionally pruned vines over multiple years, characterized the reliability of using visual symptoms of PD to diagnose infection, and compared two methods of restoring growth of severely pruned vines.

Pruning trials were established in Napa Valley, CA in commercial vineyards where symptoms of PD were evident in autumn of 1998. The vineyards used for these trials varied in vine age, cultivar, and initial disease prevalence. All study vines were cordon-trained and spur-pruned. We mapped the portions of the six vineyards selected for study by evaluating vines for disease symptoms. The overall severity of PD symptoms for each vine was recorded as follows: 0 = no symptoms, apparently healthy; 1 = marginal leaf scorch on up to four scattered leaves total; 2 = foliar symptoms on one shoot or on fewer than half of the leaves on two shoots on one cordon, no extensive shoot dieback, and minimal shriveling of fruit clusters; and 3 = foliar symptoms on two or more shoots occurring in the canopy on both cordons, with dead spurs possibly evident along with shriveled clusters. To test the reliability of the visual diagnosis of PD, petiole samples were collected from the six vineyard plots when symptom severity was evaluated for vines in each symptom category; these samples were assayed using polymerase chain reaction (PCR). Petioles were collected from symptomatic leaves on 25, 56, and 30 vines in categories 1, 2, and 3, respectively.

Next, severe pruning was performed between October 1998 and February 1999 in the six vineyard plots by removing trunks of symptomatic vines ~10 cm above the graft union. Cuts were made with saws or loppers, depending upon the trunk diameter. During a vineyard survey, severe pruning was conducted on 50% of vines in each symptom category; the other 50% of vines served as conventionally pruned controls. Sample sizes for control and severely pruned vines in each disease category ranged between six and 62 vines depending on the plot, with at least 38 total vines per plot in each control or pruned treatment. In spring 1999, multiple shoots emerged from the remaining section of scion wood above the graft union on severely pruned vines. When one or more shoots were ~15 to 25 cm long, a single shoot was selected and tied to the stake to retrain a new trunk and cordons, and all other shoots were removed at this time. We evaluated the potential of severe pruning to clear vines of infection by reinspecting both control and severely pruned vines in all six plots for the presence or absence of PD symptoms in autumn 1999 and 2000. In all plots, category 3 vines were inspected in a third year; in plot 6, vines were inspected for an additional two years. Finally, in plot 6 we investigated chip-bud grafting as an alternative means of ensuring the development of a strong replacement shoot for retraining. To do this, 78 category 3 vines were selected for severe pruning, 39 of which were subsequently chip-bud grafted in May 1999. An experienced field grafter chip-budded a dormant bud of Vitis vinifera cv. Merlot onto the rootstock below the original graft union, and the trunk and graft union were removed. The single shoot that emerged from this bud was trained up the stake and used to establish the new vine. The other 39 vines were severely pruned above the graft union and retrained in the same manner as vines in plots 1 to 5. Development of vines in plot 6, with and without chip-bud grafting, was evaluated in August 1999 using the following rating scale: 1) “no growth”: bud failed to grow, no new shoot growth; 2) “weak”: multiple weak shoots emerging with no strong leader; 3) “developing”: selected shoot extending up the stake, not yet topped; and 4) “strong”: new trunk established, topped, and laterals developing. All analyses were conducted using R version 3.4.1.

We used a generalized linear model with binomial error to compare the relative frequency of X. fastidiosa-positive samples from vines in the different initial disease severity categories. Next, we analyzed the effectiveness of chip budding versus training of existing shoots as a means of restoring vines after severe pruning. This analysis used a multinomial logistic regression that compared the frequency of four vine growth outcomes the following season: strong, developing, weak, or no growth. This main test was followed by pairwise Fisher exact tests of the frequency of each of the individual outcomes between chip-budded and trained vines. We analyzed the effect of severe pruning on subsequent development of PD symptoms using two complementary analyses. First, we compared symptom return between severely pruned and control vines in the three symptom severity categories for two years after pruning. To appropriately account for repeated measurements made over time, our analysis consisted of a generalized linear mixed-effects model with binomial error, a random effect of block, and fixed effects of treatment, year, and symptom severity category. Next, we analyzed the rate at which PD reappeared in severely pruned category 3 vines in subsequent years using a survival analysis. Specifically, we used a Cox proportional hazards model with a fixed effect of plot.

Accurate and time- or cost-efficient methods of diagnosing infected plants are important elements of a disease management program, both with respect to roguing to reduce pathogen spread and to the efficacy of pruning to clear plants of infection. Accurate diagnosis of PD in grapevines is complicated by quantitative and qualitative differences in symptoms among cultivars and other aspects of plant condition. Our results suggest that a well-trained observer can accurately diagnose PD based on visual symptoms, particularly for advanced cases of the disease. The small number of false positives in disease category 1 and 2 vines may have been due to misdiagnosis of other biotic or abiotic factors. Alternatively, false positives might indicate bacterial populations that are near the detection limit; conventional PCR has at least as low a detection threshold as other methods that rely on the presence of live bacterial cells. Regardless, although scouting based on visual symptoms clearly captured most cases of PD in the current study, some caution should be used when trying to diagnose early disease stages to ensure that vines are not needlessly removed. There is no cure for grapevines once infected with X. fastidiosa, except for the recovery that can occur in some overwintering vines. The virulent nature of X. fastidiosa in grapevines, and the corresponding high mortality rate for early-season infections, increases the potential value of any cultural practices that can cure vines of infection. Moreover, new vines replanted into established vineyards generally take longer to develop compared with vines planted in newly developed vineyards, potentially due to vine-to-vine competition for resources that limits growth of replacement vines. As a result, vines replanted in mature vineyards may never reach full productivity. Thus, management practices that speed the regeneration of healthy, fully developed, and productive vines may reduce the economic loss caused by PD.
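As a concrete reference for the modeling workflow described above, a minimal R sketch is given below; the package calls are standard, but the data frame and column names are hypothetical placeholders rather than objects from the original analysis:

    library(lme4)       # mixed-effects models
    library(nnet)       # multinomial logistic regression
    library(survival)   # Cox proportional hazards models

    # Frequency of PCR-positive samples across initial disease severity categories
    m_pcr <- glm(pcr_positive ~ severity_category, family = binomial, data = petioles)

    # Regrowth outcome (strong / developing / weak / no growth) by restoration method
    m_regrow <- multinom(outcome ~ method, data = regrowth)

    # Return of PD symptoms: binomial GLMM with a random effect of block
    m_return <- glmer(symptomatic ~ treatment * year * severity_category + (1 | block),
                      family = binomial, data = vines)

    # Rate of symptom return in severely pruned category 3 vines, compared among plots
    m_surv <- coxph(Surv(years_to_symptoms, symptomatic) ~ plot, data = pruned_cat3)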
A multinomial logistic regression showed significant differences in the relative frequency of different grapevine growth outcomes between the two restoration methods.

Chip-budded vines showed a significantly lower frequency of strong growth and significantly higher frequencies of developing growth and, especially, of no growth. Nearly 30% of chip-budded vines showed no growth in the following season, compared with 0% of vines on which established shoots were trained. These results indicate that training newly produced shoots from the remaining section of the scion was more likely to result in positive regrowth outcomes. As a result, of the two methods we evaluated, training of shoots that emerge from the scion of a severely pruned trunk is recommended for restoring growth. However, it is important to note that the current study did not estimate the amount of time required for severely pruned vines to return to full productivity. Moreover, the study did not include mature vines, in which growth responses may differ from those of young vines. Additional studies may be needed to quantify vine yield, and perhaps fruit quality, in severely pruned vines over multiple seasons. The usefulness of pruning for disease management depends on its ability to clear plants of pathogen infection. A comparison of symptom prevalence among severely pruned and control vines from different disease severity categories showed significant effects of the number of years after pruning, pruning treatment, and initial disease symptom category. The analysis also showed significant interactions between year and treatment and between treatment and symptom category, a non-significant interaction between year and symptom category, and a marginally significant three-way interaction. Overall, more vines had symptoms in the second year compared with the first, and there was a higher prevalence of returning symptoms in vines from higher initial disease categories. Severe pruning showed an apparent benefit in reducing symptoms of PD after the first year, but this effect weakened substantially by the second year, with no differences for category 1 or 3 vines and a slightly lower disease prevalence for severely pruned category 2 vines. A survival analysis of severely pruned category 3 vines showed a significant difference in the rate of symptom return among plots. All vines in plots 1 to 3 had symptoms by autumn 2000, two years after pruning. In plots 4 and 5, more than 80% of vines showed symptoms after three years. Only plot 6 showed markedly lower disease prevalence; in plot 6, ~70% and 50% of severely pruned category 3 vines showed no symptoms after two and four years, respectively, versus ~36% of control vines overall after two years. It is important to note that, at the time of this study, disease pressure may not fully explain the return of symptoms in severely pruned vines.

Interspecific interactions between bee species can also increase honey bee efficiency

We also determine whether hedgerow presence affects wild bee abundance and richness in sunflower fields, and whether this, in turn, translates into increased sunflower seed set.

Field sites were located in Yolo County, an intensively farmed agricultural region of California's Central Valley that contains a mixture of conventionally managed row and orchard crops. The majority of natural and semi-natural habitat in the county is concentrated around the borders of agricultural lands and not embedded within them. We sampled 18 sunflower fields between June and July. Half of the fields were adjacent to bare or weedy edges, and half were adjacent to hedgerows. Sites were paired based on the timing of the sunflower bloom, the sunflower variety, and landscape context. Field pairs were a minimum of 900 m apart to maintain independence. To avoid contamination of varieties, sunflower fields are moved every year; therefore, no field was sampled in multiple years, although two fields were adjacent to the same hedgerow in different years.

In Yolo Co., acreage planted in sunflower has increased by over 55% during the past 5 years. It is the 8th most-planted crop in the region, grossing nearly $28 million USD in 2013. It is produced mainly for hybrid seed, which is then grown for oilseed or confection. While sunflower is native to North America, the breeding system of sunflower grown for hybrid seed has been altered to be artificially gynodioecious, with separate male-fertile and male-sterile plants. For hybrid seed production, rows of male plants are interspersed with rows of female plants. Wild bees predominantly visit male plants to collect pollen for nest provisioning. Although honey bees visit both male and female plants, workers typically collect either nectar from female plants or pollen from male plants, which limits cross-pollination events.

Honey bee movement between pollen- and nectar-producing rows of sunflower is often spurred by interference interactions with wild bees. When a wild bee and a honey bee meet on a sunflower head, one or both fly to different sunflower heads or rows. These interactions, which increase pollen flow between rows, also increase honey bee per-visit efficiency and therefore have great potential to heighten seed set. Honey bees were stocked at an average rate of approximately 100 hives per field, or 1.5 hives per acre. We did not evaluate pest management because sunflower fields managed by different companies used similar practices. For example, all companies used pre-emergent herbicides prior to planting, and seeds were treated with insecticides and either a fungicide or a nematicide. Other management practices, including fertilization, tillage, row width, and the ratio of male to female rows, are also similar between companies, although irrigation practices vary by field, with the majority using furrow irrigation.

To quantify the landscape surrounding each site, we created 18 land use categorizations. We then hand-digitized National Agriculture Imagery Program imagery within a 1 km buffer around study sites in ArcGIS 10.1. To determine landscape effects on wild bee populations in sunflower, we examined the proportion of habitat within each buffer that could provide resources to wild bees. This included both natural habitats and altered habitats. Potential pollinator habitat around our study sites varied from 1 to 40%, with a median of 5%. Control and hedgerow sites were paired by landscape context to minimize differences.

We established two 200 m transects within each field, perpendicular to the field edge or hedgerow and 100 m apart. We netted and observed pollinators at four distances along these transects: 10, 50, 100, and 200 m from the edge.

We varied the starting sampling location within fields and edges at each study site to reduce conflation of distance with temporal variation in bee foraging behavior, which peaks in the morning and late afternoon. Each site was sampled once, during peak bloom, on a clear day with wind speeds <2.5 m/s and temperatures >18 °C between 08:00 h and 14:00 h. We visually observed visitation for 2 min each in two male-fertile and two male-sterile 2 × 1 m plots at each distance. Within hedgerows and edges we haphazardly sampled floral visitors for 2 min in eight plots containing floral blooms. Only insects that contacted the anthers or stigmas were recorded as floral visitors. We also recorded non-bee visits; these accounted for <1% of all visits and were, for simplicity, excluded from analyses. We were unable to identify bees to species in visual observations; therefore we classified them to the citizen science categories of Kremen et al. We did not include feral Apis in our wild bee categorization because we were unable to distinguish them from managed Apis.

Sunflower specialists are more effective pollinators of sunflower than generalist species. We therefore also investigated whether sunflower specialists were more abundant in hedgerow or control field edges using an independent data set from 26 hedgerows and 21 control edges in Yolo Co. Floral visitors were netted for 1 h in hedgerows and control edges during 4–5 sample rounds between April and August in 2012–2013. We queried this specimen database for sunflower specialist bees, which we defined as primary oligoleges. To assess whether the amount of nearby sunflower in the landscape impacted sunflower specialist presence in field edges in the independent dataset, we constructed 1 km buffers around sites in ArcGIS 10.4 and recorded the proportion of sunflower fields around each site using pesticide spray records, which identify which crop is grown on each parcel, and the California Crop Improvement sunflower isolation map.

We used a Chao estimator to evaluate species richness within sites in the R package vegan. To determine the impact of hedgerow presence, field location, and surrounding pollinator habitat in the landscape on wild bee species richness and abundance, we used generalized linear models with Poisson and negative binomial distributions, respectively, in the R package lme4.

Both models included an interaction between hedgerow presence and field location. We used raw species richness because we only sampled each site once and some sites contained too few individuals for estimation or rarefaction. We also assessed factors influencing sunflower visitation rates by honey bees and wild bees. Hedgerow presence, distance from hedgerow, and their interaction, potential pollinator habitat, and sunflower sex were independent variables. In the species richness, abundance, and visitation models, site nested within pair was included as a random effect. We evaluated the differences between the communities of bees in control edges, hedgerows, and crop fields using a perMANOVA on their Chao1 dissimilarities in the R package vegan. We then determined whether male and female sunflower specialist bees utilized hedgerows or control field edges using the independent data set. We modeled counts of bees as the dependent variable with a Poisson distribution in the R package lme4. Hedgerow presence, proportion of sunflower and potential pollinator habitat within a 1 km radius, bee specialization on sunflower, bee sex, and an interaction between specialization and hedgerow presence were the independent variables. Site nested within pair was included as a random effect. To determine which factors impacted sunflower seed set, we used negative binomial generalized linear models in the R package lme4 that accounted for overdispersion in the seed data. We examined the effect of wild bee abundance and richness on seed set from net and visitation data separately. We used raw species richness because some site-distance combinations contained too few individuals for estimation or rarefaction. In all models, sunflower seed set was the dependent variable. In the model for netted bees, independent variables were hedgerow presence, wild bee abundance, wild bee species richness, sunflower company, distance into the field from the edge, and an interaction between netted wild bee abundance and honey bee visitation. For the model including visitation rates, additional explanatory variables included aggregate wild bee visitation to male-fertile and male-sterile flowers, honey bee visitation, and an interaction term between wild bee visitation and honey bee visitation. Site nested within pair was included as a random effect in both models. All continuous variables were scaled (centered and divided by their standard deviation). We checked all variables for collinearity, and no collinear variables were included in any model. For example, sunflower head size was correlated with variety. However, varieties were specific to sunflower company, so only sunflower company was retained in the model. A minimal sketch of these model structures, using hypothetical variable names, is given at the end of this section.

Measuring the levels of ecosystem services derived from field-edge habitat management in a variety of contexts is critical to demonstrating their efficacy and flexibility. If services are highly variable over time or from site to site, costs may outweigh the benefits and limit the adoption of diversification practices. Although other studies have found that field-edge diversification increases pollinator populations both in edges and fields and enhances pollination services to crops in adjacent fields, we did not detect any differences in rates of seed set in sunflower fields adjacent to hedgerow or control edges.

Wild bee richness and an interaction between wild bee visitation and managed honey bee visitation, however, positively impacted seed set; yet these factors were not influenced by hedgerow presence. The proportion of pollinator habitat in the surrounding landscape did not influence the bee community visiting sunflower, despite a large body of evidence supporting strong positive landscape effects. We did find higher numbers of sunflower specialist bees in hedgerows than in control sites. Based on these findings, we conclude that sunflower is not a good candidate crop for field-edge enhancements, at least in our study region, although field-edge enhancements exhibit potential for supporting populations of sunflower-pollinating bees. We detected distinct differences in community composition of wild bees present in edges versus fields. This difference was likely driven by the fact that the dominant bee species found within fields, sunflower specialists, were either rare visitors to or absent from both hedgerow and control edge habitats. We only sampled each site once; increased sampling could therefore lead to more convergence or divergence between bee communities in these habitats. There can be significant overlap between species found in MFC fields and adjacent hedgerows; however, species composition in hedgerows has also been shown to more closely resemble bee communities in forest habitat than those in adjacent crop fields. One factor likely driving the differences in species composition in our study region is the absence of sunflower planted within hedgerows, due to concerns about genetic contamination of sunflower crop varieties. Because female sunflower specialists collect only sunflower pollen to provision their nests, they may not be attracted to the resources in hedgerows during the sunflower bloom period, instead being drawn into fields. Nevertheless, assessment of the independent dataset indicated that hedgerows provide important floral resources to sunflower specialist bees, especially males. Male sunflower specialists have been observed investigating honey bees as potential mates, which increases honey bee movement from male-fertile to male-sterile sunflowers and increases their pollination efficacy. Male bees, therefore, likely contribute to the interactive effect between wild bee richness and honey bees on rates of seed set. We found a slight positive effect of wild bee species richness on seed set rates, indicating that a higher number of bee species benefits pollination function in sunflower. Functional complementarity between species can enhance fruit and seed production in a variety of crops. Bee foraging behavior and bee body size can influence within-inflorescence foraging, leading to more complete pollination of a single flower. Bee foraging activity can also be affected by preferences for particular weather conditions, temperatures, or floral phenology, leading to temporal complementarity. In almonds, wild bee presence increases the likelihood that honey bees will move between different rows, which leads to higher pollen tube initiation and subsequent fruit set. Both niche complementarity and interspecific interactions likely underlie the positive relationship we detected between richness and seed set. In agreement with past findings, we detected an interactive effect between wild bee and honey bee visitation on sunflower seed set.
We did not, however, detect any main effects of wild bee and honey bee visitation, despite strong evidence that wild bees increase seed set regardless of honey bee abundance. In order to evaluate the direct contribution of wild bees, other studies have estimated the contributions of wild bee and honey bee visitation to seed set separately.
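As a rough illustration of the statistical models described in the methods above, the sketch below shows how the main analyses might be structured in R with lme4 and vegan, the packages named in the text. The data frames, variable names, and exact model formulas are hypothetical; this is not the authors' code.

    # Sketch only: model structures under assumed variable names
    library(lme4)
    library(vegan)

    # Wild bee species richness: Poisson GLMM with a hedgerow x field-location
    # interaction and site nested within pair as a random effect
    m_rich <- glmer(richness ~ hedgerow * location + prop_habitat + (1 | pair/site),
                    family = poisson, data = bees)

    # Wild bee abundance: negative binomial GLMM with the same structure
    m_abund <- glmer.nb(abundance ~ hedgerow * location + prop_habitat + (1 | pair/site),
                        data = bees)

    # perMANOVA comparing bee communities in control edges, hedgerows, and fields,
    # based on Chao dissimilarities of a site-by-species matrix `comm`
    d_chao <- vegdist(comm, method = "chao")
    perm   <- adonis2(d_chao ~ habitat_type, data = comm_meta)

    # Seed set: negative binomial GLMM with a wild bee x honey bee visitation interaction
    m_seed <- glmer.nb(seed_set ~ hedgerow + wildbee_visits * honeybee_visits +
                         distance + company + (1 | pair/site),
                       data = seeds)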

It is thus possible that other unknown chemical traits might also affect larval performance

The physical properties of different cultivars did not seem to affect the fly’s oviposition. The percentage of eggs that developed to adults decreased with increasing egg density per gram of fruit, probably due to intraspecific competition, and this was further confirmed by manipulating the egg density and using the same ‘Bing’ cherry cultivar as the tested host. Females preferred larger fruit for oviposition, which is consistent with density-dependent survival, as large fruit support higher numbers of fly larvae per fruit. It is well known that many fruit flies employ a variety of fruit characters to assess host quality and tend to be more attracted to larger fruits. Female D. suzukii appears to be able to assess host quality based on fruit size, and this behavior would likely increase foraging efficiency per unit time. Though we recovered very low numbers of D. suzukii from damaged citrus fruits, our laboratory study showed the fly can oviposit into and develop from freshly damaged or rotting navel oranges. Kaçar et al. showed that D. suzukii overwinter in citrus, surviving 3–4 months when fresh oranges were provided as adult food or ovipositional medium, and field-emerged adults from soil-buried pupae could produce and oviposit viable eggs on halved mandarin fruit. Thus, citrus fruit likely play an important role as reservoirs in sustaining the fly populations during San Joaquin Valley winter seasons, and in the spring those populations may migrate into early-season crops, such as cherries. We did not observe grape infestation in our field collections, and our laboratory trials showed a low survival rate of D. suzukii offspring on grapes when compared to other fruits.

The oviposition susceptibility and offspring survival could vary among varieties or cultivars due to variations in skin hardness and chemical properties. For example, Ioriatti et al. demonstrated that oviposition increased consistently as the skin hardness of the grape decreased. Chemical properties, such as sugar content and acidity levels, may play a role in host susceptibility. In the current study, we found that although table grapes had a tougher skin than the raisin or wine grape cultivars tested, females were able to lay eggs into all three types of grapes, often through the fruit surface or near the petiole. The sugar levels of all tested grapes were either equal to or considerably higher than those of the other fruits tested. We also found that tartaric acid concentration negatively affected the fly’s developmental performance. Still, about 20% of eggs successfully developed to adults in the diet mixed with the highest tartaric acid concentration, whereas only 4.5% of eggs developed from the wine grape cultivar tested. Overall, our results are consistent with other reported studies that grapes are not good reproductive hosts for D. suzukii.

California’s San Joaquin Valley is one of the world’s most important fruit production regions, with a diverse agricultural landscape that can consist of a mosaic of cultivated and unmanaged host fruit crops. Such diverse landscapes result in the inevitable presence of D. suzukii populations that represent a difficult challenge for the management of this polyphagous pest.

We showed that only the early-season fruits, such as cherries, seem to be at greatest risk from D. suzukii. Many other later-season fruits are not as vulnerable to this pest, because their intact skin reduces oviposition, they ripen during a period of low D. suzukii abundance, or their flesh has chemical attributes that retard survival. However, some of these alternative hosts, such as citrus and damaged, unharvested stone fruit, may act as shelters for overwintering populations and provide sources for early populations moving into the more susceptible crops. Consequently, area-wide management strategies may need to consider fruit sanitation to lessen overwintering populations, suppression of fall and winter populations by releasing natural enemies, and reduction of pest pressure in susceptible crops through ‘border sprays’ and/or ‘mass trapping’ to kill adults before they move into the vulnerable crop. Alternative and sustainable area-wide management strategies such as biological control are highly desirable to naturally regulate the fly population, especially in uncultivated habitats. An understanding of the temporal and spatial dynamics of the fly populations would aid in optimally timing future releases of biological control agents to reduce source populations in the agricultural landscape.

Previous studies on natural competence in X. fastidiosa were based on a few strains from a single subspecies, although recombination among strains of other subspecies has been described. On testing natural competence in 13 different strains, natural competence was detected almost ubiquitously. The frequency of recombination varied among strains, even for a single genomic region, as in other naturally competent bacteria. Flanking-region DNA similarity of the strains was not correlated with the recombination frequency, but most of the strains within a subspecies had identical flanking regions.

A clearer understanding of the relationship between recombination rate and the homology of the recombining DNA could have been obtained by using donor DNA with different levels of similarity to the recipient strain at a given recombination region; however, this was not done in this study. Although differences in recombination frequency, especially between strains of different subspecies, could be due to differences in homology between donor and recipient DNA sequences or to differences in growth rates, which showed a positive correlation with recombination frequency, these parameters did not explain the non-competency of two strains that had average growth and sequence homology similar to the competent strains. Also, since growth of the strains was measured by OD and strains appeared to differ in the rate of cell-to-cell attachment, growth values could have been biased, especially for the strains that showed high rates of precipitation. This was not further investigated, as it was beyond the scope of this study. On further testing of other biological traits, twitching motility was significantly correlated with recombination frequency. Strain WM1-1 had the highest recombination frequency and showed the highest motility among strains, while the two non-competent strains were nonmotile. A positive correlation between recombination frequency and twitching motility was also suggested in our previous study using different media components. Since components of type IV pili are involved in both natural competence and twitching motility in several naturally competent gram-negative bacteria, including X. fastidiosa, the activity of type IV pili could govern both of these phenomena. On further analysis of competence and pili genes, defective pili genes were detected in the noncompetent strains. One of the defective proteins detected was PilQ, a member of the secretin family that forms the secretin pore of the outer membrane and is involved in type IV pili biogenesis and in importing extracellular DNA into the periplasmic space. Previous studies in X. fastidiosa have demonstrated that pilQ mutants are nonmotile and noncompetent. Hence, we predict that the insertion in the pilQ coding region is responsible for the lack of twitching and natural competence, as BBI64 is unable to secrete the type IV pili. The lack of type IV pili was confirmed by TEM imaging. Motility has been described as a major virulence trait for X. fastidiosa. BBI64 has no motility and WM1-1 has the highest motility in this study. Consistent with the critical role of twitching in virulence, BBI64 had reduced virulence while WM1-1 was highly virulent. A further observation supporting the correlation between twitching and natural competence was the fact that the Fetzer strain showed recombination in this study, while a mutant in the polygalacturonase gene pglA of this strain did not. On closer examination, Fetzer is motile while the pglA mutant is not. Sequence data showed that the pglA mutant had an insertion in pilM, a type IV pili biogenesis gene that was shown to be involved in twitching motility of Acidovorax avenae in a previous study, most probably causing the lack of movement in this strain. Additional factors could be involved in causing differences in natural competence of X. fastidiosa strains.
X. fastidiosa genomes contain high levels of phage and phage-like regions, and natural competence could be a mechanism that helps cells eliminate newly integrated regions of this kind by recombining homologous DNA lacking the phage sequences, as suggested by a recent study. Other studies have reported restriction-modification systems limiting transformation frequency.

In this study, although all donor plasmids were extracted from an Escherichia coli strain expressing X. fastidiosa DNA methyltransferase, it is possible that different strains, especially from different genetic backgrounds, possess different forms of R-M systems, which could lead to differences in the amount of DNA available for recombination and hence to differences in recombination frequency. In this regard, a previous study reported the inability of a plasmid isolated from a citrus-infecting strain to transform a grape strain, suggesting the existence of specific recognition mechanisms to differentiate DNA from self or foreign sources. Sequence analysis and annotation of the X. fastidiosa Temecula1 genome predict at least four different types of R-M systems. Future studies focused on these specific topics could explain the differences in recombination frequencies observed among X. fastidiosa strains.

Differences in recombination frequency based on genomic position were previously reported in Ralstonia solanacearum, with positions containing recombination hot spots showing the highest frequency. In this study, a higher recombination frequency was observed for pKLN61, a plasmid that recombines in the region of the rpfF gene, which is involved in diffusible signaling factor-mediated cell-to-cell communication of X. fastidiosa, compared with pAX1.Cm, which recombines at a neutral site, and pMOPB-Km and pMSRA-Km, which recombine at regions whose functions are being characterized. Differences in the length of the homologous flanking region and of the nonhomologous insert have been found to influence recombination frequency in a previous study. However, the upstream and downstream flanking regions were longer in pAX1.Cm, pMOPB-Km, and pMSRA-Km than in pKLN61. The length of the nonhomologous insert between the homologous flanking regions was similar, and the sizes of the plasmids are also comparable. Moreover, flanking-region DNA sequence identity between the donor plasmids and recipient strains at these positions was also similar. This suggests that the difference in recombination frequency at different genomic positions is not associated with the characteristics of the plasmid regions, and it remains to be determined whether this difference holds any evolutionary significance. Natural competence has been proposed to bring adaptive changes to the recipient bacteria, such as repair of damaged DNA and generation of genetic diversity that can lead to adaptation. For the generation of adapted strains, recombining regions should come from a more successful and genetically distinct donor. This could be possible when closely related but genetically different strains of the same species coexist in a single habitat. Detection of IHR in X. fastidiosa by MLST/MLSA studies supports this possibility. In fact, these studies proposed IHR leading to plant host shifts of X. fastidiosa in citrus, mulberry, and blueberry. Moreover, mixed infections with two subspecies have been suggested by previous studies. For example, almond leaf scorch strains isolated from the same orchard were found to be genetically different and were grouped into two different subspecies, i.e., subsp. fastidiosa and multiplex. Infection of a plum tree showing leaf scorch symptoms by subsp. multiplex and subsp. pauca strains was also reported in a recent study. Results of this and previous studies demonstrate that certain plants serve as hosts for strains from multiple subspecies.
In addition, vectors of X. fastidiosa are distributed worldwide in both temperate and tropical climates and, unlike plant hosts, exhibit no specificity for pathogen genotype. In fact, a single vector species was able to transmit four subspecies of X. fastidiosa. All these observations suggest that strains belonging to different subspecies may coexist within the same habitat, providing opportunities for recombination. Although IHR was detected between subspecies when whole genomes of the donor and recipient were mixed, recombinants did not differ significantly from the parent in virulence phenotypes, suggesting that recombination did not bring about phenotypic changes. On analyzing the flanking homologous regions of recombination, regions of 0.7 to 4 kb were detected to have recombined, but the size could be greater, as up to 80 kb has been demonstrated to recombine by natural competence in R. solanacearum, with the recombinant strain showing increased virulence.

Early to mid second instar thrips show limited abdomen distention and have an overall pearlescent hue

Many hypothesize that because thrips feed with a punch-and-suck method, rather than direct chewing and mastication of leaf tissues, they do not receive toxic amounts of the Bt proteins. Alternatively, they may not possess the proper binding receptors for the Bt proteins tested to date; thus, no pore can be formed in the midgut lining and the Bt proteins are excreted. The literature indicates the latter hypothesis is more likely, based on findings from life table parameters in which development, fecundity, and adult longevity or relative abundance do not differ significantly between thrips reared on Bt-positive versus Bt-negative corn, cotton, or potato plants. The aforementioned studies were not specifically looking at Bt effects on thrips, nor were the Bti toxins tested here involved in previous studies with thrips. The combinations of proteins used in this study were, to date, unique pairings with thrips. It is indeed possible that there are no Bt endotoxins currently available that cause mortality to Thysanoptera.

The LC50 with strain GHA was 8.61 × 10^4 conidia/ml and was two orders of magnitude lower than for the other five B. bassiana strains tested. GHA also gave the only statistically valid dose-response values in probit analysis, and provided the only data that fit the probit model. The other B. bassiana strains failed to provide a linear relationship based on their p-values, i.e., the probit regression lines were of poor quality, except for GHA. Therefore, data were evaluated based on line slopes, as is commonly seen in the scientific literature with other biological agents where data lines are not straight and do not fit the model.

Strains 1741ss, SFBb1, S44ss, and NI1ss showed a flat dose-response across concentrations, did not fit the model, and had LC50 values ranging from 2.7 × 10^6 to 9.6 × 10^8 conidia/ml. Assessment of Beauveria strain while adjusting for concentration, in both log-rank and Wilcoxon tests, showed that strain and concentration had highly significant effects on the infection rate. Multiple comparisons for the log-rank test, assessing the strain effect while adjusting for the concentration differences, showed that the infection rates of strains 1741ss, S44ss, 3769ss, and NI1ss were not distinct from one another. Strains GHA and SFBb1 had infection rates different from each other as well; GHA had the fastest infection rate and SFBb1 showed the slowest kill rate. The Survival Distribution Function analysis coupled with the probit analysis clearly shows that GHA would be the best strain choice for citrus thrips control.

Results with avocado thrips: the LC50 for strain GHA was 2.2 × 10^6 conidia/ml and was similar to that obtained with the other five B. bassiana strains tested. Again, because a strong linear response was not observed, the performance of the strains was rated based upon the LC50 and the relative linearity of the response. Based on overlap of confidence intervals, there were no significant differences between any of the strain LC50 or LC95 values. Assessment of Beauveria strains while adjusting for concentration, using both log-rank and Wilcoxon analyses, showed that strain did not have an effect on the infection rate. The multiple comparisons for the log-rank test, assessing the strain effect while adjusting for the concentration differences, showed that infection rates for all five strains were not distinct from one another. The Survival Distribution Function analysis coupled with probit analysis indicated there was no single best strain to select for avocado thrips management.
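As a rough illustration of the dose-response and survival analyses reported above, the sketch below shows how LC50/LC95 values and a concentration-adjusted log-rank comparison could be obtained in R. The data objects and column names are hypothetical, not the study's files.

    # Sketch only: probit dose-response and log-rank survival comparison
    library(MASS)       # dose.p() for LC estimates from a probit fit
    library(survival)   # survdiff() for log-rank / Wilcoxon-type tests

    # Probit regression of mortality on log10(concentration) for one strain
    fit <- glm(cbind(dead, alive) ~ log10(conc),
               family = binomial(link = "probit"), data = bioassay)

    # LC50 and LC95 on the log10 scale, back-transformed to conidia/ml
    lc <- dose.p(fit, p = c(0.50, 0.95))
    10^as.numeric(lc)

    # Log-rank test of time-to-death curves among strains, stratified by
    # concentration to adjust for dose; rho = 1 gives a Wilcoxon-type (Peto-Peto) test
    survdiff(Surv(day, status) ~ strain + strata(conc), data = thrips_surv)
    survdiff(Surv(day, status) ~ strain + strata(conc), data = thrips_surv, rho = 1)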

Citrus thrips were more susceptible to Beauveria than avocado thrips; citrus thrips LC values were much lower for the most active strain, GHA, indicating that significantly lower dosages of strain GHA were required to infect and kill citrus thrips compared with avocado thrips. The overall survival analysis results showed a pattern similar to the probit analysis: GHA had the fastest infection rate and SFBb1 had the slowest. Infection rates for the other strains (1741ss, S44ss, 3769ss, and NI1ss) fell between the rates for GHA and SFBb1 and were not separable from one another. This low-dosage association, together with the fastest infection rate, suggests GHA is the best candidate for field-testing among the strains examined. Except for the worst-performing strain, SFBb1, the performance of all of the strains against avocado thrips was similar. The LC50 value for citrus thrips was 8.6 × 10^4 conidia/ml, which may suggest economic feasibility in some cases, e.g., for use on organic products. The maximum recommended field application rate is 5.0 × 10^12 conidia/ha. Therefore, 8.6 × 10^11 conidia/ha of GHA is needed based on the estimated LC50 of 86 conidia/µl, and this amount is reasonable to obtain in a field setting. Conducting the same analysis for avocado thrips control using GHA, with an LC50 of 2.2 × 10^6 conidia/ml, 2.2 × 10^13 conidia/ha would be required. This is 4.4 times greater than the standard field use rate of GHA (this arithmetic is worked through in the sketch below). We hypothesize that differences in susceptibility between citrus and avocado thrips may be due to the different habitats in which they evolved. Citrus thrips are adapted to hot and dry environments and thus are less likely to have evolved natural tolerance to fungi, whereas avocado thrips thrive in a very wet environment where exposure to fungi is more likely. The differences may be due to different habitat adaptations and the different origins of the two thrips species. We find it interesting that two congenerics have such widely different habitat preferences, and this may explain differences in fungal tolerance.
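A worked version of the field-rate arithmetic above follows. Converting an LC50 in conidia/ml to a per-hectare rate implies a spray volume; the value of 1 × 10^7 ml/ha (roughly 10,000 L/ha) used here is inferred from the reported numbers rather than stated in the text, so it should be treated as an assumption.

    # Worked arithmetic; the spray volume is an assumption inferred from the text
    spray_volume_ml_per_ha <- 1e7      # ~10,000 L/ha (assumed)

    lc50_citrus    <- 8.6e4            # conidia/ml, citrus thrips (GHA)
    lc50_avocado   <- 2.2e6            # conidia/ml, avocado thrips (GHA)
    max_label_rate <- 5.0e12           # conidia/ha, maximum recommended field rate

    rate_citrus  <- lc50_citrus  * spray_volume_ml_per_ha   # 8.6e11 conidia/ha
    rate_avocado <- lc50_avocado * spray_volume_ml_per_ha   # 2.2e13 conidia/ha

    rate_citrus  / max_label_rate   # ~0.17 of the maximum label rate
    rate_avocado / max_label_rate   # ~4.4 times the maximum label rate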

Differences were seen when citrus thrips and avocado thrips were placed on leaves of their associated host plants and then held separately in sealed zip-lock bags: the moisture that condensed in the bags was lethal to citrus thrips but not to avocado thrips. Thus, it is possible that avocado thrips, due to their adaptation to living in cool and wet climates, have a higher tolerance to fungal pathogens, as they may encounter them more frequently than citrus thrips, which prefer a hotter and drier climate. Many researchers have investigated alternatives to traditional insecticides, such as biopesticides, i.e., natural or organismal methods of controlling pest populations. The utilization of entomopathogens against thrips is not a new concept; entomopathogenic fungi such as Metarhizium anisopliae Sorokin, Neozygites parvispora Remaudière & Keller, Verticillium lecanii Viegas, and Paecilomyces fumosoroseus Brown & Smith have also been used in laboratory and greenhouse trials with much success, whereas field trials have shown limited success. However, various strains of B. bassiana have been shown to effectively control western flower thrips on greenhouse ornamentals and peppers, and several reports indicated that F. occidentalis, Thrips palmi Karny, and T. tabaci Lindeman were successfully controlled under field or laboratory conditions. In conclusion, both citrus and avocado thrips can be infected by B. bassiana, but high doses may be required, especially for avocado thrips. These high doses are difficult to obtain outside the laboratory, and application of such doses would be costly. We believe B. bassiana is not a sufficiently effective alternative to traditional insecticides to warrant further study with avocado thrips, particularly because the commercially available strain GHA gave poor control of avocado thrips, but it may have potential against citrus thrips in an integrated pest management program. Further studies are warranted to determine if GHA could be used in field control of citrus thrips.

Citrus thrips, Scirtothrips citri, has been recognized as a major pest of California citrus since the 1890s and is also known to scar mango fruits. Historically, highbush varieties of blueberries could only be grown in regions too cold for citrus production. However, breeding efforts to cross the northern highbush blueberries with several other Vaccinium species led to the development of heat-tolerant highbush blueberry varieties. This has enabled the establishment of a blueberry industry in the San Joaquin Valley, a region where both citrus and citrus thrips flourish. The known host range of citrus thrips has broadened, and in recent years they have become a significant pest of blueberries planted in the San Joaquin Valley of California. Citrus thrips feed on blueberry foliage during the middle and late portions of the season, causing distorted, discolored, and stunted flush growth and poor development of the fruiting wood required to obtain the subsequent crop.

Repeated applications of the few effective and registered pesticides to reduce thrips populations pose a concern for pesticide resistance management, and this issue is relevant not only to the blueberry industry but also to the 108,665 ha of California citrus, which has experienced repeated documented cases of pesticide resistance in citrus thrips populations. Currently, there are no integrated pest management plans available for control of citrus thrips in blueberry, probably due to the recent nature of this crop-pest association. With a limited number of pesticides available for thrips control and the frequency of insecticide resistance shown by thrips, populations should be monitored carefully, treatments limited to populations of economic concern, and applications timed optimally. Appropriate cultural practices and conservation of natural enemies should be practiced in concert with the use of pesticides only on an as-needed basis. Understanding citrus thrips’ life history in the blueberry system, to determine where and whether susceptible stages could be exploited, is one of the first steps in the development of alternatives to traditional insecticides. In citrus, citrus thrips pupation occurs on the tree in cracks and crevices; however, the majority of thrips drop as late second instars from trees to pupate in the upper layer of leaf litter below trees and move upward onto the plant after adult eclosion. Propupae and pupae are rarely seen, move only if disturbed, and do not feed. Pupation in the upper layers of the soil surface may create the ideal interface for control using the entomopathogenic fungus Beauveria bassiana Vuillemin, due to this vertical movement of the citrus thrips. However, blueberry plants have a much different plant architecture than citrus trees, and citrus thrips pupation behavior has yet to be studied on blueberries. In the U.S., pressure is increasing to move away from broad-spectrum insecticides and focus on alternative methods of control. Earlier work with B. bassiana determined that the commercially available strain, GHA, was the most effective of six strains tested in laboratory trials against citrus thrips. The goal of this study was to determine if this strain of B. bassiana could be utilized effectively against citrus thrips in California blueberry production. To achieve this objective, several factors of importance to fungal efficacy were evaluated before commencement of our field trial: 1) location of citrus thrips pupation in commercial blueberry plantings, 2) field sampling locations and methods, 3) fungal formulation and timing of application, and 4) density of product used and method of thrips infection. We then conducted a field trial evaluating the potential utility of the GHA strain of Beauveria bassiana in commercial blueberries for citrus thrips management as a possible alternative to the use of traditional insecticides.

Citrus thrips were collected in Riverside County, Riverside, CA, from wild laurel sumac, Malosma laurina, a suspected major host for this species before citrus was introduced into the state. Thrips were collected via aspiration the morning of the bioassay and held in 15-dram plastic aspiration vials with a copper mesh screened lid. A small sumac leaf, just large enough to fit in the vial, was included to allow the insects to settle on the leaf and feed.
In experiments where late second instar thrips were needed, i.e., thrips that were close to pupation, selected thrips were large and had darkened in color. Their abdomens appeared plump, and the overall color of the thrips was a deep yellow with almost no opalescence. When adult females were used, selected females were of unknown age. Because of the complex arrangement and number of blueberry canes arising from the rhizome of commercial blueberry plants, we first evaluated movement of second instar citrus thrips on potted single-cane blueberry plants in the laboratory. Known numbers of late second instar citrus thrips were released onto the leaves of potted blueberry plants in the lab.

The most difficult dimensions of that training are such basics as what is or is not a fruit or a vegetable

Past CDPS reports have shown that lower acculturation is associated with higher levels of intake of fruits and vegetables among Latinos. This raises the question of whether seasonal variation affects high- and low-acculturated Latinos differently. The modified 24-hour recall method used in the CDPS requires considerable effort and resources to implement correctly. A significant challenge exists in training “generic” commercial interviewers who generally do not have a nutrition background. Interviewers play a critical role in assisting respondents in assessing the number of servings correctly for different reported items. They also must aid respondents in deconstructing mixed dishes to determine the number of servings of the different components. Not all interviewers are able to master this equally, and over the course of weeks of data collection there is degradation of knowledge due to a combination of the rare occurrence of some food items and forgetfulness. As a result, significant effort is required for quality control monitoring of interviewers and periodic refresher training to make the CDPS method work. In implementing the CDPS method, the length of time used in one interview is dependent on 1) what the respondent understands, 2) how extensive or varied the respondent’s fruit and vegetable consumption is, and 3) the skill of the interviewer. A good interviewer could normally accomplish this task in five to seven minutes. Time, however, is an important factor in whether or not general health behavior surveys can afford to include multiple complex questions on dietary intake, especially in determining the number of servings of fruit and vegetables consumed.

A shorter form of the CDPS method that is less involved and less complex with regard to interviewer training would be a convenient substitute if it worked as well in producing population estimates. Such an approach, derived from the CDPS method and asking only three questions, is compared to the CDPS method as part of this study. The possible substitute method is called the “Short Form 3” or “SF3,” since it uses three short and direct questions . The study population is California adults, ages 18 years and older. Among these persons, those who self-identify as White, Latino, or African American and those who report that their total annual household income from all sources is $25,000 or less are oversampled. A dual frame design is used to locate these persons. The method of data collection is the same used in the CDPS, a telephone survey using computer-assisted telephone interviewing techniques. The main frame consists of all residential telephone numbers across the state. The second or supplemental frame consists of residential telephone numbers located in geographic areas with concentrations of Latino, African-American, and low-income households. Interviews completed in this supplemental frame are classified as “targeted,” since they are designed to maximize the chance of reaching the groups of interest. Sample sizes were designed to deliver an estimated sampling precision with a statistical power of 0.80 to discriminate with 95% confidence between groups if four-month seasons were defined and used . The combined four-month period was originally chosen a priori for design purposes. A single month is the minimum time frame used, since it was felt that the data might reveal seasons different than the a priori seasons described in the original proposal. The “seasons” were originally defined as summer , winter , and spring . To provide maximum flexibility in examining seasonal variation, independent samples were selected for each month of the year.

The samples were further stratified across two years for two reasons: 1) to smooth out any inter-year variation and 2) to spread data collection costs within the annual funding cap. Since analysis would be at the month level, the calculated sample size for any given month was divided between the month in Year 1 and the same month in Year 2. This is a rectangular sample design in that the same number of interviews would be completed for each of the three race/ethnic groups per month, for both the general population and the low-income population. The original calculation based on the hypothetical four-month season was for 660 completed interviews per race/ethnic group per season. This is 165 interviews per month, with each month divided over two years so that 83 interviews are collected for each of the four months in Year 1 and the same number per month in Year 2. This calculated number allows for discrimination down to 0.55 servings, using a standard deviation of 3.5 servings from the 1999 CDPS for all groups, for the hypothetical four-month season. Examining groups by combining all 12 months would allow still greater discrimination. Due to the higher cost per case in obtaining low-income race/ethnic-specific interviews, the goal for all three low-income groups was set to be 400 interviews per hypothetical four-month season. At the same confidence level and power, 400 cases discriminate differences greater than 0.69 servings. For 12 months of data combined, an n of 1,200 cases will discriminate differences greater than 0.4 servings. This is a monthly sample size of 51 cases per month per year for each of the three low-income race/ethnic groups. Most CDPS data collected in the past cover the above so-called summer months of July through October. The general population survey has never been collected in the December through May period, with the exception of the last two days in May during the very first CDPS in 1989. Some African-American over-sample cases have been collected as late as November.
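The detectable differences quoted above can be reproduced, at least approximately, with a standard two-sample power calculation. The sketch below assumes a two-sided comparison at alpha = 0.05 with power = 0.80 and the 1999 CDPS standard deviation of 3.5 servings; the exact formula used in the original design is not given in the text.

    # Sketch: detectable difference in mean servings for the stated group sizes
    sd_servings <- 3.5
    for (n in c(660, 400, 1200)) {
      res <- power.t.test(n = n, sd = sd_servings, power = 0.80,
                          sig.level = 0.05, type = "two.sample")
      cat(sprintf("n = %4d per group -> detectable difference ~ %.2f servings\n",
                  n, res$delta))
    }
    # Approximate output: 0.55, 0.69, and 0.40 servings, matching the text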

The sample provides an identical snapshot of the California population and the sub-groups of interest within each month. It is important to note that an individual case may be used to satisfy different sample size objectives. For example, a low-income African American case located in the general population random-digit-dial survey is used in a) the general population estimate, b) the low-income population estimate, and c) the African-American estimate. Individuals located through the targeted over-samples are only used for their group member estimates. The data collection instrument used in this study is the fruit and vegetable intake module of the CDPS. Also included are the five language-based acculturation questions asked of all Latino respondents. Descriptive, self-reported demographic data are collected to define gender, education, race/ethnicity, and income. The SF3 questions are implemented in half of the study population by random selection. Using the short form in only half of the sample allows for measuring and adjusting for any potential testing effect of placing these questions ahead of the CDPS module. Placing it after the module, however, would be counterproductive, since the CDPS module walks each respondent through each meal on the previous day and details specific servings consumed. This would greatly influence the SF3 response toward a higher level of agreement with the module and result in overstating the agreement of the SF3 estimates. Data collection was conducted using CATI methods. Forty interviewers were trained, ten of whom were bilingual Spanish-English. Each month’s sample was administered so that 80% of the cases were collected during the first three weeks of the month and the balance of the cases, including hard-to-reach cases and any remaining refusal conversions, occurred in the final week of the month. The objective was to have the interviews spread as evenly as possible over the entire month and all cases in the sample for a given month completed within that month. If any month fell short of the target number of cases in Year 1, the difference was made up in the same month in Year 2. To accomplish this, sample management in Year 2 was very exacting. Each month’s cases were the result of an independently drawn, random sample of California households. Respondents were then randomly selected from among all eligible respondents in the household. Thus, this is a two-stage random sample design. Only the selected respondent would or could be interviewed. Interviews were conducted in either English or Spanish at the preference of the respondent. At least one subsequent refusal-conversion attempt was made in households that refused to participate. At least nine contact attempts were made on each selected telephone number. Interviews were completed between November 1, 2000 and October 31, 2002. The average interview took 9.4 minutes to complete. This is just under the 10 minutes originally planned and budgeted. Of the 8,614 completed interviews, 1,249 were completed in Spanish. The overall response rate* was 26.5% for the general population survey, and the refusal rate* was 5.6%.

These rates were computed by the data collection vendor based on their available disposition coding scheme. Inadequate tracking of disposition codes by the vendor for the targeted and low-income samples made response rate calculations unreliable for these groups. This makes all disposition codes suspect and may account for the lower than expected general population response rate. The data file of final cases was cleaned by the vendor, and the fruit and vegetable codes were added to the recorded fruit and vegetables in each data record. The data collection vendor’s final report is included in Appendix III. As done in the CDPS, interviewers entered the actual fruit, vegetables, salad ingredients, and the fruit and/or vegetable mixed dishes reported by respondents. These standardized entries were post-processed by the data collection vendor using programming that read and converted these alpha entries to the numeric codes used by the CDPS. These codes are based on, although not identical to, USDA food codes. These codes had been updated during the course of this study using the 2001 CDPS data. This work was done by one of this study’s research assistants, who was also a registered dietitian, and was reviewed by the principal investigator in collaboration with Public Health Institute staff who work with the CDPS for the California Department of Health Services. The number of servings of fruit and vegetables was recorded by interviewers as whole numbers. Respondents reported a serving size as what is “usual” for them. All reports of half a serving or greater were rounded up to the next whole number. For all amounts greater than one serving where the respondent reported more than the whole number, but less than an additional half serving, the number was rounded downward. The exception is when the amount reported was less than one-half serving. In this instance, the interviewer entered a zero. This is particularly true for such items as lettuce and tomato on a sandwich or on a taco. This is consistent with the CDPS. This study differs from the CDPS in one respect. The analysis of the CDPS data recodes the zero entries as quarter servings, while this study did not. It is the opinion of both authors that the relative relationship among the groups studied and among months sampled remains unchanged when not recoding the zero entries; thus the data are still valid for purposes of this study’s objectives. An examination of the number of reported servings revealed, as expected, cases with unusually high numbers of servings. After cases with likely recording errors are dropped, it is generally accepted to top-code outlier cases. Consuming a high number of servings of fruit and vegetables may not be unusual, particularly for vegetarians. However, to minimize the impact of these outlier cases on the computed mean values and variance calculations, it is typical to top-code these cases to a determined value. Initially, we explored computing outlier cutoff values using the same method employed by diet researchers at the National Cancer Institute in their work with National Health Interview Survey data. The method involves identifying the first and third quartile in the study’s data distribution. This is done independently for fruit and for vegetables after transforming the variable by using the square root of the number of servings. The maximum value for the total number of servings of fruit and vegetables combined is computed to be 20.43 servings, rounded down to 20 servings.
However, the CDPS has, with over a dozen years of experience, top-coded the number of servings of fruit and the number of servings of vegetables each at 10.0 servings. Following that convention, and thus staying consistent with the CDPS, all outlier cases for servings of fruit and of vegetables in this study were top-coded at 10 servings. No case can thus exceed 20 servings of fruit and vegetables combined, and this is, coincidentally, the same number of servings computed earlier following the NCI method.
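A minimal sketch of the square-root-scale outlier rule and the top-coding convention described above is given below. The interquartile-range multiplier used by the NCI researchers is not stated in the text, so the common Q3 + 2 × IQR convention is assumed here purely for illustration; the data frame and its columns are hypothetical.

    # Sketch only: outlier cutoff on the square-root scale, then CDPS-style top-coding
    outlier_cutoff <- function(servings) {
      s   <- sqrt(servings)                          # transform to the square-root scale
      q   <- quantile(s, c(0.25, 0.75), na.rm = TRUE)
      iqr <- q[2] - q[1]
      (q[2] + 2 * iqr)^2                             # back-transform the cutoff to servings
    }

    cut_fruit <- outlier_cutoff(diet$fruit_servings)   # `diet` is a hypothetical data frame
    cut_veg   <- outlier_cutoff(diet$veg_servings)

    # Following the CDPS convention used in this study, fruit and vegetable
    # servings were each top-coded at 10:
    diet$fruit_servings <- pmin(diet$fruit_servings, 10)
    diet$veg_servings   <- pmin(diet$veg_servings, 10)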

Consumers were instructed to sip bottled water between samples to cleanse their palates

Sixty berries per replication were then wrapped together in two layers of cheesecloth and squeezed with a hand press to obtain a composite juice sample. The juice was used to determine soluble solids concentration with a temperature-compensated handheld refractometer, expressed as a percentage. Twenty-one hundredths of an ounce of the same juice sample was used to determine titratable acidity with an automatic titrator, reported as a percentage of citric acid. Some samples that had a high viscosity were centrifuged with a super-speed centrifuge at 15,000 rpm for 5 minutes in order to obtain liquid juice for soluble solids concentration and titratable acidity measurements. The ratio of soluble solids concentration to titratable acidity was calculated.

Antioxidant capacity was measured in the 2005 and 2007 seasons. Eighteen hundredths of an ounce of berries per replication was used to determine the level of antioxidants by the DPPH free-radical method. Samples were extracted in methanol to assure a good phenolic representation, homogenized using a Polytron, and centrifuged for 25 minutes. The supernatant was analyzed against the standard, Trolox, a water-soluble vitamin E analogue, and reported in micromoles Trolox equivalents per gram of fresh tissue.

An in-store consumer test was conducted on ‘Jewel’, ‘O’Neal’ and ‘Star’ blueberry cultivars in 2006, and on the six blueberry cultivars studied in 2007, using methods described previously. The fruit samples were held for 2 days after harvest at 32°F prior to tasting. One hundred consumers who eat fresh blueberries, representing a diverse combination of ages, ethnic groups, and genders, were surveyed in a major supermarket in Fresno County. Each consumer was presented with a sample of each blueberry cultivar in random order at room temperature, 68°F.

A sample consisted of three fresh whole blueberries presented in a 1-ounce soufflé cup labeled with a three-digit code. At the supermarket, the samples were prepared in the produce room out of sight from the testing area. For each sample, the consumer was asked to taste it and then to indicate which statement best described how they felt about the sample on a 9-point hedonic scale. Consumer acceptance was measured as both degree of liking and percentage acceptance, which was calculated as the number of consumers liking the sample divided by the total number of consumers within that sample. In a similar manner, the percentages of consumers disliking and neither liking nor disliking the sample were calculated.

Agricultural managed aquifer recharge (Ag-MAR) is a recharge technique for groundwater replenishment in which farmland is flooded during the winter using excess surface water in order to recharge the underlying aquifer. In California, for example, Ag-MAR is currently being implemented as part of the efforts to mitigate California’s chronic groundwater overdraft. Ag-MAR poses several risks for agricultural fields and groundwater that may influence its future adoption. These include crop tolerance to flooding, soil aeration, bio-geochemical transformations, long-term impacts on soil texture, leaching of pesticides and fertilizers to groundwater, and potential greenhouse gas emissions. Some of these issues have been addressed in recent studies of Ag-MAR, including soil suitability guidelines, nitrate leaching to groundwater, crop suitability, and soil aeration. In the current study, we focused solely on the question of “how long can water be applied for Ag-MAR with minimal crop damage?”, while ignoring some of the above-mentioned challenges involving Ag-MAR implementation. Preferably, Ag-MAR flooding is done during fallow or dormant periods, when crop damage is potentially minimal, so that agricultural lands can serve as spreading basins for groundwater recharge. Root zone residence time (RZRT) is defined as the duration that the root zone can remain saturated during Ag-MAR without crop damage. RZRT is a crucial factor in Ag-MAR, as long periods of saturated conditions in the root zone can damage crops due to oxygen deficiency or complete depletion of oxygen, which ultimately may result in yield loss. However, flood tolerance among crops varies considerably due to biotic and abiotic conditions; therefore, only appropriate crops under specific conditions may be suitable for Ag-MAR application.

For example, Dokoozlian et al. found that grapevine during dormancy can be flooded for 32 days each year without yield loss. Dahlke et al. recently investigated the effect of different Ag-MAR flooding schemes on established alfalfa fields. Results suggest a minimal effect on yield when dormant alfalfa fields on highly permeable soils are subject to winter flooding. On the other hand, some crops are sensitive even to short-period flooding. Kiwi vines, for example, are highly sensitive to root anoxia, with reported yield loss and vine death due to extreme rainfall and/or shallow groundwater levels. In a study on peach trees, flood cycles of 12 h per day with 5 cm of ponding, applied for two months, resulted in branches with lower diameter and length growth, as well as smaller, lower-quality fruit, compared to the control trees. The above examples demonstrate the need for an RZRT planning tool that can estimate Ag-MAR flood duration with minimal crop damage. Usually, when Ag-MAR water application starts, aeration of the root zone will be quickly suppressed by a water layer covering the soil surface, as it prevents oxygen transport to the root zone in the gas phase. When water application ceases, re-aeration of the root zone will depend on the soil’s drainage rate, which controls the formation of connected air pores between the root zone and the atmosphere. Hence, proper estimation of the planned flood duration during Ag-MAR requires prior knowledge of both crop characteristics and soil texture. Only a few attempts to estimate RZRT during Ag-MAR have been made, as Ag-MAR is a relatively new MAR technique. O’Geen et al. used a fuzzy logic approach to rate RZRT during Ag-MAR, based on the harmonic mean of the saturated hydraulic conductivity of all soil horizons, soil drainage class, and shrink-swell properties. Their RZRT rating was combined with other factors to generate a Soil Agricultural Groundwater Banking Index (SAGBI). Flores-Lopez et al. proposed a root-zone model that includes crop type, soil properties, and recharge suitability to estimate water application, flooding duration, and the interval between water applications.

Their model was integrated with a Groundwater Recharge Assessment Tool (GRAT) to optimize Ag-MAR water application. Here, we propose a simple model to estimate the planned water application during Ag-MAR based on the following parameters: soil texture; crop saturation tolerance; effective root-zone depth; and critical water content. The concept of critical water content was proposed by several authors, as it indicates a percolation threshold at which the gas transport path is blocked by pore water, which results in gas diffusivity and permeability of practically zero. Hence, when the water content is above or below this threshold, gaseous oxygen transport into the soil is blocked or open, respectively. As opposed to the previous Ag-MAR models mentioned above, our proposed model is physically based and explicitly includes the soil water content, which is used to infer the soil aeration status. Yet, thanks to its simplicity, this model can be integrated easily into various existing Ag-MAR assessment tools such as SAGBI or GRAT. In the following, we first describe the theory of the model and the methods used to test the model’s performance. Next, we present the model predictions and compare them with observations and numerical simulations. Last, we present an example of how to calculate Ag-MAR water application duration, and we discuss the applicability of the model and its limitations.

Plant tolerance to flooding, or the duration of flooding with minimal crop damage, is a very challenging parameter to estimate. A tremendous diversity of tolerance exists, which depends on several factors: soil texture and chemistry; degree and duration of hypoxia/anoxia; soil microbe and pathogen status; vapor pressure deficit and root-zone and air temperatures; plant species, age, stage, and season of the year; and plant adaptation as a result of prior climate and soil conditions. An estimate of the flooding tolerance of common perennial crops is provided in Table 3, which is an extended version of a previous survey. Annual crops were not included in Table 3 because it was assumed that these fields usually would be fallow during winter and spring, when excess surface water is available for Ag-MAR. Waterlogging tolerance in most fruit trees is mainly determined by the rootstock and not by the scion; tolerance is higher during dormancy, and trees are more prone to damage during bud break and growth. The plant tolerance scales in Table 3 have different definitions, as some authors use plant survival as the tolerance criterion, while others consider economic damage; these differences are indicated in Table 3. We note that the data provided in Table 3 should be used with caution because most of it is based on expert opinion or experiments with seedlings, while very few waterlogging experiments have been conducted with bearing fruit trees.

The fit between the predicted and observed effective saturation ranges from poor to excellent, and generally the fit is better for the RZRT models that underwent calibration, followed by H1w and H5w. Obviously, a better fit of the predicted and observed water contents will lead to a more accurate estimation of twap.

Therefore, when possible, it is recommended to use the proposed RZRT model with site-specific hydraulic parameters. This is demonstrated in the Yolo silt loam soil, where a reasonable fit was not feasible without the use of site-specific hydraulic parameters. Note that site-specific parameters can differ considerably from the parameters obtained from the NCSS database. This is especially notable for the Ks values, which can vary by more than one order of magnitude. The reason for this discrepancy is attributed to the low spatial representation of each soil series in the NCSS database, which is based on a few soil pedons that are not always a good representation of the soil series where the field data were collected. In some cases, even when the overall effective saturation fit is poor, it is possible to estimate twap accurately, provided that the fit is good in the range of Sc. This is demonstrated in the Harkey loam for the H5w fit. Note that for all soils H1w performs better than H5w, which is not as expected, because Rosetta3 is a hierarchical PTF in which the highest hierarchy should perform better than the lower hierarchies. As noted above, this is because each of the H5w parameters in this study was based on only one soil pedon sample from a specific location, which in this study was less representative than the H1w parameters, which are based on averaging a large number of soil samples by soil texture. The fit of the effective saturation between the proposed RZRT model and the numerical model HYDRUS-1D ranges from good to excellent, and in all cases HYDRUS fits better to the RZRT model than to the observed data. This indicates that the deviations between the RZRT model predictions and the observed water content data are probably due to soil layering, soil heterogeneity, and preferential flow, which cannot be captured by simplified homogeneous one-dimensional flow models. Another explanation for the deviations between observed and modeled water contents could be an inappropriate setting of the models' boundary conditions. This mainly refers to the assumption of free drainage at the bottom boundary, because the top boundary was controlled during the experiments. The performance of the RZRT model with a hard pan layer above or below the effective root zone was compared to HYDRUS simulations with similar settings. According to our limited test, the RZRT model with the harmonic mean Ks is preferred during the infiltration period. During the drainage period, the arithmetic or the harmonic mean Ks is preferred when the hard pan layer is above or below the effective root zone, respectively. As expected, when the hard pan layer is far below the effective root zone, it has no impact on the effective root zone. The total water applied calculated with Eq. and the harmonic mean Ks is almost identical to the HYDRUS results. This demonstrates the impact of a hard pan on deep percolation, as the total amount of water applied was reduced by more than half when the hard pan layer was close to the root zone.
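The choice of profile-averaged Ks described above can be expressed as a simple decision rule. The helper below is our own construction (not code from the study); it selects the arithmetic or harmonic mean according to the flow period and the position of the hard pan relative to the effective root zone, following the preferences reported in this section.

import numpy as np

def profile_ks(ks_layers, thicknesses, period, hardpan_position=None):
    # Effective Ks of a layered profile for the RZRT model:
    #   infiltration period -> harmonic mean
    #   drainage period     -> arithmetic mean if the hard pan is above the
    #                          effective root zone, harmonic mean if below.
    ks = np.asarray(ks_layers, dtype=float)
    w = np.asarray(thicknesses, dtype=float)
    w = w / w.sum()
    harmonic = 1.0 / np.sum(w / ks)
    arithmetic = np.sum(w * ks)
    if period == "infiltration":
        return harmonic
    if period == "drainage" and hardpan_position == "above":
        return arithmetic
    return harmonic  # drainage with the hard pan below (or no hard pan)

# Example: a slowly permeable hard pan (0.5 cm/day) within a loam profile
print(profile_ks([25.0, 0.5, 25.0], [40.0, 10.0, 50.0], "drainage", "below"))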

Tree density was less than half of values reported for well-drained sites in Canada

Per-tree production, basal area, and biomass were similarly less than half of values reported for well-drained sites in Canada. ANPP was about one third of values reported for stands in Manitoba and one quarter of values reported for larch forests in Central Siberia and Scots pine forests in Finland. Differences in growth allometry between our Alaskan stands and those from northern Manitoba provide some evidence that the low productivity of the Alaskan stands may be due to moisture stress. Regional mean annual precipitation was 30% lower at our Alaskan sites than at the Manitoba sites, indicating that available soil moisture may be lower at our sites. Our Alaskan trees were significantly shorter and had less stem mass per unit increase in DBH than their Canadian relatives. In black and white spruce stands across Canada, reduced height and shoot growth has been linked to soil moisture deficits. In response to water stress, trees may grow more wood per unit height, apparently to decrease the potential for xylem embolism during periods of moisture stress. Our observation of changes in allometry and their influence on biomass also agrees with the observation that black spruce in Alaska may allocate more C below ground, where moisture appears to be more limiting. Moss biomass began to accumulate surprisingly early in succession, as indicated by the large increases in Ceratodon spp. and Polytrichum spp. over the first 4 years of succession in the 1999 dry site. Composition shifted to feather moss dominance in both the mesic and dry mature sites.
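As a worked illustration of the kind of allometric comparison involved (using hypothetical numbers, not the study's data), stem mass is commonly modelled as a power law of DBH, M = a·DBH^b, so differences in stem mass gained per unit DBH appear as different coefficients of a log-log regression:

import numpy as np

def fit_allometry(dbh_cm, stem_mass_kg):
    # Fit M = a * DBH**b by ordinary least squares in log-log space;
    # returns (a, b).
    b, log_a = np.polyfit(np.log(dbh_cm), np.log(stem_mass_kg), 1)
    return np.exp(log_a), b

# Hypothetical measurements (DBH in cm, stem mass in kg)
dbh = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
mass_moisture_limited = 0.10 * dbh**2.4
mass_well_drained = 0.12 * dbh**2.6

print(fit_allometry(dbh, mass_moisture_limited))  # ~ (0.10, 2.4)
print(fit_allometry(dbh, mass_well_drained))      # ~ (0.12, 2.6)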

Because feather mosses lack water-conducting tissues, it was surprising that their production was similar between the mesic and dry sites despite an order of magnitude difference in moss biomass pools. As a result, ANPP per unit biomass, or production efficiency, was drastically lower in the mesic site, which may indicate lower light or nutrient availability in this site, where mosses are both densely packed and beneath a closed canopy. Alternatively, it may indicate that there is more brown moss in the mesic stand. Due to cool soils and moist conditions, decomposition of senescent moss may be slower in the mesic stand than in the dry stand, resulting in more intact brown material. Our measurements of moss biomass pools in the mesic mature stand were on par with green plus brown biomass pools in a black spruce/feather moss community in Washington Creek, AK, and twice as large as estimates for a similar community in Canada where only the green biomass was sampled.

The growth of the "critical zone" paradigm has added impetus to closer investigation of soil-plant-atmosphere interactions in ecohydrology. This follows from work emphasizing the importance of vegetation in regulating the global terrestrial hydrological cycle, with transpiration being the dominant "green water" flux to the atmosphere compared to evaporation from soils and canopy interception in most environments. More locally, the role vegetation plays in partitioning precipitation into such "green water" fluxes and alternative "blue water" fluxes to groundwater and stream flow has increased interest in the feedbacks between vegetation growth and soil development in different geographical environments. The emerging consequences of climatic warming for vegetation characteristics and the implications of land use alterations add further momentum to the need to understand where plants get their water from, and how water is partitioned and recycled in soil-plant systems. Stable isotopes in soil water and plant stem water have been invaluable tools in elucidating ecohydrological interactions over the past decade. Earlier work by Ehleringer and Dawson explained the isotope content of xylem water in trees in terms of potential plant water sources. Building on that, Brooks et al. showed that the isotope characteristics of xylem water did not always correspond to bulk soil water sources, as plant xylem water was fractionated and offset relative to the global meteoric water line compared to mobile soil water, groundwater, and stream flow signatures.
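The offset from the global meteoric water line mentioned above is often quantified with the line-conditioned excess (lc-excess), sketched below; the sample delta values are hypothetical, and a local meteoric water line can be substituted for the global one (slope 8, intercept 10).

def lc_excess(d18o, d2h, slope=8.0, intercept=10.0):
    # Line-conditioned excess (permil): deviation of a sample from a meteoric
    # water line; negative values indicate evaporative fractionation.
    return d2h - slope * d18o - intercept

# Hypothetical examples: an evaporatively fractionated bulk soil water sample
# and a mobile water sample plotting on the meteoric water line
print(lc_excess(-7.0, -52.0))   # -6.0 permil, below the line
print(lc_excess(-10.0, -70.0))  #  0.0 permil, on the line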

This led to the "Two Water Worlds" hypothesis, which speculated that plant water was drawn from a "pool" of water that was "ecohydrologically separated" from the sources of groundwater recharge and stream flow. Research at some sites has found similar patterns of ecohydrologic separation and suggested it may be a ubiquitous characteristic of plant-water systems. Others have found that differences between plant water and mobile water may be limited only to drier periods, or may be less evident in some soil-vegetation systems. Direct hypothesis testing of potential processes that may explain the difference between the isotopic composition of xylem water and that of potential water sources has been advanced by detailed experiments in controlled environments, often involving the use of Bayesian mixing models, which assume all potential plant water sources have been sampled. However, as field data become increasingly available from critical zone studies, more exploratory, inferential approaches can be insightful in terms of quantifying the degree to which xylem water isotopes can or cannot be attributed to measured soil water sources. As this research field has progressed, it has become apparent that extraction of soil and plant waters for isotope analysis is beset with a number of methodological issues. Soil waters held under different tensions may have different isotopic characteristics: for example, freely moving water sampled by suction lysimeters often shows a much less marked evaporative fractionation signal than bulk soil waters dominated by less mobile storage extracted by cryogenic or equilibration methods. Such differences between extraction techniques may be exacerbated by soil characteristics, such as texture and organic content, which may in turn affect the degree to which water held under different tensions can mix. Similarly, sampling xylem and its resulting isotopic composition has been shown to be affected by methodology. It is usually assumed that methods such as cryogenic extraction isolate water held in xylem, when in fact water stored in other cells may be mobilized to "contaminate" the results.
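For orientation, the simplest (non-Bayesian) version of such a source-attribution calculation is a two-source isotope mass balance solved jointly for δ18O and δ2H, as sketched below; the source categories and delta values are hypothetical, and the Bayesian frameworks cited above additionally propagate source variability and measurement uncertainty.

import numpy as np

# Candidate end-members (d18O, d2H in permil); values are hypothetical
sources = np.array([[-6.0, -45.0],    # shallow soil water
                    [-12.0, -85.0]])  # groundwater
xylem = np.array([-9.5, -70.0])       # measured xylem water

# Solve xylem = f1*source1 + f2*source2 subject to f1 + f2 = 1
# by least squares over the three equations (d18O, d2H, sum-to-one).
A = np.vstack([sources.T, np.ones(2)])
b = np.append(xylem, 1.0)
f, *_ = np.linalg.lstsq(A, b, rcond=None)
f = np.clip(f, 0.0, None)
f = f / f.sum()   # crude enforcement of non-negativity and closure
print({"shallow soil water": round(f[0], 2), "groundwater": round(f[1], 2)})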

Interpretation of plant-soil water relationships can also be complicated by processes in plants and soils that alter isotopic compositions independently. For example, the spatio-temporal isotopic composition of soil water can change dramatically in relation to precipitation inputs, evaporative losses, internal redistribution, and phase changes between liquid and gaseous phases. Moreover, there is increasing evidence that plant physiological mechanisms may affect water cycling and the composition of xylem water. These include effects of mycorrhizal interactions in plant roots that may result in exchange and fractionation of water entering the xylem stream. Research also indicates that as flow in xylem slows, diffusion and fractionation can occur, which may involve exchange with phloem cells. Finally, there is increasing evidence that water storage and release from non-xylem cells may sustain transpiration during dry periods or early in the day, also affecting xylem composition. Thus, there is a need to understand the different timescales involved in uptake processes in the rooting zone, and the residence times and mixing of water under different vegetation covers. There is also evidence of differences in how such factors affect water movement in angiosperms and gymnosperms, as well as species-specific differences. Clearly, these methodological issues will take some time to address; in the interim there is a need for cautious interpretation of emerging data from critical zone studies in order to improve our understanding.

A striking feature of isotopic studies of soil-vegetation systems is a bias towards lower and temperate latitudes, with northern latitudes and cold environments being under-represented. Yet, northern environments present particular challenges and opportunities to further advance the growing body of knowledge about plant-soil water interactions. For example, the coupled seasonality of precipitation magnitude and vegetative water demand can be complicated by the seasonality of the precipitation phase. Cold season precipitation that accumulates as snow can replenish soil water in the spring and be available to plants months after deposition. Despite the lack of studies, these areas are experiencing some of the most rapid changes in climate and, as a result, vegetation. The effects of climatic warming on patterns of snowpack accumulation and melt can have particularly marked consequences for soil water replenishment and plant water availability, particularly at the start of the growing season. Despite the importance of northern environments, their remoteness and harsh environmental conditions result in logistical problems that constrain lengthy field studies and data collection. This study seeks to contribute to the growing body of knowledge about plant-soil water interactions by expanding the geographical representation of sites in cold northern environments. We report the findings of a coordinated project on xylem water isotopic data collection in the dominant soil-vegetation systems of five long-term experimental sites. Isotopic characteristics of soil water have previously been reported for all five sites; this used a comparative approach with, as far as possible, common sampling methods across the sites for a 12-month period. Here, we present xylem water isotopic composition data collected using common methods over the same time period, encompassing the complete growing season, and then relate the findings to soil water isotopic compositions.
The study was conducted at five long-term experimental catchments across the boreal or mountainous regions of the northern latitudes.

The catchments were part of the VeWa project funded by the European Research Council, investigating vegetation effects on water mixing and partitioning in high-latitude ecosystems. Previous inter-comparison work on this project has examined issues such as the changing seasonality of vegetation-hydrology interactions, soil water storage and mixing, water ages, and modelling the interactions between water storage, fluxes, and ages.

At each site, plants and surrounding soils were sampled concurrently for isotope analysis following a common sampling protocol. Depending on the nature of the soil cover, the maximum depth of sampling varied from -20 cm at Bruntland Burn to -70 cm at Dry Creek. Sampling took place at 5 cm intervals for Bruntland Burn, Dorset, and Krycklan, with two to five replicates for each depth. At Dry Creek, sampling was done at -10, -25, -45, and -70 cm with two to four replicates. Sampling depths at Wolf Creek varied between -2 and -40 cm with one to three replicates. Daily soil moisture data based on continuous soil moisture measurements at 10 or 15 cm soil depth were available for each soil water sampling location at Bruntland Burn, Dry Creek, Krycklan, and Wolf Creek. Only weekly manual soil moisture measurements were available for Dorset, and daily soil moisture data were derived from soil physical modelling. The volumetric soil moisture data were used to assess the hydrologic state on the sampling days. Plant samples from trees with a diameter > 30 cm were taken horizontally with increment borers at breast height. Retrieved plant xylem cores were placed directly in vials without bark and phloem. Shrub vegetation was sampled by clipping branches; these were immediately placed in vials after the bark was chipped off or left on. All vials were directly sealed with parafilm and immediately frozen until extraction was conducted at Boise State University, Boise, Idaho, USA. There were five replicates for each species and day at Bruntland Burn, Krycklan, and Dorset. At Wolf Creek, the number of replicates varied between two and five, and there were always four replicates for each sampling campaign at the Dry Creek sites. In total, 1160 xylem water samples were collected: 831 for angiosperms and 329 for gymnosperms. Dates of sampling events varied at each site, but included the end of the growing season/senescence, pre-leaf-out the following year, post-leaf-out, peak growing season, and senescence. Precipitation was sampled daily or on an event basis at Bruntland Burn and Krycklan. Daily to fortnightly precipitation sampling was conducted at Dorset, Dry Creek, and Wolf Creek. Melt water was sampled from lysimeters at Krycklan, Dorset, Dry Creek, and Wolf Creek during several snowmelt events, while snowfall seldom occurred over the study year at Bruntland Burn. Various measures were taken to prevent evaporation of collected precipitation, including paraffin oil and water locks, prior to transfer to the laboratory. The long-term groundwater signal was assessed at all sites apart from Dorset, using several sampling campaigns of springs and wells tapping the saturated zone over the last few years. There were no nearby wells from which to sample the regional groundwater at Dorset, which is found well below the surface in the granitic gneiss and amphibolite bedrock.