The same observations were made in five orchards where non-Apis bees were present

Blueberry phenolic compounds and, more broadly, phytochemicals exert regulatory effects, including a decrease in proinflammatory gene expression/production, in part through modulation of the NF-κB pathway. Modulation of the MAPK pathway by blueberry phytochemicals is less evident, with contradictory observations reported, but may also play a role. Blueberry phytochemicals decreased DNA damage in cells in vitro via the reduction of ROS production and lipid peroxidation and an increase in antioxidant enzyme activities. Despite many in vitro studies on blueberry extracts, no specific compounds have emerged as singly responsible for the regulatory effects on inflammation and oxidative stress. Virtually all studies have focused on blueberry phenolic extracts or fractions, with a large emphasis on anthocyanins. Health effects of dietary anthocyanins have been extensively reported and discussed, and berries provide an excellent vector for anthocyanin consumption. Blueberries have a complex anthocyanin profile, and both major anthocyanidin derivatives, malvidin and delphinidin, have demonstrated a reduction of inflammatory markers in different in vitro models of intestinal inflammation and endothelial dysfunction. Although it is highly likely that anthocyanins largely contribute to the health benefits provided by blueberries, as supported by the number of studies focusing on those compounds, it is doubtful that they are entirely responsible for the bio-activities. Several in vitro studies compared different fractions of blueberry phytochemicals, with reports of similar or better effects by other phenolic fractions and/or whole blueberry extract compared with anthocyanins.

These different studies highlight that the mechanisms of action of individual blueberry compounds and fractions are context and/or model specific. More studies comparing the effects of individual compounds and well-defined combinations of molecules in different systems are needed to investigate the impact of a system's environment or system-specific regulation on the bio-activity of blueberries. Although the amplitude of the effect of individual compounds appears to be widely specific to the model studied, the use of whole fractions of the fruit seems to alleviate inflammation and/or oxidative stress more consistently across models, despite not always demonstrating the strongest effects compared with specific blueberry fractions. As the health effects of polyphenols have been extensively described, more data on other phytochemicals should be gathered, as they may also exert health benefits. Other notable phytochemicals in blueberries include ascorbic acid, polysaccharides, and volatile compounds, which could contribute to inflammatory or oxidative responses of cells to stimuli. A blueberry volatile extract, high in monoterpenes, modulated the inflammatory response in LPS-induced RAW 264.7 cells through inhibition of the NF-κB pathway. Phenolic compounds, although carrying anti-inflammatory and antioxidant modulatory effects, may not be solely responsible for the health benefits of blueberries. Whether the phytochemicals act in synergy or target different molecular pathways remains to be elucidated. Although the scope of this review is limited to blueberries, the anti-inflammatory and antioxidant effects and mechanisms are likely applicable to other commonly consumed berries. Berries are generally rich in polyphenols, particularly anthocyanins, flavonols, and proanthocyanidins, but the profile of each berry species, and even of varieties within a species, differs in terms of the individual compounds present and their respective concentrations.

Gasparrini et al. reviewed in detail the anti-inflammatory effects of several berries in cellular models using LPS-induced inflammation and consistently reported alleviation of inflammation by berry phytochemicals through inhibition of the NF-κB and MAPK pathways. Other reviews also discuss and compare the anti-inflammatory properties of berries in preclinical and human models. Moore et al. and Gu et al. have reported similar anti-inflammatory effects of berry volatiles compared with phenolic extracts for cranberries, blackberries, blueberries, red and black raspberries, and strawberries. Notably, the bio-activities of berry polyphenol extracts do not always explain the overall anti-inflammatory effects observed with whole berries, highlighting that the potential health effects of berries as a group derive from highly diverse phytomolecules. After consumption, blueberries and their phytochemicals undergo metabolism through phase II enzymatic reactions in the enterocytes and hepatocytes or microbial metabolism in the gut. Metabolites are more likely to reach target sites inside the body and exert health benefits than their parent compounds. Evidence of the role of blueberry metabolites in the modulation of inflammation and/or oxidative stress has also been established. Metabolites of elderberry were tested in RAW 264.7 and dendritic cells, and p-coumaric, homovanillic, 4-hydroxybenzoic, ferulic, protocatechuic, caffeic, and vanillic acids (also reported to be blueberry metabolites) exerted a dose-response inhibitory effect on NO. Studies regarding berry catabolites are less abundant than studies on berry parent phytochemicals but have gained interest in more recent literature. These studies of microbe- and host-modified phytochemicals are extremely important to fully understand the potential anti-inflammatory effects of blueberry consumption. Although most of the evidence focuses on the effect of individual compounds, it is essential to consider the potency of these metabolites in profiles similar to what occurs physiologically.

To take the compound profile and physiologically available doses into account, Rutledge et al. treated LPS-induced rat microglial cells with serum from subjects who had regularly consumed blueberry, strawberry, or placebo powder blends over 90 d. Blueberry consumption decreased NO production, TNF-α secretion, and iNOS expression, and moderately modulated COX-2 protein expression in the cells. This type of design allows the integration of a more realistic profile of parent compounds and metabolites from blueberry consumption, at physiological doses, within a cell-culture-based model. The current review summarizes the extensive amount of literature available on blueberry phytochemicals and inflammation using cell-based models. This choice comes with limitations, since it can be challenging to interpret results using specific concentrations of berry-derived molecules on cells when concentrations of these metabolites at the site of the target organs may not be established. There have been major differences in concentrations used to treat the cells, ranging anywhere from tens of μg/mL to mg/mL for total polyphenols and from tens of ng/mL to ≤1.2 mg/mL for anthocyanin fractions. Some of these concentrations are much higher than the blood concentrations that would be present in the body after consumption, as the bioavailability of anthocyanins is estimated to be lower than 2%, peaking at 100 nmol/L after consumption of grape/blueberry juice. The relevance of the findings of cell-culture-based studies in complex human systems needs further investigation. These studies should comprise well-controlled clinical trials, with the relevant choice of placebo controls and inclusion criteria depending on the specific blueberry phytochemical and physiological condition investigated. Future studies should also quantify the entire suite of berry-derived molecules and derivatives in key pools such as the blood, concurrently with physiologic indices of inflammation and oxidative stress.

Fruit, nut, and berry crops are commonly grouped into one of three categories: temperate, subtropical, and tropical. Temperate zone crops include almond, apple, apricot, peach, grape, blueberry, and strawberry. Avocado, citrus, and guava are considered to be subtropical, while banana, cashew, and pineapple are tropical. Generally, temperate and subtropical crops can be grown in San Mateo and San Francisco Counties, but tropical crops are rarely successful. This publication focuses on temperate and subtropical crops. Temperate zone crops generally require a period of cold temperature during the winter months for successful flower and fruit development. This cold temperature period is measured in “chill hours”. Some crops require many chill hours, while others require few. This is called the crop’s “chill requirement.” When selecting temperate zone crops, it is important to choose only those crops that have a chill requirement that will be met at your location. Subtropical crops, such as citrus, loquat, and guava, require little or no chilling. Native to warm-climate regions, these crops can be injured by cold temperatures during winter and spring months, and they require heat during the growing season for fruit maturation and flavor.

There are a growing number of examples of a positive relationship between diversity and ecosystem services.
As an ecosystem service, pollination can increase the fruit or seed quality or quantity of 39 of the world’s 57 major crops, and a more diverse pollinator community has been found to improve pollination service. For some crops, wild bees are more effective pollinators on a per visit basis than honey bees and/or can functionally complement the dominant visitor. A less explored reason is that in diverse communities, interspecific interactions potentially alter behaviour in ways that increase pollination effectiveness. Little is known about how community composition affects pollinator behaviour and the role such species interactions play in determining diversity–ecosystem service relationships. Interspecific interactions can result in non-additive impacts of diversity on ecosystem functions.

Examples include the facilitation of resource capture in diverse groups of aquatic arthropods, and non-additive increases in pest suppression and alfalfa production in enclosures with diverse natural enemy guilds. In diverse communities, one mechanism by which species interactions may augment function is the potential to modify the behaviour, and the resulting effectiveness, of the ecosystem service providers. Interactions with non-Apis bees cause Apis mellifera L. to move more often between rows of sunflower, increasing their pollination efficiency. Such changes in pollinator movement are particularly important in crop species with separate male and female flowers, and in those with self-incompatibility. As well as direct interaction and disturbance, avoidance of interspecific chemical cues and resource competition have the potential to alter pollinator foraging movements. Global human population growth is putting greater pressure on agricultural production. There is concern over how to meet the increasing demand for food while at the same time safeguarding ecosystems and biodiversity. In the future, land currently under agricultural production will have to be more intensively managed to increase yields and/or more land will have to be converted to agriculture. Given the negative impact agriculture has already had on biodiversity, it is important that future steps to increase production be made environmentally sustainable. In the last 50 years, the fraction of agricultural production requiring biotic pollination has more than tripled. When compared with crops that are not pollinator-dependent, those that are moderately pollinator-dependent have shown slower growth in yield and faster expansion in area from 1961 to 2006. Almond is a mass-flowering, varietally self-incompatible crop species, highly dependent on biotic pollination. Almond orchards are generally planted with alternating rows of two or more varieties. Planting a single variety per row facilitates harvest, but complicates pollination because pollen must be transferred between rows to achieve fruit set. To allow for management activities, trees between rows are further apart than those within the same row. Apis mellifera tend to forage within a tree and then move down the same row, probably because less effort is required to move to the next tree in the same row or because the rows act as visual markers that influence movement. This foraging pattern means A. mellifera tend to move more incompatible pollen, limiting their pollination effectiveness. In almond, we investigated whether the presence of non-Apis bees affected the behaviour and pollination service of the dominant pollinator species, A. mellifera. Often almond orchards are isolated from natural habitat, and non-Apis bees can be completely absent. Therefore, we were able to compare A. mellifera behaviour and pollination effectiveness in diverse bee communities with orchards lacking non-Apis bees. Here, we refer to pollinator effectiveness as the probability an ovule is fertilized following a single visit. We complemented our intensive field sampling with observations in a controlled cage environment, where A. mellifera were introduced along with the blue orchard bee Osmia lignaria Say. We hypothesized that where non-Apis bees were present, such as in sunflower, interspecific interactions would cause A. mellifera to more frequently move between rows. We further hypothesized that an increase in between-row movements by A. mellifera would increase their pollination effectiveness and increase fruit set.

In 2011, A. mellifera movements were observed in five orchards isolated from natural habitat, where non-Apis bees were not present. The number of movements by A. mellifera between two trees of different varieties across the orchard row was counted for 1 min. This was repeated a minimum of four times, counting movements between the same two rows, between different adjacent trees. The number of movements by A. mellifera was also counted between two adjacent trees of the same variety within the same orchard row. These two counts were repeated a minimum of eight times down a row along adjacent trees.

The antioxidant capacity in blueberry is influenced by various metabolites including anthocyanins

To further examine the antioxidant capacity in “Draper” during fruit development, fruits from the seven aforementioned fruit developmental stages were assayed for antioxidant levels. The highest level of antioxidants was observed at the earliest “petal fall” stage, after which the level of antioxidants declined during the middle and late developmental stages. This is consistent with previous reports on the antioxidant activity in blueberry during fruit maturation and similar to observations in blackberry and strawberry, wherein green fruit have the highest ORAC values. Using the same fruit development series, we quantified anthocyanin and flavonol aglycones in “Draper” using liquid chromatography-mass spectrometry. Overall, as the fruit changed its exocarp color from pink to dark blue during ripening, delphinidin-type anthocyanins started to accumulate and were the most abundant compounds in ripe fruit, followed by cyanidin, malvidin, and petunidin. Flavonols were also detected in all developmental stages, with quercetin glycoside being the most abundant, while myricetin glycoside and rutin were present at very low levels. Blueberry also has high levels of phenolic acids; among phenolics, chlorogenic acid (CGA) was the most abundant. High levels of CGA were observed throughout fruit development, with the highest accumulation detected in young fruits. This correlates with the pattern of antioxidant capacity across different fruit stages, suggesting that CGA is one of the major metabolites contributing to high ORAC values in young developing fruit.

CGA is derived from caffeic acid and quinic acid and has vicinal hydroxyl groups that are associated with scavenging reactive oxygen species. The antioxidant properties of CGA have been associated with preventing various chronic diseases. To better understand the biosynthesis of antioxidants in blueberry fruit, we identified homologs of genes previously characterized in other species as involved in ascorbate, flavonol, chlorogenic acid, and anthocyanin biosynthesis. The key bio-synthetic genes for these compounds exhibited a distinct developmental-specific pattern of expression. For example, genes involved in the conversion of leucoanthocyanidins to proanthocyanidins are highly expressed in the earliest and middle developmental fruit stages but not in ripening fruit. Conversely, genes involved in the conversion of leucoanthocyanidins to anthocyanins were highly expressed in mature and ripe fruit but not during early fruit developmental stages. Additionally, paralogs encoding the same anthocyanin pathway enzymes and genes involved in vacuolar localization of proanthocyanidins exhibited similar developmental stage-specific expression patterns. The expression of these bio-synthetic genes is regulated by specific transcription factors. For example, the transcription factor complex MYB-bHLH-WD regulates expression of anthocyanin biosynthetic genes in eudicots. Using the Plant Transcription Factor Database v.4.0, we identified homologs of transcription factors belonging to 55 gene families, and members of some of these gene families, including R2R3-MYBs, R3-MYBs, bHLHs, and WDRs, were predicted to be involved in the developmental regulation of flavonoid biosynthesis during blueberry fruit growth. These transcription factors also exhibit fruit development-specific expression patterns. In addition, we performed a gene co-expression network analysis to identify metamodules of genes that appear coregulated during fruit development, specifically genes that are associated with phytonutrient biosynthesis. Our analysis identified 1,988 metamodules of co-expressed genes, of which 428 metamodules contained at least one of the 57 Pfam domains that have been previously categorized as associated with specialized metabolic pathways in plants.
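The metamodule analysis above is, at its core, a gene co-expression clustering across developmental stages. As a rough illustration of that general idea (not the pipeline used in this study), the sketch below groups genes into co-expressed modules from a hypothetical stage-by-gene expression matrix using Pearson correlation and average-linkage hierarchical clustering; all function and variable names are ours.

```python
# Minimal sketch of grouping genes into co-expressed modules across
# developmental stages. Illustrative only; not the metamodule pipeline
# used in the study.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coexpression_modules(expr, gene_ids, min_corr=0.8):
    """expr: array of shape (n_genes, n_stages) with expression values
    (e.g. log-transformed counts); gene_ids: list of n_genes identifiers.
    Returns a dict mapping module label -> list of gene ids."""
    # Pearson correlation between every pair of gene expression profiles
    corr = np.corrcoef(expr)
    # Convert correlation to a distance (0 = perfectly correlated)
    dist = 1.0 - corr
    np.fill_diagonal(dist, 0.0)
    dist = np.clip(dist, 0.0, 2.0)  # guard against tiny numerical negatives
    # Average-linkage hierarchical clustering on the condensed distance matrix
    tree = linkage(squareform(dist, checks=False), method="average")
    # Cut the tree so genes in a module correlate at roughly >= min_corr
    labels = fcluster(tree, t=1.0 - min_corr, criterion="distance")
    modules = {}
    for gene, lab in zip(gene_ids, labels):
        modules.setdefault(lab, []).append(gene)
    return modules

# Toy usage with random data standing in for the fruit-stage expression matrix
rng = np.random.default_rng(0)
toy_expr = rng.normal(size=(50, 7))           # 50 genes x 7 developmental stages
toy_ids = [f"gene_{i}" for i in range(50)]
mods = coexpression_modules(toy_expr, toy_ids)
print(f"{len(mods)} modules found")
```

In practice, metamodule detection also incorporates network topology and annotation filters (such as the Pfam-domain categorization described above), which this toy clustering omits.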

Our analysis revealed that 142 of the 428 metamodules were more highly expressed in developing fruit compared to other plant tissues. Some metamodules showed clear trends of being highly expressed during either early or late fruit development. For example, METAMOD00377 is expressed early in fruit development and contains homologs to the known anthocyanin genes OMT, HCT, PAL, and HQT as well as 31 homologs to known transcription factors. In contrast, METAMOD01221 is expressed late in fruit development and contains homologs of HCT, TT19, UFGT, and OMT and 10 homologs to known transcription factors. Moreover, we also examined metamodules for genes associated with other bio-synthetic pathways that impart unique blueberry fruit characteristics. We identified two metamodules in which such genes appear to be co-regulated: METAMOD00377, which contains Pfam domains associated with terpene, saccharide, and alkaloid specialized metabolism, and METAMOD01221, which contains domains associated with terpene and saccharide metabolism. These metamodules contained genes that are differentially expressed during fruit development. Overall, the developmental-specific expression patterns of key biosynthetic genes and their putative transcriptional regulators emphasize the tight regulation of production, conversion, and transport of precursor compounds that lead to the accumulation of antioxidant-related metabolites in blueberry.

The coregulation of genes involved in the biosynthesis of terpenes and saccharides during early and late fruit development described above reflects a coordinated interplay between these metabolites during fruit growth. Both terpenes and sugars contribute to the characteristic flavor of ripened fruit. In blueberry, two components play a central role in flavor perception: taste, which is a balance of sweetness and acidity, and aroma. Blueberry aroma is a complex blend of volatiles that includes aldehydes, esters, terpenes, ketones, and alcohols. Previous reports in blueberry showed that the aroma profile varies greatly across different blueberry ecotypes and cultivars.

For example, the aroma of high bush blueberry is primarily driven by terpene hydrocarbons, along with aldehydes and alcohols such as 2-hexenal, 2-hexenol, and 3-hexenol. Both linalool and geraniol are associated with a sweet floral flavor. However, linalool was reported to largely impart the characteristic blueberry flavor when combined with certain aldehydes. Here, we also identified and examined the expression of genes involved in the biosynthesis of linalool. Four of the linalool synthase homologs in tetraploid blueberry are highly expressed during late fruit development. This pattern of expression coincides with previous reports of linalool accumulation in ripened blueberry fruit. On the other hand, one homolog of linalool synthase, although expressed during fruit growth, did not show a clear fruit development-specific pattern. Investigating the underlying factors regulating these enzymes will facilitate genetic manipulations that may lead to further improving blueberry flavor in the future.

Superior fruit quality is also associated with sugar levels. During fruit ripening, sugar levels of the endocarp increase by importing hexose symplastically and/or apoplastically. Sugar transporters, sucrose transporters, and tonoplast sugar transporters have been demonstrated to regulate intercellular sugar transport in phloem and fruit. In A. thaliana, all clade III SWEETs play a role in sucrose transport, with AtSWEET9 primarily functioning in nectary secretion, while AtSWEET15 is required for seed filling by acting with SWEET11 and SWEET12. In blueberry, the clade III SWEET transporters 9 and 10 were highly expressed during early fruit growth, while clade III SWEET transporter 15 was mainly expressed in ripe fruit. Interestingly, one of the blueberry SWEET15 homologs showed a distinct pattern of expression compared to the other three homologs. To the best of our knowledge, we are the first to report on the potential role of these genes during blueberry fruit development. In addition, homologs of A. thaliana TST1 and watermelon ClTST1 and ClTST3 were expressed during fruit ripening in blueberry. Elevated expression of a ClTST1 homolog was observed throughout fruit development, but the ClTST3 homolog showed very low expression. Another gene that is highly expressed during fruit maturation is vacuolar invertase. As described in other systems, its upregulation during fruit ripening coincided with the breakdown of starch to sucrose or a mixture of glucose and fructose, suggesting that it may be involved in the regulation of sugar accumulation in blueberry fruit. It was previously reported that vacuolar invertase modulates the hexose to sucrose ratio in ripening fruit. In addition, there are also two sugar transport protein homologs that exhibited development-specific expression. However, their function remains largely unknown; thus, their potential role in sugar accumulation in the developing berry requires further investigation.

Tandemly duplicated genes arise as a result of unequal crossing over or template slippage during DNA repair, exhibit high birth-death rates, and typically occur in co-regulated clusters in the genome. Smaller scale duplications, which include tandem duplicates, are highly biased toward certain gene families, including those involved in specialized metabolism. Furthermore, tandem duplications often result in increased dosage of gene products and may improve the metabolic flux of rate-limiting steps in certain bio-synthetic pathways.
Most genes associated with the biosynthesis of antioxidants have at least one tandem duplicate present in the high bush blueberry genome, with tandem array sizes ranging from 2 to 10 gene copies. The largest tandem arrays were found for HQT and HCT genes, which are co-regulated and involved in the CGA pathway. Differences in tandem array sizes were also observed between homoeologous chromosomes for various genes. For example, the C3H gene, which is involved in CGA biosynthesis, was present on all four homoeologous chromosomes but with varying tandem array sizes.

One of the homoeologous chromosomes had two copies of C3H, while the other three homoeologous chromosomes had four copies. This suggests that copy number differences of C3H among sub-genomes may be due to either selection for gene duplication or loss or, in the case of allopolyploidy, may be due to preexisting gene content differences among the diploid progenitor species. Genes in the anthocyanin pathway with other unique duplication patterns include CHS, CHI, OMT, and UFGT. The gene CHS, involved in the conversion of 4-coumaryl-CoA to naringenin chalcone, has two copies, and both have tandem duplicates in at least three of the homoeologous chromosomes. Interestingly, the gene CHI has a single preserved tandem gene duplicate on only one of the homoeologous chromosomes. However, additional copies of CHI were also identified more distantly away from the syntenic ortholog on another homoeologous chromosome, likely involving a transposition event following tandem duplication. The OMT and UFGT genes all have tandem duplicates on all of the homoeologous chromosomes, although with varying array sizes, while the ANR gene, involved in the conversion of anthocyanidin to proanthocyanidin, is single copy on all homoeologous chromosomes. The DFR gene, which is involved in the conversion of dihydroquercetin/dihydromyricetin to leucoanthocyanidin, has a single tandem duplicate on only one of the homoeologous chromosomes. These findings suggest that there may have been greater selective pressure to retain tandem duplicates for genes encoding enzymes involved in anthocyanin production than in conversion to proanthocyanidins. The vast majority of tandem duplicates are eventually lost; however, in rare instances, some may undergo functional diversification. Gene expression analysis revealed that 83.4% of the tandem duplicates were expressed in at least one transcriptome library, with 73.5% expressed in at least one of the fruit developmental stages. This suggests that a subset of these duplicate genes have become nonfunctionalized, subfunctionalized, or neofunctionalized. Future studies are needed to more thoroughly investigate the functions of these genes with more diverse libraries and additional transcriptome analyses.

Despite the economic importance of blueberry, molecular breeding approaches to produce superior cultivars have been greatly hampered by inadequate genomic resources and a limited understanding of the underlying genetics encoding important traits. This has resulted in breeders having to rely solely on traditional approaches to generate new cultivars, each with widely varying fruit quality characteristics. For example, our analysis of a diversity panel consisting of 84 cultivars and wild species revealed that “Draper” has antioxidant levels that are up to 19x higher than other cultivars. Thus, the genome of “Draper” should serve as a powerful resource to the blueberry community for guiding future breeding efforts aimed at improving antioxidant levels among other important fruit quality traits. Furthermore, to our knowledge, this is not only the first genome assembly of the cultivated high bush blueberry but also the first chromosome-scale and haplotype-phased genome for any species in the order Ericales. Ericales includes several other high-value crops and wild species with unique life history traits. Thus, we anticipate that this reference genome, plus associated datasets, will be useful for a wide variety of evolutionary studies.
Here, we also leveraged the genome to identify candidate genes and pathways that encode superior fruit quality in blueberry, including those associated with pigmentation, sugar, and antioxidant levels. Furthermore, we found that genes encoding key bio-synthetic steps in various antioxidant pathways are enriched with tandem gene duplicates. For example, tandem gene duplications have expanded gene families that are involved in the biosynthesis of anthocyanins. This suggests that, in addition to a recent whole genome duplication, tandem duplications may have greatly contributed to the metabolic diversity observed in blueberry.
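As a side note on the tandem-array counts reported above, the bookkeeping can be sketched as a simple adjacency rule over an ordered gene annotation. The sketch below is illustrative only and assumes a hypothetical annotation table; published analyses generally rely on homology searches and synteny-aware tools rather than this simplified rule.

```python
# Minimal sketch of calling tandem arrays from an ordered gene annotation.
# Illustrative only; actual studies typically combine BLAST-style homology
# with synteny rules rather than this simplified adjacency pass.
from collections import defaultdict

def tandem_arrays(genes, max_gap=1):
    """genes: iterable of (chrom, position_index, gene_id, family) tuples,
    where position_index is the gene's rank along the chromosome.
    Two same-family genes belong to one array if separated by at most
    `max_gap` intervening genes. Returns {family: [array sizes]}."""
    by_chrom_family = defaultdict(list)
    for chrom, idx, gid, fam in genes:
        by_chrom_family[(chrom, fam)].append(idx)
    arrays = defaultdict(list)
    for (chrom, fam), positions in by_chrom_family.items():
        positions.sort()
        size = 1
        for prev, cur in zip(positions, positions[1:]):
            if cur - prev <= max_gap + 1:
                size += 1
            else:
                if size > 1:
                    arrays[fam].append(size)
                size = 1
        if size > 1:
            arrays[fam].append(size)
    return arrays

# Hypothetical toy annotation: a 4-copy HQT array and a 2-copy C3H array
toy = [
    ("chr1", 10, "g1", "HQT"), ("chr1", 11, "g2", "HQT"),
    ("chr1", 12, "g3", "HQT"), ("chr1", 14, "g4", "HQT"),
    ("chr1", 40, "g5", "C3H"), ("chr1", 41, "g6", "C3H"),
]
print(dict(tandem_arrays(toy)))  # {'HQT': [4], 'C3H': [2]}
```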

The content of bio-active compounds in plant foods is highly influenced by genetics

The interaction between bacteria and epithelial cells elevates inflammation, leading to further thinning of the mucus and direct host-bacteria interaction. The thali approach, however, combats this cycle in two different ways: by suppressing bacterial growth with anti-microbial phytochemicals, and by reducing the opportunity for inflammation to occur. One molecular pathway involved in such a cycle involves interleukin 6 (IL6). This cytokine is normally expressed during acute inflammatory responses and, among other effects, upregulates the transcription factor STAT3. In the nucleus, STAT3 promotes cell proliferation and differentiation as well as upregulating anti-apoptosis genes. When IL6 is chronically elevated, it can lead to an apoptosis-resistant, constantly expanding T-cell population in the intestinal mucosa. These cells can further contribute to chronic inflammation. Just as a certain diet may promote chronic inflammation, a change in diet can help to restore health. Various bio-active compounds, including anthocyanins, have demonstrated antioxidant activity, reducing local amounts of reactive oxygen species. Low levels of reactive oxygen species can lower the expression of some inflammatory genes, including IL6, and relieve the stresses on both the intestinal microbiota and epithelial cells caused by chronic inflammation. In a study of pigs, we found that supplementing a high-calorie diet with purple potatoes that contain anthocyanins led to a six-fold reduction in levels of interleukin-6 compared to a high-fat diet control. Colorectal cancer (CRC) killed nearly 774,000 people worldwide in 2015 and caused an estimated 50,630 deaths in the United States in 2018, making it the third leading cause of cancer-related deaths in the United States in women and the second in men.

Virtually all cases of CRC are considered to result from an interplay of exogenous and endogenous factors, with variable contributions from each factor. Some non-modifiable risk factors include old age and a family history of CRC. Other risk factors, however, are associated with lifestyle or behaviors and thus can be changed. These modifiable risk factors include smoking, obesity, low physical activity, deficiency of dietary fiber, deficiency of vitamin D, deficiency of folate, high intake of red and processed meat, and alcohol consumption. Some of these risk factors, however, are closely related. For example, inadequate fiber intake and excessive fat intake are dietary risk factors that tend to be accompanied by a lack of exercise and, particularly in combination, may ultimately contribute to obesity. In the US, 40 percent of adults are obese, and so the risk factors discussed are common, mainly due to the modern Western lifestyle. Therefore, it is no surprise that nearly half of CRC cases arise in developed nations. The Western diet in its current form carries risk factors beyond its calorie and fat content. Foods that contain heterocyclic amines (HCA), polycyclic aromatic hydrocarbons (PAH), and emulsifiers can also contribute to carcinogenesis. HCA and PAH are produced in meats when they are fried or grilled over an open flame. These substances have been shown to damage the DNA of colonocytes and potentially increase the risk of colon cancer. Emulsifiers are used in foods like ice cream to ensure an even distribution of fat molecules. Recent evidence suggests, however, that emulsifiers promote intestinal inflammation, creating an environment that favors colon carcinogenesis in mice.

However, colon cancer has a long development period. This gives ample time for lifestyle changes to take place, including diet-based intervention. Chronic inflammation, a condition that is promoted by dietary risk factors, also contributes to the development of cancer in humans. Patients with inflammatory bowel disease have a significantly increased risk of developing CRC, while long-term aspirin treatment is associated with a significantly decreased risk of CRC. The mechanisms by which chronic inflammation promotes tumor development often involve the immune system. For example, the IL6/STAT pathway discussed earlier is also implicated in cancer formation. Overexpression of IL6 leads to excess STAT3 transcription, causing unwanted cell proliferation not only in T cells but also in the intestinal epithelium. Another inflammatory cytokine of note is TNF-α. While the intestinal bacteria can promote inflammation, they may also affect the likelihood of CRC more directly. Once the intestinal mucus layer is thinned and direct bacterial-epithelial cell interactions occur, certain bacterial strains promote tumor development. E. coli strains bearing the pks island are of particular interest. This genetic locus codes for the secondary metabolite colibactin, along with the enzymes necessary for its production. Colibactin has been shown to crosslink DNA, producing double-stranded breaks. Furthermore, pks+ E. coli strains have been shown to be prevalent in CRC patients. In one study, nearly two-thirds of CRC patients had pks+ E. coli strains in their intestinal bacteria. In the same study, pks+ E. coli also existed in about 20 percent of healthy individuals. Colibactin, however, is a reactive and short-lived compound, requiring close contact with epithelial cells to cause DNA damage. A healthy mucosal barrier keeps colibactin at a distance and reduces the chance of it affecting the intestinal epithelium. Evidence for the pathogenic relationship between diets, Fusobacterium nucleatum, and CRC has been emerging.

F. nucleatum levels have been shown to be higher in CRC tissue than in adjacent normal mucosa. Utilizing the molecular pathological epidemiology paradigm and methods, a recent study has shown an association of fiber-rich diets with decreased risk of F. nucleatum-detectable CRC, but not of F. nucleatum-undetectable CRC. Experimental evidence supports a carcinogenic role of F. nucleatum, as well as its role in modifying therapeutic outcomes. The amount of F. nucleatum in CRC tissue has been associated with proximal tumor location, CpG island methylator phenotype, microsatellite instability, low-level CD3+ T cell infiltrate, high-level macrophage infiltration, and unfavorable patient survival. The amount of F. nucleatum on average increased in CRC from rectum to cecum, supporting the colorectal continuum model. Future studies should examine the role of diets, microbiota, and CRC in detailed tumor locations. Dietary prevention of CRC, then, has two intertwined aims: to reduce inflammation and to promote a healthy intestinal microbiota. As already discussed, preclinical evidence implies that dietary bio-active compounds, particularly anthocyanins, can reduce symptoms of low grade chronic inflammation as well as oxidative stress. They can also aid in balancing the intestinal microbiota by promoting the growth of beneficial bacteria and by reducing the populations of pro-inflammatory bacteria. Clinical trials have had mixed results, but anthocyanins and some polyphenols have been shown to actively counteract CRC. More research, however, is necessary for conclusive results. How, then, are individuals to consume enough bio-active compounds to have an effect on health? Some answers may be found in the food consumption practices of cultures with historically low CRC incidence. Parts of India, for example, have had some of the lowest CRC incidence rates in the world; however, this status has been changing. In recent decades, increasing urbanization and similar factors have led to progressively Westernized diet patterns and lifestyles. CRC incidence rates are similarly rising, lending weight to the hypothesis that the traditional Indian diet may help prevent CRC. Furthermore, Indian immigrants to Western countries have a much higher incidence of CRC compared to Indians in India. Typical components of traditional Indian meals include a broad variety of flavors, as promoted in Ayurvedic medicine, and a variety of other foods. Both are facilitated by using a thali platter to serve the meal. The traditional American main meal includes an entree, one or more carbohydrates, and one or more vegetables. This basic structure can potentially be adapted with inspiration from thali meals by reducing the size of the main dish and serving more vegetables, legumes, pulses, herbs, and spices to accompany it. A unique component of thali is the combination of many tastes and colors. The inclusion of multiple colors in a meal is desirable because certain bio-active compounds, particularly anthocyanins, are also pigments. Blue, purple, and red-purple colors in plant foods indicate high anthocyanin content. Purple-pigmented potatoes can be prepared in the same way as traditional white potatoes, but the anthocyanin content is significantly higher in the pigmented varieties. Purple sweet potatoes also contain more anthocyanins than the more common orange varieties and can be easily substituted for them. Other vegetables with red or purple cultivars include carrots, cauliflower, and cabbage.
Different colors can indicate the presence of other bio-active compounds in orange, yellow, and red/pink produce. Thus, healthy bio-active compound consumption may be increased by selecting colorful vegetables. Another way to increase consumption of bio-active compounds is to increase their presence in available foods.

The agricultural industry could greatly impact health by adopting food plant cultivars that produce bio-active compounds in larger amounts than is currently common. New cultivars may need to be developed that retain desirable characteristics such as large size, pest resistance, and reduced spoilage, but also have high bio-active content at the time of consumption. Bio-active compounds, with some exceptions, tend to deteriorate during storage. Even when compounds have not deteriorated, storage may reduce their anti-inflammatory/antioxidant activity and thus their capacity to affect health. A second systemic change that would promote increased bio-active compound consumption involves reworking how fruits and vegetables are currently stored and processed, reducing the average storage time and adapting processing to optimize the amount of bio-active compounds. Presently, “nutritional adequacy” does not consider many of the bio-active compounds discussed in this paper. Further clinical studies are needed to support and elucidate the role of bio-active compounds in the prevention and treatment of disease.

Natural competence is a phenomenon that allows bacteria to take up DNA segments from the environment and incorporate them into the genome via homologous recombination. Natural competence was first demonstrated in Streptococcus pneumoniae in 1928 by Frederick Griffith. Griffith showed that virulence genes were transferred from donor to recipient cells, converting the nonvirulent recipients into virulent pathogens. Since then, 80 bacterial species in divergent phyla have been described as naturally competent. Although the exact reasons for the occurrence of natural competence in bacteria still remain unknown, studies have shown that natural competence is induced under conditions of starvation and DNA damage, and it has been hypothesized that the incoming DNA serves as a food source and DNA repair material. Another proposition is that natural competence allows acquisition of new genes and alleles, providing the recipient cells with adaptive advantages. In fact, a previous study showed an increased rate of adaptation by natural competence in Helicobacter pylori. Interestingly, natural competence has been demonstrated in some of the most highly diverse and successful human pathogens, such as H. pylori, Neisseria meningitidis and Neisseria gonorrhoeae, and Porphyromonas gingivalis, which require rapid adaptation to evade the immune response. Furthermore, natural competence has also been described in two plant pathogens, Ralstonia solanacearum and Xylella fastidiosa, both of which have very broad plant host ranges. Xylella fastidiosa is a bacterial pathogen affecting many economically important crops, such as grape, citrus, coffee, peach, and almond. The disease process is not completely understood, but it is proposed that X. fastidiosa forms biofilm-like aggregates and blocks xylem vessels, the conduits for water and nutrient transport in plants. This blockage hinders xylem sap flow and starves the upper aerial parts of water and mineral nutrients, producing symptoms that resemble those of water and nutrient deficits. X. fastidiosa is transmitted by a number of xylem sap-feeding insects, including sharpshooter leafhoppers and spittlebugs, in which X. fastidiosa forms biofilms in the foregut. Taxonomically, X. fastidiosa is divided into five subspecies based on multilocus sequence typing. Even within the subspecies, host range and genotype diversity have been described, and recombination events among strains have been detected among field-collected samples. In fact, homologous recombination was shown to have a greater effect in generating genetic diversity in X. fastidiosa than point mutation.

PVWMA visually inspects and records land use on an annual basis

The boundaries of the Delivered Water Zone (DWZ) are shown in Figure 3.2. Only users within this zone have access to the alternative water supplies. This region was targeted because the negative externality that groundwater pumping imposes is larger for growers directly on the coast than for growers further inland. Moreover, underlying hydrologic characteristics of the aquifer mean that groundwater pumping in the southern part of the region has a greater externality than in the north. The eastern boundary of the DWZ is Highway 1, rather than a particular aquifer feature. The benefits that the recycled water has in the DWZ are threefold: the higher quality water allows growers with saline groundwater to improve their crop yields, the alternative water supply reduces pumping on the coast, and the runoff from the application of this water helps to recharge the aquifer. For most of the groundwater irrigation in the Pajaro Valley, growers bore individual tube wells on their property, rather than using canals or a shared water conveyance system. With the development of the recycled water program, the DWZ needed a network of pipes, called the “Coastal Distribution System” (CDS), to move the recycled water to eligible growers. Construction of the CDS began in 2005, and the system has slowly expanded over time. As of 2020, the CDS is approximately 20 miles long and provides water to 5100 of the most severely affected agricultural acres. A map of the Coastal Distribution System can be found in Figure 3.3. Along the CDS, turnouts are installed in order to provide access to growers.

In order for a grower in the DWZ to receive recycled water, the CDS needs to reach their parcel and have a turnout, the grower needs to submit an application, and there must be enough recycled water to meet the needs of both the current users and the applicant. Recycled water is sourced from the Watsonville Recycled Water Facility, which treats urban wastewater. It began operation in 2009. In the first full year of operation, the recycled water facility supplied 2700 acre-feet, but the facility has capacity for up to 6000 acre-feet per year (AFY), and plans have been approved to expand the facility further. While the recycled water is the main source of delivered water, there is some water available from the Harkins Slough Recharge and Recovery Facility. This facility intercepts some of the surface outflows from the Harkins Slough, a wetland just south of the Pajaro Valley. If not redirected for use in the valley, the outflow would otherwise run into Monterey Bay, mixing with seawater. This storage facility has been in existence since 2002, and was the first groundwater recharge project constructed by the water management agency. While PVWMA has a permit to pump 2000 AFY from the Harkins Slough, the reality has been closer to 1000 AFY, due to a lack of flow through the Slough and the limited capacity in the recharge pond. Since recycled water comes into contact with crops, proper treatment of the recycled water is paramount. In order to meet California's stringent recycled water standards, the water is tertiary treated, which means that all solids larger than 10 microns are removed and the water is treated with UV light to kill pathogens. Some salts, nitrates, and phosphates may remain, but the quality is high enough to be directly applied to agricultural products, and safe enough to enter the aquifer for household use. The average total dissolved solids (TDS) level in recycled water is approximately 600 mg/L, which is high enough to cause some damage to salt-sensitive agricultural products, but much lower than TDS levels under drought conditions or in seawater-intruded wells.

To ensure that salt contents are sufficiently low, the recycled water is also blended with water from inland wells. To generate revenue to support the program, PVWMA collects augmentation fees for delivered water and fees for groundwater pumping in the basin. The pricing of both groundwater and recycled water began in 2002, and a tiered pricing system was established in 2010. A snapshot of 2016-2021 water prices, by category, is found in Table 3.1. The price of water in the Pajaro Valley varies depending on where the water is sourced, whether the well is metered or unmetered, and whether the well is within the delivered water zone. While fees for recycled water are higher than the cost of groundwater pumping in the DWZ, the fees are structured specifically such that when one factors in the electricity costs of pumping groundwater, the recycled water is slightly cheaper. To price groundwater, PVWMA meters all wells capable of extracting 10 AFY, as well as smaller wells if they serve 10 acres of orchard, 4 acres of berries or row crops, or 2.5 acres of greenhouse facilities. Municipal, agricultural, and industrial wells make up 87% of water use, while rural residential wells make up 2%, and the rest is consumed by delivered water users. Few residential wells have meters, so they are estimated to use 0.5 AFY and are charged based on that estimate. Pajaro Valley's water prices are high relative to other groundwater charges. In most of the United States, groundwater pumping is not metered, and water prices are merely the electricity costs required to operate the pump. Even in locations where water prices have been implemented, they tend to be significantly lower than the prices in Pajaro Valley. In California's productive Central Valley, water prices are commonly between $70 and $150 per acre-foot, and the 2018 Farm and Ranch Irrigation Survey finds that California growers pay an average of $67 per acre-foot for “off-farm” water. However, there are some regions facing similar or much higher water prices, depending on water supply constraints. Growers in San Diego County, for example, pay $1700 an acre-foot, due to water scarcity. Moreover, in Pajaro Valley, irrigation water costs are minor when compared to the revenue and profits for the crops grown in the region. On average, revenues are $34,000 an acre, and reach up to $68,000 per acre for strawberries. The combination of high revenues and low water requirements leads me to believe that growers are not deficit irrigating in response to the water prices.
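The fee comparison described above (groundwater augmentation fee plus pumping electricity versus the delivered-water fee) can be sketched as follows. The fee levels, pumping lift, pump efficiency, and electricity price below are hypothetical placeholders, not the values in Table 3.1; only the structure of the per-acre-foot comparison is meant to be illustrative.

```python
# Minimal sketch of the cost comparison described above. All numbers are
# hypothetical placeholders, NOT the fees in Table 3.1: the point is only that
# the relevant comparison is (groundwater fee + pumping energy cost) versus
# the delivered-water fee, per acre-foot (AF).
def pumping_energy_cost(lift_ft, price_per_kwh, pump_efficiency=0.6):
    """Electricity cost ($/AF) to lift one acre-foot of water `lift_ft` feet.
    Lifting 1 acre-foot by 1 foot takes about 1.02 kWh at 100% efficiency."""
    kwh_per_af = 1.02 * lift_ft / pump_efficiency
    return kwh_per_af * price_per_kwh

def effective_costs(gw_fee, recycled_fee, lift_ft, price_per_kwh):
    gw_total = gw_fee + pumping_energy_cost(lift_ft, price_per_kwh)
    return {"groundwater ($/AF)": round(gw_total, 2),
            "recycled ($/AF)": round(recycled_fee, 2)}

# Hypothetical example: $300/AF groundwater fee, $370/AF recycled water fee,
# 200 ft of pumping lift, $0.22/kWh electricity.
print(effective_costs(gw_fee=300, recycled_fee=370, lift_ft=200, price_per_kwh=0.22))
```

With these made-up inputs the recycled water comes out slightly cheaper once pumping energy is included, which mirrors the structure, though not the exact numbers, of the fee design described above.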

The Pajaro Valley is known for its production of delicate, high value produce, including strawberries, apples, raspberries, blackberries, artichokes, grapes, lettuce, and a variety of vegetables and herbs. As of 2019, total production value in the region was over $1 billion across 28,500 irrigated acres. The major California berry producer Driscoll's is headquartered in the region, as is the cider producer Martinelli's. The temperate, coastal climate is ideal for the production of these crops. Moderate temperatures year-round, sunny days, and foggy nights are excellent growing conditions for sensitive crops. However, the delicate nature of this produce means that the crops are also susceptible to other challenges, such as salinity damage. Salinity damage impacts almost all stages of plant growth and development, including germination, vegetative growth, and reproduction. These effects lower crop yields and economic returns. For salinity in irrigation water, damages rarely occur until salinity reaches a crop-specific, critical “threshold”. Then, crop yields decline linearly as salinity levels rise. The threshold at which salinity damages begin to occur varies significantly depending on the crop. For example, strawberry yields begin to decline at TDS levels of around 450 mg/L, while zucchini may not decline in yield before TDS levels reach 2000 mg/L. Grattan estimates and compiles these thresholds and yield declines for a variety of crops grown in California. Figure 3.4 depicts the relationship between irrigation water salinity and yield for a sub-sample of crops in the Pajaro Valley. Since crop revenues are so high for these products, even minor yield declines can lead to significant losses. In 2020, strawberry revenues were around $68,000 per acre, raspberries yielded around $59,000 per acre, and apple revenues were $9,800 an acre. A yield loss of 10%, which would correspond to a TDS increase of 128 mg/L for strawberries, would decrease their revenue by $6,800 an acre. Therefore, growers are motivated to find possible solutions to deal with salinity issues in their groundwater, although basin-wide management is beyond any individual grower's control.

Data provided by the Pajaro Valley Water Management Agency for this analysis consist of water quality measurements from the network of monitoring wells, quarterly pumping and recycled water deliveries, depth to groundwater contour maps, and annual land use data. The details on how these data are built into a parcel-level panel are below. Additionally, I bring in variables on temperature and precipitation, property boundaries and ownership, and crop prices and revenue. Summary statistics are presented in Table 3.2. The Pajaro Valley Water Management Agency has been collecting water quality data in the basin since 1957, and has built a network of 286 monitoring wells.

The locations of these monitoring wells are depicted in Figure 3.5 as black dots, overlaid on top of all the metered wells in Pajaro Valley. These monitoring wells are typically sampled twice annually, once in spring and once in fall. This sampling method captures water quality at two critical time periods: spring is before the primary irrigation season, after winter rains and when water tables are highest, and fall is after the main irrigation season, when water tables are lowest. While PVWMA takes multiple salinity measurements, I use the total dissolved solids measurement, as it is generally the most salient to growers. For both fall and spring of each year, I take all water quality measurements of TDS and use an inverse distance weighting technique to interpolate a map of water quality for the entire Pajaro Valley. In the analysis, I focus on spring TDS, given that salinity before the growing season is considered to be the most important for agricultural water users and is the most likely to predict summer basin conditions for growers. Figure 3.6 shows the history of average spring TDS values spanning 2003-2020, highlighting seasonal salinity patterns in the basin. Averages for the delivered water zone and the rest of the Pajaro Valley are compared. For the full region, spring TDS levels averaged 645.9 mg/L and ranged from 272.4 mg/L to 17,103.7 mg/L. The dashed line at 600 mg/L represents the approximate average TDS level of the recycled water. As can be seen, average salinity levels in the basin are frequently lower than the TDS levels of the recycled water, except in years of very high salinity. The spikes in salinity, which are especially high within the delivered water zone, are largely caused by drought conditions: groundwater pumping stays relatively stable, but the lack of precipitation leads to less groundwater recharge. With less freshwater percolating through to the aquifer, TDS levels in the remaining water are higher, and seawater intrusion is more likely to occur. As outlined above, there is significant variation in salinity across time. Importantly for our analysis, there is also spatial variation in salinity. This spatial variation is largely driven by inherent underlying characteristics of the aquifer, as well as distance to the coast and surface water sources. Parcel characteristics, such as soil properties, slope, and land elevation, also play a role. Figure 3.7 shows average salinity levels across the Pajaro Valley basin from 2009-2020, plotted by decile. This figure indicates that inland regions, aside from those located near the Pajaro River, experience significantly lower levels of salinity, especially towards the south. Notably, the coastal region just north of the delivered water zone experiences some of the highest TDS levels, providing some initial evidence that recycled water may be having an impact within the delivered water zone. An impressive feature of the data from Pajaro Valley is the annual land use data, which cover the 2009 and 2011-2020 growing seasons. PVWMA also engages in quality control practices, including randomly sampling parcels for additional checks. These ground-truthed land use data have key advantages over satellite data, which are known to have substantial error in measuring land use among California's unique crop set.
Agricultural land use types include vegetable row crops, strawberries, blackberries and raspberries, vine crops, artichokes, orchards, nursery crops, greenhouses, fallow ground, cover crops, and unknown agricultural use.
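Two of the quantitative pieces described above can be sketched compactly: the threshold-and-slope relationship between irrigation water salinity and yield (as in Figure 3.4), and the inverse distance weighting used to turn monitoring-well TDS readings into a basin-wide surface. The snippets below are minimal illustrations, not PVWMA's or this chapter's exact procedures; the strawberry parameters (a 450 mg/L threshold and roughly a 10% yield loss per 128 mg/L above it) come from the text, and everything else is a placeholder.

```python
# Illustrative sketches only: a piecewise-linear salinity-yield model and a
# basic inverse distance weighting (IDW) interpolation of well TDS readings.
import numpy as np

def relative_yield(tds, threshold_mgL, pct_loss_per_mgL):
    """Full yield up to the threshold, then a linear decline (floored at zero)
    as irrigation water salinity rises above it."""
    excess = max(0.0, tds - threshold_mgL)
    return max(0.0, 1.0 - pct_loss_per_mgL * excess)

# Strawberries: ~10% loss per 128 mg/L of TDS above a 450 mg/L threshold.
straw_yield = relative_yield(tds=578, threshold_mgL=450,
                             pct_loss_per_mgL=0.10 / 128)
print(f"relative strawberry yield at 578 mg/L: {straw_yield:.2f}")  # ~0.90

def idw_interpolate(well_xy, well_tds, grid_xy, power=2.0):
    """Inverse distance weighted TDS estimate at each grid point from the
    monitoring-well measurements (a generic IDW, not PVWMA's exact settings)."""
    well_xy, grid_xy = np.asarray(well_xy, float), np.asarray(grid_xy, float)
    well_tds = np.asarray(well_tds, float)
    # pairwise distances: shape (n_grid_points, n_wells)
    d = np.linalg.norm(grid_xy[:, None, :] - well_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)            # avoid division by zero at well locations
    w = 1.0 / d**power
    return (w @ well_tds) / w.sum(axis=1)

# Toy example: three wells, one grid point halfway between the first two.
wells = [(0, 0), (1, 0), (0, 1)]
tds = [400, 800, 600]
print(idw_interpolate(wells, tds, [(0.5, 0.0)]))
```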

Theoretical research on FSL is currently in a stage of rapid development

Our main contributions in this work are twofold: we propose to merge the CMSFF into the backbone network to enhance the feature representation and to combine CA to focus on the informative channels; and we propose a group of training strategies to match different generalization scenarios. Although FSL is very suitable for plant disease recognition, its applications in smart agriculture have just begun. In this research direction, there is still huge potential to explore. Here, we discuss the limitations of this work and some future work. 1. Multi-disease. The PV and AFD datasets used in this work as target data share a common characteristic: only a single disease is included per image. In fact, once a plant is infected by a first disease, it is easily infected by other diseases because its immune system is attacked and becomes weak. Multiple diseases occurring in one plant is more common under real field conditions. But the combinations of different diseases are too many to collect sufficient samples for each category from a classification perspective. Current research prefers to solve this problem by semantic segmentation. We do not cover this challenging problem in this work due to limitations of data resources. 2. Formulation of meta-learning data. The samples of PV were taken under controlled conditions: a clean board serves as a unified background, the illumination is controlled, there is only a single leaf per image, and only a single disease occurs per leaf. These settings are simple and very different from in-wild conditions.

That is the reason many studies have already achieved high accuracy using deep learning CNNs on PV. But the samples of AFD were taken under in-wild conditions, with complex surroundings. When testing with AFD, we use PV in meta-learning, mainly considering that both datasets are about plant diseases. Since we did not find any other appropriate dataset, the degree of similarity between the data used in training and testing was not taken into account. According to our hypothesis, the higher the similarity between the data used in meta-learning and testing, the easier the adaptation and the better the result. It is demonstrated that the selection of meta-learning data is critical in this pipeline. The data used in the meta-learning stage should be determined by the target. When the application scenarios cannot be predicted, how to formulate an appropriate meta-learning dataset is worth studying. Inspired by Nuthalapati and Tunga and Li and Yang, the effectiveness of a mixed dataset for meta-learning will be considered. 3. Sub-class classification. For the application of plant disease recognition, it is more meaningful to distinguish diseases belonging to the same species. What farmers need more than anything else is a diagnostic assistant that can identify similar diseases belonging to the same plant. Although sub-class classification is difficult, it is unavoidable in plant disease recognition, and its performance urgently needs to be improved. Fine-grained features of the lesions are the distinguishing features needed to solve this issue. In this direction, lesion detection and segmentation and fine-grained visual classification are involved. 4. The quality and quantity of training data. Most current FSL research deals with the configuration of the data used in testing, but very little work has considered the data used in training. The common assumption is that deep learning networks rely on large-scale data. However, a recent line of work discusses the quality and quantity of training data. These works indicate that a subset of the data can achieve the same performance as the full data.

Data quality can be assessed, which can guide the construction of a dataset with sufficient diversity but without redundant samples. Networks of appropriate depth trained on good data can achieve optimal results in many traditional CNN classification tasks. In this work, we use large-scale data for base-training and meta-learning; the quantity of data follows conventional settings for comparison purposes, and data quality assessment is not addressed. For the specific topic of plant disease, data quality is very important. We know that at different stages of plant and disease development, symptom appearance varies greatly. How to construct a comprehensive yet non-redundant set of images to represent a disease is valuable future work. 5. Cross-domain. The significance of the cross-domain setting has been introduced in prior sections. We emphasize it again because it is common when we cannot predict the species, surroundings, and photo conditions at test time. In this work, we address it through training strategies; there are many other aspects to explore in future work, such as network architecture and feature distribution calibration.

The Pajaro Valley is a productive agricultural region located on the Central California coastline, spanning portions of Monterey and Santa Cruz counties. It provides an excellent study region for examining the implications of seawater intrusion for agriculture. Seawater intrusion in the region is well documented, severe, and increasing over time. The local water management agency has rigorously tracked changes in salinity and agricultural production, as well as groundwater use. In addition, the region has engaged in large-scale mitigation efforts using municipal treated wastewater. While the Pajaro Valley has experienced more severe seawater intrusion than most coastal agricultural regions, it is also an early adopter of recycled water as an alternative water source. Understanding the dynamics of seawater intrusion and its management in this region can provide important lessons for other regions wrestling with salinity problems under climate change.

In the Pajaro Valley, approximately 30% of the land and 85% of total water consumption is used for agricultural production. Due to its temperate Mediterranean micro-climate, the Pajaro Valley has some of the highest-valued land in the country. In 2019, the crop revenue generated by the Pajaro Valley was approximately $1 billion across 28,500 irrigated acres. The region is well known for a variety of produce, including strawberries, raspberries and blackberries, apples, artichokes, and vegetable and nursery products. The major companies Driscoll's and Martinelli's are headquartered in the valley. Many of the crops grown in the Pajaro Valley require a significant amount of water, with most requiring between 2 and 3 acre-feet. With virtually no access to surface water, irrigation is supplied almost entirely by groundwater, which is the primary source of water for the entire basin and made up 93% of the water used in 2020. In fact, the Central Coast relies mostly on groundwater for agriculture, although a few farmers receive water from surface sources, the State Water Project and the Central Valley Project. Less than one percent of the Pajaro Valley's water supply came from surface sources in 2020. On average, total annual groundwater use from 2010-2020 typically ranged from about 50,000-55,000 acre-feet per year (AFY), although this increased to as much as 60,000 AFY during the height of the 2013-2015 drought. Groundwater pumping in the Pajaro Valley is nearly twice the annual sustainable yield of the basin, defined as the quantity of water that enters the basin through agricultural runoff and precipitation. By the 1940s, groundwater depletion was significant enough for growers to adopt deep-well turbine pumps from the oil industry in order to reach the groundwater. Artesian wells, which were prevalent until this era, remained artesian only during winter. The installation of the tube wells has led to an additional, significant groundwater concern: seawater intrusion. Seawater intrusion is the process of ocean water entering groundwater tables, contaminating freshwater resources. Many factors contribute to seawater intrusion, including irrigation wells, excess pumping of groundwater, climate change, and sea-level rise. Mechanically, seawater intrusion works across four major dimensions. In the Pajaro Valley, the primary movement of seawater into freshwater is lateral. When a groundwater aquifer falls below sea level, it creates a landward gradient along which the denser seawater moves horizontally into the freshwater. Secondly, major storms and coastal flooding inundate nearby land with seawater, which percolates through the soil and leaches into the underlying groundwater. Additionally, seawater can enter coastal groundwater aquifers from below, since groundwater commonly sits directly on top of seawater, with only the relative density difference separating the two water bodies.

The use of tube wells in the freshwater aquifer leads to pressure changes, and the resulting "cones of depression" allow seawater to mix upward into the freshwater aquifer. Finally, sea-level rise interacts with seawater intrusion in multiple ways: by increasing the frequency and severity of coastal flooding, and by increasing the extent of the seawater "toe", or how far inland the seawater sits below the groundwater aquifer. Altogether, seawater intrusion is a complex, dynamic system that is difficult to combat once in motion. Seawater intrusion was first noticed in the Pajaro Valley in 1951, and its extent has increased seven-fold since its discovery. However, in years of high rainfall, groundwater levels were historically high enough to prevent significant seawater intrusion. Simulations from the Pajaro Valley Hydrologic Model suggest that before the 1984-1992 drought, groundwater levels only dropped below sea level during drought years. Since 1984, however, the groundwater level has been in largely continuous decline. In 2010, the Pajaro Valley Water Management Agency reported that long-term rates of saline intrusion are about 200 ft/year and that intrusion renders 11,000 acre-feet of water unusable annually. One-half of the groundwater table is below sea level year-round, and two-thirds is below sea level after irrigation season in the fall. Even with fluctuations in rainfall, the groundwater table in much of the Pajaro Valley today remains consistently below sea level. Seawater intrusion is typically measured by the concentration of chloride present in a water body. For agricultural purposes, however, chlorides affect crops and yields in the same way as other salts that may be present in irrigation water. Total salt content is generally measured using electrical conductivity and total dissolved solids (TDS). Irrigation water with a salinity value of less than 500 mg/L TDS is the objective for irrigated agriculture. Strawberries, however, are a particularly salt-sensitive crop, with yields beginning to decline at TDS values of 450 mg/L. Irrigation water with high TDS levels can lead to root and foliar absorption, negatively impacting crop yields. The relationship between irrigation water salinity and crop yields is depicted in Figure 3.4: plants can typically tolerate salinity in irrigation water up to a crop-specific threshold, beyond which yields decline linearly. Additionally, irrigation water that is high in sodium can lead to a loss of soil permeability, especially for soils with a high clay content. While salinity issues especially impact the coast of the Pajaro Valley, TDS levels vary significantly across the region. As discussed above, there are many channels for seawater intrusion, and transport of water between aquifer layers is possible: groundwater will move from areas of high to low pressure, through naturally occurring gaps, vertically, or through well bores. The Murphy Crossing area, on the eastern side of the Pajaro Valley, contains especially high levels of total dissolved solids. The highest chloride levels tend to occur in aquifers consisting of the Aromas Red Sands and the Purisima geologic formation, with values from less than 5 mg/L to 14,600 mg/L. The average total dissolved solids levels across the Pajaro Valley from 2003-2020 are shown in Figure ?. Salinity also varies significantly over time, due to changes in precipitation and groundwater use.
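Where a quantitative form is helpful, the threshold-then-linear-decline relationship described above is often formalized as a Maas-Hoffman-type response. The sketch below is illustrative only: the threshold comes from the strawberry figure quoted in the text, while the slope coefficient is an assumed placeholder rather than a published value.

```python
def relative_yield(tds_mg_l: float, threshold_mg_l: float, slope_pct_per_mg: float) -> float:
    """Threshold-slope (Maas-Hoffman-style) crop salinity response.

    Yield is 100% of potential below the crop-specific salinity threshold
    and declines linearly with salinity above it, floored at zero.
    """
    if tds_mg_l <= threshold_mg_l:
        return 100.0
    return max(0.0, 100.0 - slope_pct_per_mg * (tds_mg_l - threshold_mg_l))

# Strawberry threshold of 450 mg/L TDS (from the text); the 0.05 %/mg slope
# is a hypothetical value chosen only to illustrate the shape of the curve.
for tds in (300, 450, 600, 900):
    print(tds, relative_yield(tds, threshold_mg_l=450.0, slope_pct_per_mg=0.05))
```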
Other pollutants, including nitrates and phosphates, also raise water quality concerns in the Pajaro Valley. However, while nitrates and phosphates are of concern for human and environmental health, they do not have a negative impact on crop yields. The Central Coast Regional Water Quality Control Board has water quality objectives for its irrigation supplies. Nitrate contamination is largely due to fertilizer, while the source of salts is primarily seawater intrusion, although seawater also contains nitrates. Therefore, while these contaminants are essential to track, the concerns for agricultural producers are negligible.

Although seawater intrusion was discovered in the Pajaro Valley in the 1950s, broader management did not take place until a California-wide drought in the late 1970s spurred statewide action. In 1977, the Governor's Commission to Review Water Rights in California was created, and its report contained recommendations to improve groundwater management and address overdraft.

The standard unit of measurement is milliliters of blood per 100 grams of brain tissue per minute

Many human intervention studies have therefore focused on middle-aged and elderly adults at increased vascular risk, who are also known to be at increased risk of cognitive impairment and dementia, allowing room for improvement by lifestyle-based intervention strategies.

CBF is defined as the volume of arterial blood delivered to a unit mass of brain tissue per unit of time. The different imaging techniques used to assess brain perfusion will be mentioned only briefly, as they have already been critically reviewed. The direct methods discussed below have been developed to measure the delivery of arterial blood to the capillary bed. A frequently observed value in human gray matter is about 60 mL/100 g/min, corresponding to the delivery of approximately 1 mL of blood to 100 g of brain tissue per second. Assuming an average brain tissue density of 1 g/mL, this means that approximately 1% of the total tissue volume is supplied with freshly delivered blood each second. In earlier studies, radioactive tracers were used to measure absolute blood flow in the brain. However, CBF measurements using radiotracers require a specialized imaging unit, and time intervals between repeated measurements are required to minimize radiation exposure. These delays significantly reduce the usefulness of radioactive CBF measurements in human intervention studies. Therefore, increasing attention has been directed to recent developments in magnetic resonance imaging that enable the non-invasive measurement of cerebral perfusion in human volunteers.
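For clarity, the 1%-per-second figure quoted above follows directly from unit conversion (a simple arithmetic check of the stated values):

\[
60\ \frac{\mathrm{mL}}{100\,\mathrm{g}\cdot\mathrm{min}} \;=\; 1\ \frac{\mathrm{mL}}{100\,\mathrm{g}\cdot\mathrm{s}} \;\approx\; \frac{1\ \mathrm{mL\ blood}}{100\ \mathrm{mL\ tissue}\cdot\mathrm{s}} \;=\; 1\%\ \text{of tissue volume per second}\quad(\rho \approx 1\ \mathrm{g/mL}).
\]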

Direct methods for measuring CBF in human subjects include, but are not limited to, single-photon emission computed tomography (SPECT), positron emission tomography (PET), MRI with contrast agents, and arterial spin labeling (ASL) MRI. All these methods are based on measuring the amount of a tracer delivered to human brain tissue by blood flow. PET using injection of 15O-labeled water radiotracers is still considered the gold-standard approach. Important limitations, however, include the need for an on-site cyclotron and the invasive nature and complexity of the measurement. As a promising alternative, ASL is a relatively new, non-invasive MRI method that uses magnetically labeled water molecules in arterial blood as a tracer. This method is currently available on MRI systems produced by the major manufacturers. The general principles have been described in detail before. Briefly, this non-invasive measurement of CBF works by manipulating the magnetic resonance signal of inflowing blood in feeding arteries before it is delivered to the capillary bed of the different areas of the brain. Separate "label" and "control" images are acquired, and the resulting signal difference can be scaled to yield highly repeatable quantitative measures of CBF. Figure 2 shows an example of an ASL CBF map. Recently, human studies that performed both PET and ASL MRI to measure brain perfusion were systematically reviewed. It was concluded that ASL is a promising method for accurate and reproducible CBF measurements, and comparative studies show that ASL is a validated method for non-invasive perfusion imaging in humans. Dietary nitrate, which is found in high concentrations in red beetroot, lettuce, and spinach, may improve CBF through beneficial effects on vascular endothelial function, an important mechanistic determinant of cerebral perfusion. In the mouth, dietary nitrate can be reduced to nitrite by facultative bacteria on the dorsal surface of the tongue.

Once in the blood, nitrite can be further converted into nitric oxide (NO) in the human vasculature, thereby improving endothelial function via increased NO bio-availability. Several human intervention studies have examined the acute effect of dietary nitrate intake on measures of CBF. Presley and colleagues measured cerebral perfusion using ASL MRI after administering a high- versus low-nitrate diet for 24 h to a group of elderly humans. The test diet included beetroot juice and provided 773 mg of nitrate, compared with 5.5 mg for the low-nitrate diet. The authors demonstrated that the high-nitrate diet did not significantly increase global CBF, while regional cerebral perfusion improved in frontal lobe white matter, especially between the dorsolateral prefrontal cortex and the anterior cingulate cortex, which are known to be involved in executive functioning. However, whether the observed increase in regional CBF coincides with concurrent improvements in cognitive functioning remains to be elucidated. A single dose of 500 mL of nitrate-rich beetroot juice acutely increased middle cerebral artery (MCA) mean blood flow velocity, measured non-invasively with transcranial Doppler ultrasonography during submaximal exercise, in twelve healthy, normotensive young adult females. More recently, Wightman et al. investigated the acute effects of 450 mL of beetroot juice on prefrontal cortex CBF parameters in 40 apparently healthy adults. It was found that a single dose of beetroot juice modulated the CBF hemodynamic response, monitored by near-infrared spectroscopy (NIRS), during the performance of tasks that activated the prefrontal cortex. Specifically, an initial rise in prefrontal cortex perfusion at the start of the task period was followed by consistent reductions in cerebral perfusion during the least demanding task, while performance on one of the three cognitive tasks (requiring resources in terms of working memory, psychomotor speed and executive functioning) was improved.

Polyphenols are predominantly found in fruits and vegetables, as well as in red wine, tea and chocolate. These phytochemicals may exert beneficial effects on brain health through their positive impact on endothelial function and other aspects of the vasculature via increased NO bio-availability, which may translate into increased CBF. Acute intake of trans-resveratrol, which is present in the skin of a range of foods including red grapes, raised CBF in healthy adults. In a randomized, placebo-controlled, crossover study, trans-resveratrol administration resulted in dose-dependent increases in prefrontal cortex CBF during tasks that activated this brain region, as assessed with NIRS. Performance on the cognitive tasks was not changed. These results were in line with those of another clinical trial on the acute effects of 250 mg of trans-resveratrol co-supplemented with 20 mg of piperine, which increases the bio-availability of polyphenols. More recently, the effects of long-term trans-resveratrol supplementation on CBF were investigated in 60 adult subjects between the ages of 18 and 30 years. In that study, a single 500 mg dose of trans-resveratrol on the first day increased the CBF response in the frontal cortex during tasks that activate this brain region. However, this effect was not observed with transcranial Doppler ultrasound parameters after 28 days of daily supplementation with 500 mg of trans-resveratrol. In addition, no unambiguous evidence was provided that trans-resveratrol intake improved cognitive function. The effects of trans-resveratrol intake have also been investigated in populations at increased risk of accelerated cognitive decline. In 36 older type 2 diabetic patients, acute consumption of 75 mg of trans-resveratrol significantly improved hypercapnia-induced mean blood flow velocity responses in major cerebral arteries by about 13%. Evans and colleagues also reported beneficial effects of long-term trans-resveratrol supplementation in postmenopausal women aged 45-80 years. In that study, increases of 17% were found in the MCA mean blood flow velocity response to cognitive stimuli and hypercapnia. In addition, performance on a cognitive task in the domain of verbal memory and overall cognitive performance improved, which correlated with the improvements in transcranial Doppler ultrasound parameters. A smaller number of intervention trials have assessed the cerebral hemodynamic effects of other dietary polyphenols. In a double-blind, placebo-controlled, crossover study with 27 healthy adults, the acute effects of a single oral dose of epigallocatechin gallate, the most abundant polyphenol in green tea, were investigated on CBF. The administration of 135 mg of epigallocatechin gallate reduced cerebral perfusion in the frontal cortex during the performance of cognitive tasks activating the frontal cortex, but no changes in cognitive performance were observed. More recently, a human crossover trial was conducted on the cerebrovascular effects of flavanol-rich cocoa. Regional CBF was measured using ASL prior to and two hours following consumption of a high-flavanol drink or placebo. In agreement with an RCT involving healthy young men and a pilot trial of four healthy females consuming 516 mg of cocoa flavonoids, acute improvements in resting CBF were observed following consumption of the high cocoa flavanol drink in eight men and ten women aged 50-65 years.
More specifically, higher resting CBF was observed in both the anterior cingulate cortex and the central opercular cortex of the left parietal lobe. Unfortunately, the effects on cognitive performance were not evaluated. In another study with elderly subjects, a single dose of cocoa increased hypercapnia-induced MCA blood flow velocity.

The effects, however, were not evident after daily supplementation with 900 mg of cocoa flavanols for one week. Finally, Bowtell and colleagues investigated the effects of twelve weeks of blueberry concentrate supplementation on cerebral perfusion using ASL MRI in healthy elderly adults. The concentrate provided 387 mg of anthocyanins. They found improvements in gray matter CBF in the parietal and occipital lobes, as well as some evidence suggesting improved working memory after blueberry versus placebo supplementation.

Haast and Kiliaan recently summarized the effects of the n-3 long-chain polyunsaturated fatty acids (LC-PUFAs), eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which are predominantly found in fatty fish and fish oils, on indicators of brain health. These dietary fatty acids can be incorporated into all lipid fractions. LC-PUFAs may have anti-inflammatory effects and increase the fluidity of cell membranes. They may also improve vascular endothelial function and arterial stiffness, which are both important mechanistic determinants of CBF. Two human intervention trials investigating the longer-term effects of these fatty acids on CBF were discussed by Haast and Kiliaan. One study indeed showed that regional cerebral perfusion in the prefrontal cortex improved during the performance of nine computerized cognitive tasks. In that study, 65 healthy adults received a daily DHA-rich fish oil supplement of either 1000 or 2000 mg for 12 weeks. The total daily dose of LC-PUFAs for the 1000 mg fish oil group was 450 mg DHA + 90 mg EPA, and for the 2000 mg fish oil group, 900 mg DHA + 180 mg EPA. Relative changes in hemoglobin concentration were assessed in the prefrontal cortex using NIRS. However, no beneficial effects on cognitive performance were found. Furthermore, PET experiments in humans injected intravenously with labeled DHA showed that the rate of DHA incorporation into brain lipids correlated with the regional CBF in that particular region. Konagai and colleagues provided further evidence of beneficial modulation of cerebral hemodynamics in the prefrontal cortex during working memory task completion when 45 elderly men were supplemented for 12 weeks with n-3 LC-PUFAs from krill oil. However, these findings could not be replicated in a recent large RCT. In 86 healthy older adults who reported memory deficits, no effects of long-term supplementation with 2000 mg of DHA-rich fish oil, alone or in combination with other nutrients, were observed on NIRS measures during task performance or on cognitive demand battery task outcomes. These results, however, should be interpreted with caution. As discussed by the authors, the study was limited by the fact that the utilized methodology only provides a measure of acute changes that take place during each discrete recording session. This limitation should be taken into account in long-term supplementation studies, because long-lasting changes in hemodynamic parameters between consecutive recording sessions, which are undetectable by NIRS, might be induced. The most widely consumed psychoactive compound is caffeine, which is found in various drinks and foods, such as coffee, tea, soft drinks and chocolate. Caffeine is a well-known cerebral vasoconstrictor that significantly reduces resting cerebral perfusion by antagonizing adenosine receptors in the human brain, especially the A1 and A2A subtypes that mediate vasodilation. Using PET methodology, Cameron et al.
quantified the magnitude of the decrease in CBF in 1990. A single dose of 250 mg of caffeine reduced resting CBF, with decreases ranging from 22% to 30%; this is in line with later studies using ASL and PET. Recently, Turnbull and colleagues evaluated the literature on the effects of acute caffeine intake on CBF in adult subjects. Trials investigating intakes of 175 mg or more observed significant decreases in CBF in all study populations, whereas studies that administered lower doses only reported significant decreases in caffeine-naïve or low-caffeine consumers, not in habitual consumers. Altogether, the authors of the review concluded that there is some evidence for a dose-response relationship between caffeine intake and CBF, with greater sensitivity in caffeine-naïve subjects compared with habitual caffeine consumers.

Most blueberry cultivars are highly to moderately susceptible to AFR

Although the inclusion criteria were defined inclusively, so that patients with individual clinical criteria of rapid disease progression could have been included, all patients fulfilled the Mayo classification criterion as the primary reason for inclusion. Short-term KDIs did not show an acute effect on TKV in either arm. Indeed, dietary interventions in small animals are expected to result in earlier responses than in humans. Interestingly, one patient in the KD group showed a TKV reduction of 8.4% after KD, with a return to baseline at the final study visit. This patient was consistently ketogenic throughout the KD and reached the peak acetone values of the KD group. A recent study on intermittent fasting and caloric restriction (CR) in obese ADPKD patients showed positive effects, with reductions of body weight and adipose lipid stores correlating with slowed kidney growth. However, that study did not measure ketone bodies, and considering the type of intervention, efficient ketosis is not expected: CR without limiting carbohydrate (CHO) intake hardly induces ketosis, and in intermittent fasting a single daily CHO-containing meal interrupts ketogenesis. In studies examining non-ADPKD patients, similar dietary regimens only intermittently led to very low levels of ketosis. Nevertheless, it is possible that the low ketone body levels potentially reached may have contributed to their findings. Whether longer-lasting KDIs have beneficial effects on TKV should be further investigated in larger studies.

A randomized controlled clinical trial on this topic is currently ongoing, and another study has been announced. TLV measurements showed a significant decrease in 8/10 patients. However, after returning to a CHO-rich diet, there was a clear-cut, prompt rebound. Glucose restriction, as in ketosis, results in a depletion of liver glycogen stores. Considering that we only found significant changes in the non-cystic liver parenchyma, this is likely the reason for the reversible TLV changes. In non-ADPKD patients, reductions in liver volume due to low-calorie diets have been widely described and are commonly exploited in bariatric surgery. However, a final conclusion on this topic and on the effects of KDIs on cystic and non-cystic liver parenchyma in ADPKD will require analyses of larger cohorts and longer-term interventions in patients with severe polycystic liver disease (PLD), also considering that our study included a high proportion of patients with a very low liver cyst fraction. Consequently, it is worth further investment in this regard, taking into account the complete lack of efficient therapeutic options for PLD. Rebounds of TLV have also been described after discontinuation of disease-modifying drug therapy with somatostatin analogues in PLD. Hunger occurred significantly more often in the WF group, while an increased feeling of fullness was occasionally reported in the KD group. Two patients reported self-limited palpitations. Ketogenic diets can lead to a prolonged QT interval with an increased risk of cardiac arrhythmias. Under ketosis, regular electrocardiogram checks should be considered for patients at risk. Apart from this, no safety-relevant physical complaints, in particular no gout attacks, kidney stones or hypoglycemia, were observed.

We observed a statistically significant increase in total cholesterol and LDL-C in the KD group and an almost statistically significant increase of LDL-C in the WF group. It is known that KDs and WF can lead to at least transient increases in LDL-C and total cholesterol, most likely through depletion of adipose lipid stores and, for KD, the additionally increased intake of fatty acids. While cholesterol levels normalize after cessation of fasting, ketogenic diets have historically shown inconsistent effects on cholesterol and LDL-C levels. However, potential increases in total cholesterol and LDL-C may normalize on longer-term ketogenic diets. KDs are known to have several beneficial effects on cardiovascular disease risk, such as improvements in body weight, insulin resistance, blood pressure, HbA1c levels and inflammatory markers. Moreover, the increase in LDL-C is mainly due to large LDL particles, not the more atherogenic small dense LDL particles. Nonetheless, elevated LDL-C levels are a clearly defined cardiovascular risk factor in clinical practice, regardless of their sub-typing, and chronic kidney disease is a state of increased cardiovascular risk in general. Consequently, prospective long-term studies are needed to draw a definitive conclusion on the effects of a prolonged ketogenic diet on cardiovascular risk in ADPKD patients. Furthermore, there was a significant increase in uric acid, resulting in hyperuricemia in both groups after the KDIs. Increases in uric acid under ketogenic metabolism and fasting have been described multiple times. One of the reasons for hyperuricemia is competition between BHB and uric acid for the same renal transport sites. Uric acid levels returned to baseline after resumption of a CHO-rich diet in both groups, and no gout attacks or kidney stones were observed. Whether the increase in uric acid is clinically meaningful will require larger, longer-term trials. Patients at risk should be monitored during KDIs, and appropriate measures, e.g. prescription of citrate, may be considered. We also detected a significant increase in serum bilirubin levels in our WF group.

Such increases upon fasting are known and considered not to be clinically relevant. The KDIs led to significant weight loss and a reduction in body fat that persisted even after returning to a CHO-rich diet. Beneficial effects of KDIs, such as improved body weight and anti-inflammatory effects, have been argued to outweigh the possible adverse effects on CVD risk associated with cholesterol increases and to play a protective role in NAFLD. Regarding ADPKD, a recent study indicated that weight reduction in overweight patients may slow the rate of kidney growth compared with historical data, and obesity has been shown to be associated with disease progression. Previously reported effects of KDIs on blood pressure were not observed in our trial, which may be a consequence of the short intervention period. However, blood pressure medication had to be stopped in one patient due to a symptomatic blood pressure decrease upon starting KD. In total, 80% of all patients reached the combined feasibility and metabolic endpoint. This is in line with recent studies indicating good feasibility of KDIs in ADPKD patients. In addition, Hopp et al. recently reported good feasibility in their 1-year weight-loss trial in ADPKD patients. Some side effects of KDs, like the "keto flu," occur mainly at the beginning. Therefore, the feasibility of KDs might be even better with longer intake and adaptation to the diet. Taken together, there appears to be generally good acceptance of dietary interventions among ADPKD patients. This study has several limitations: most importantly, the small number of participants needs to be considered when interpreting the results of statistical testing. Second, there was a gender imbalance, with 80% of participants being male, which limits the comparability of our data. The KDIs were of short duration; whether longer-term KDIs have a more significant effect, e.g. on TKV, remains unclear. The BHB and acetone cutoffs were based on a limited amount of data. The aim of the present study was to investigate dietary interventions that are accessible to a wide community, which would not be possible if aiming for deep ketosis, while still staying clearly in the ketogenic range. Most investigators agree that normal BHB values are between 0.1 and 0.5 mmol/L. Since we were not aiming for deep ketosis, we defined the ketosis range from a BHB value of 0.8 mmol/L, which is significantly above these values and should roughly correspond to an acetone level of 10 p.p.m. The manufacturer recommendations also indicate a target ketosis range between 10 and 40 p.p.m., with the cutoff to ketosis indicated as low as 5 p.p.m. acetone in breath. In conclusion, in our proof-of-principle trial, short-term KDIs in ADPKD were safe and feasible and triggered ketosis effectively, but did not show an acute impact on TKV. Larger studies are required to further investigate the potential beneficial effects of KDIs in ADPKD.

Anthracnose fruit rot (AFR), caused by the fungal pathogen Colletotrichum fioriniae Marcelino & Gouli, is among the most destructive and widespread fruit diseases of blueberries. Infection by C. fioriniae impacts fruit quality and can result in a complete loss of post-harvest yield. Colletotrichum species have been reported to infect numerous other high-value fruit crops, including apple, citrus, and strawberry. Infections occur as early as fruit set but remain latent until the fruit ripens, complicating detection and protection of the crop.
Initially, sunken areas develop on the fruit surface, followed by the exudation of salmon-colored spore masses. Fungicides remain the primary method to mitigate AFR infection in cultivated blueberry. However, they are often expensive and not a favorable option for growers. Moreover, some of these fungicides are suspected carcinogens, whereas others are prone to the development of fungicide resistance. Fungicide sprays are often applied more frequently than necessary because of the difficulty of optimizing spray timing, given the long latency period and the variable weather conditions influencing the pathogen life cycle. Therefore, the development of AFR-resistant cultivars is highly desired by the blueberry industry. Several highly resistant cultivars have been identified, including the northern highbush Vaccinium corymbosum L. 'Draper', which displays strong resistance in the field and in laboratory inoculation studies.

The genome of 'Draper' was previously sequenced for three primary reasons: it is a commonly utilized parent in breeding programs, it is widely cultivated worldwide as an early- to mid-season ripening variety, and it is highly resistant to AFR. To our knowledge, however, no cultivars exhibit complete resistance. In these studies, C. fioriniae showed differential infection strategies and infection rates in resistant versus susceptible cultivars. Furthermore, Miles and Hancock recently reported that resistance to AFR infection is highly heritable and argued that only a few loci are likely involved in resistance. However, the underlying genetic mechanism of resistance to AFR remains poorly understood in blueberry and other fruit crops. Blueberry fruits contain high concentrations of many phytochemicals, including compounds with known anti-fungal properties. One potential component of resistance to AFR could involve specialized metabolites. For example, quercetin 3-O-rhamnoside is a flavonol glycoside, synthesized from the amino acid precursor L-phenylalanine via the phenylpropanoid pathway, whose antimicrobial activity has been demonstrated against C. fioriniae, Pseudomonas maltophilia, and Enterobacter cloacae. In fact, treating susceptible blueberry fruits with a 4% solution of extract from resistant fruit containing quercetin 3-O-rhamnoside, among other anthocyanins and non-anthocyanin flavonoids, decreased C. fioriniae infection by 88%. Quercetin and its glycosides have been studied in other systems, but the dynamics of these compounds remain poorly understood in blueberry. Quercetin glycosides may be deglycosylated, leaving the bio-active core, quercetin. Structural analysis of plant-derived flavonoids revealed that quercetin contains numerous structural components important for bioactivity against certain pathogens, including methicillin-resistant Staphylococcus aureus, vancomycin-resistant enterococci, and Burkholderia cepacia. Furthermore, quercetin may be oxidized to form quinones, antifungal compounds previously shown to be effective against certain Colletotrichum species. However, previous studies have also proposed that AFR resistance in ripe blueberries may be due to an interaction between simple phenolic compounds and organic acids rather than to individual fungitoxic compounds. Here, we used a genetic mapping approach to identify genomic loci associated with resistance to AFR infection in northern highbush blueberry. We generated an RNA-seq dataset to identify which genes are differentially expressed during infection in 'Draper' mature fruits. Finally, we performed metabolite profiling in mature fruits and identified a metabolite with properties consistent with a quercetin rhamnoside whose abundance is positively correlated with AFR resistance.

AFR is a top disease priority for the blueberry industry, as it can result in up to 100% post-harvest yield loss. Thus, growers have largely relied on fungicides to maximize yields. Both the infection and resistance mechanisms of AFR are highly variable among and within crops. Resistance may arise from passive mechanisms such as physiological fruit characteristics and pre-existing compounds with anti-fungal properties. Immature fruits often exhibit many features that lend themselves to resistance to anthracnose, such as firmness, pH, and antimicrobial compounds. However, these resistance factors tend to diminish as the fruit matures.
Further, the accumulation of soluble sugars in conjunction with ascorbic acid was previously associated with anthracnose resistance in guava. Work in blueberry indicates a connection between sugar content and anthracnose resistance, but some moderately susceptible cultivars have high sugar concentrations. This suggests that sugar content may be only one piece of a multi-factor resistance mechanism. Additionally, the abundance of certain fruit volatiles, including hex-2-enal, has been linked to fruit rot resistance in strawberry. While some of these volatiles are also found in blueberry, their presence and quantity are not correlated with resistance.

Eight quadrats at each plot were utilized to record under story plants and tree seedling densities

A global overview of climate-induced forest mortality provides a detailed assessment of events driven by climatic water and heat stress since 1970; few of these documented dieback events provide the opportunity to examine vegetation changes that occur over a longer time frame. Yellow-cedar, a species distributed from the northern Klamath Mountains of California to Prince William Sound in Alaska, has been dying in southeast Alaska since the late 1800s, with intensifying rates observed in the 1970s and 1980s. Recent research reveals a complex "tree injury pathway" in which climate change plays a key role in a web of interactions leading to widespread yellow-cedar mortality, referred to as yellow-cedar decline. Prominent factors in this injury pathway include the cold tolerance of roots, the timing of dehardening, and regional trends of reduced snowpack at low elevations. Early springtime thaws trigger dehardening and reduce the snow cover that insulates soil and shallow fine roots from periodic extreme cold events; this can injure yellow-cedar roots and initiate tree mortality, which is predominantly limited to lower elevations. Despite the extent of research on the mechanisms of decline, overstory and understory dynamics in declining stands are not well understood. The direct loss of yellow-cedar has important ecological, economic, and cultural implications; however, other changes that emerge in response to decline are also relevant in these forests. Researchers are just beginning to understand the influence of dead cedars on watershed nutrient export. Economically and culturally, yellow-cedar trees are important because they provide valuable products for Alaska Native communities and the forest industry. These coastal forests also provide forage for the Sitka black-tailed deer, an important game animal throughout the region.

Since the 1980s, much forest-related research in southeast Alaska has addressed the implications of various active forest management regimes for the habitat of this commonly hunted species and for biodiversity; aspects of this research centered on old-growth habitat and the effects of land use practices, such as clear-cutting or partial cutting, on forage. To date, researchers have not addressed the effects of yellow-cedar decline on the availability of key forage species. The death of yellow-cedar and the shifts in plant community dynamics in forests affected by decline can have cascading effects on the human-natural system by affecting the ecosystem services these forests provide. We studied the process of forest development using a chronosequence to compare forests unaffected by widespread mortality with those affected at different time points over approximately one century. Considering size classes from seedlings to large trees across the chronosequence, our analysis of the conifer species populations at various life history stages, including death, documented changes occurring in forests affected by decline and extended a view of forest composition and structure into the future. We hypothesized that: (1) western hemlock and other conifers increase in importance as the contribution of yellow-cedar to the conifer community structure is reduced over time; (2) seedling and sapling regeneration increases as yellow-cedars die and the canopy opens; (3) the community composition of understory plants changes over time such that shrubs increase in abundance; and (4) the volume of key forage species for the Sitka black-tailed deer increases in forests affected by decline. Our study illustrates the long-term consequences for many plant species when a single tree species suffers from climate-induced mortality.

The modern climate in the southeast region of Alaska is mild and hypermaritime, with year-round precipitation, an absence of prolonged dry periods, and comparatively milder seasonal conditions than continental climates at similar latitudes. Mean annual rainfall in Sitka and Gustavus, the two towns closest to the remote, outer-coast study area, is 2200 and 1700 mm, respectively. The high rainfall that occurs throughout the Alexander Archipelago, combined with its unique island geography, geologic history, and absence of fires, maintains some of the most expansive old-growth forests found in North America. Five common conifer species occur in the northern range of the Archipelago: western hemlock, mountain hemlock, yellow-cedar, Sitka spruce, and shore pine. These coastal forests are simple in composition yet often complex in age and tree structure. Yellow-cedar occurs across a soil-drainage gradient, from poorly drained bogs to well-drained soils on steeper slopes that often support more productive stands. This study occurs in the northern portion of the yellow-cedar population distribution and at the current latitudinal limits of forests affected by decline. We centered our investigation on protected lands in four inlets in the Alexander Archipelago, on the outer coast of the West Chichagof-Yakobi Wilderness on Chichagof Island in the Tongass National Forest and in Glacier Bay National Park and Preserve (GLBA). Aerial surveys were conducted in 2010 and 2011 to assess the presence of affected forests and to identify the edge of yellow-cedar dieback that occurs south of GLBA on Chichagof Island. Aside from a brief history of small-scale gold mining in several areas on Chichagof Island between 1906 and 1942, there is little evidence of human impact on these lands, making them ideal for studying ecological dynamics.

Drawing upon previous studies that estimated the time-since-death for five classes of standing dead yellow-cedar trees at various stages of deterioration, our plot selection consisted of sequential steps, carried out in the field, to sample forests representative of a range of time-since-death. Not all yellow-cedar trees in a forest affected by mortality die at once; mortality is progressive in forests experiencing dieback. Because these trees are highly resistant to decay, they remain standing for up to a century after their death. As a result, they offer the opportunity to date the disturbance approximately and to create a long-term chronosequence. First, we stratified the study area coastline into visually distinguishable categories of "cedar decline status" by conducting boat surveys and assessing cedar decline status across 121.1 km of coastline in June 2011 and 2012. We traveled the coastline and made visual observations of live and dead yellow-cedar trees and their snag classes. We assigned cedar decline status to coastal forests at 100 m increments using a Garmin 60 CSx GPS. Next, using ArcGIS 10.2 Geographic Information System software, we randomly generated plot locations in forests categorized during the coastline survey as follows: live, unaffected by mortality; recent mortality; mid-range mortality; and old mortality. Lastly, we controlled for basal area and key biophysical factors, including elevation and aspect, as described below. Plots were restricted to elevations less than 150 m, excluding northeast-facing plots, to sample from low-elevation plots representative of conditions where yellow-cedar decline commonly occurs at this latitude. Plots were randomly located between 0.1 and 0.5 km from the mean high tide to avoid sampling within the beach fringe area, and on slopes of less than 72% to limit the risk of mass movement. We excluded plots with a total basal area of less than 35 m2/ha to avoid sampling below the optimal niche of yellow-cedar. This control was performed in the field by point sampling to estimate basal area using a prism with a basal area factor of 2.5. Plots dominated by the presence of a creek bed or other biophysical disturbance were eliminated from plot selection due to the confounding influence of disturbance on the number of trees standing and on species abundance. A minimum distance of 300 m was maintained between all plot centers. By restricting our sampling to these controls, our study was designed to examine the process of forest development post-decline in low-elevation coastal forests with plot conditions typical for yellow-cedar mortality, excluding bog wetlands, where yellow-cedar may co-occur sparsely with shore pine. After controlling for biophysical factors, 20 plots were sampled in live forests and 10 plots in each of the affected cedar status categories, for a total of 50 plots across the study area.

Data were collected in fixed, circular nested plots to capture a wide range of tree diameters and in quadrats within each plot to account for spatial variability in understory vegetation. Forty plots were established and measured during the 2011 field season and 10 plots during the 2012 field season, within the seasonal window of mid-June to mid-August. Nested circular plots were used to sample trees and saplings as follows: a 10.3 m fixed-radius plot for trees of at least 25.0 cm diameter at breast height (dbh), and a 6.0 m fixed-radius plot for saplings (less than 2.5 cm dbh and at least 1 m height) and trees 2.5–24.9 cm dbh.
We counted live saplings of each species to analyze the population dynamics of individuals that survive to this size class. For each tree, we recorded species, dbh to the nearest 0.1 cm, height to the nearest 0.01 m, live or dead status, and, for dead trees, snag class (I–V).
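To make the plot-selection and tallying arithmetic concrete, the sketch below shows how a prism count converts to basal area per hectare and how a fixed-radius plot tally expands to stems per hectare. It is a minimal illustration assuming the basal area factor (2.5) and plot radius (10.3 m) given in the text; the tallies themselves are hypothetical.

```python
import math

def prism_basal_area_per_ha(in_tree_count: int, baf: float = 2.5) -> float:
    """Point sampling: each tree counted 'in' through the prism represents
    `baf` m^2/ha of basal area, regardless of its distance from plot center."""
    return in_tree_count * baf

def stems_per_ha(stem_count: int, plot_radius_m: float) -> float:
    """Expand a fixed-radius plot tally to a per-hectare stem density."""
    plot_area_ha = math.pi * plot_radius_m ** 2 / 10_000.0
    return stem_count / plot_area_ha

# Hypothetical tallies: 16 'in' trees gives 40 m^2/ha, above the 35 m^2/ha
# inclusion threshold; 12 large trees in a 10.3 m radius plot is ~360 stems/ha.
print(prism_basal_area_per_ha(16))    # 40.0
print(round(stems_per_ha(12, 10.3)))  # ~360
```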

To provide an additional long-term view of species changes, we recorded counts of smaller conifer seedlings, identifying western hemlock and mountain hemlock to genus and other conifers to species. We noted the presence or absence of each conifer species in the 10–99 cm size class but did not sample this size class for individual counts. We recorded the maximum height and percentage cover of each plant species observed, according to the Daubenmire method, on a continuous scale. In cases where consistent identification to species was difficult (Salisb.; Vaccinium ovalifolium Sm., and V. alaskaense Howell), we combined observations but noted the presence of both species for total richness across the study area. The blueberries V. ovalifolium and V. alaskaense are similar in appearance and often synonymized. Mosses and liverworts were recorded together as bryophytes within the quadrat. Sedges were recorded together but distinguished from true grasses.

The changes observed across the chronosequence provide strong evidence that this species dieback associated with climate change can result in a temporally dynamic forest community distinguished by the diminished importance of yellow-cedar, an increase in graminoid abundance in the early stages of stand development, and a significant increase in shrub abundance and volume over time. The timing and intensity of tree mortality, as characterized by our stratified sampling of cedar decline status, played an important role in determining understory community composition and overstory processes of stand re-initiation and development. Our results highlight the ways in which widespread mortality of one species can create opportunities for other species and underscore the importance of considering long-term temporal variation when evaluating the effects of a species dieback associated with climate change. Methods for predicting future changes in species distributions, such as the climate envelope approach, rely upon statistical correlations between existing species distributions and environmental variables to define a species' tolerance; however, a number of critiques point to many factors other than climate that play an important role in predicting the dynamics of species' distributions. Given the different ecological traits among species, climate change will probably not cause entire plant communities to shift en masse to favorable habitat. Although rapid climatic change or extreme climatic events can alter community composition, a more likely scenario is that new assemblages will appear. As vulnerable species drop out of existing ecosystems, resident species will become more competitive and new species may arrive through migration. Individual species traits may also help explain the process of forest development in forests affected by widespread mortality, as the most abundant species may be those with traits that make them well adapted to changing biotic and abiotic conditions. We were unable to evaluate the independent effect of soil saturation on canopy openness, but the fact that canopy openness was a significant predictor of shore pine and mountain hemlock sapling occurrence suggests the important roles of soil conditions and light in determining which species are more likely to regenerate. Both species are known to prefer wet soils and scrubby open forests, and canopy openness in forests affected by decline has two driving components: soil saturation and crown deterioration caused by yellow-cedar death.
Young mountain hemlock seedlings, for example, grow best in partial shade, likely explaining why this species regenerated relatively well as saplings in recent-mortality stands before canopy openness increased further. In contrast, western hemlock is known to tolerate a wide range of soil and light conditions for establishment and growth and seeds prolifically, as does Sitka spruce. Species can also respond to varying light conditions with differential growth responses: western hemlock reached its maximum growth rate when exposed experimentally to relatively high light intensities, whereas bunchberry responded most strongly to relatively low light intensities.

Water on Mars has always been of interest for physical and chemical reasons

The amino acid α-carbon gives rise to two mirror-image configurations based on the relative orientation of the side group. Terrestrial biological amino acids consist of only one configuration, the L-enantiomer; however, there is no reason why proteins in extraterrestrial life would need to be based on L-amino acids as on Earth. Proteins as catalytically active as their natural biological L-amino acid counterparts have been synthesized entirely from D-amino acids; thus, it is assumed that life elsewhere could be based on either L- or D-amino acids. The amino acid homochirality associated with extant terrestrial life changes over time after a bacterial community dies, due to racemization. While organisms are living, protein turnover is rapid enough to preserve the homochiral protein composition; after death, however, the amino acids interconvert from the biological L-enantiomer to the abiological D-enantiomer. This interconversion is a natural process that becomes significant over geological timescales and continues until the two enantiomers are present in equal abundance, that is, a D/L ratio equal to 1. The D/L enantiomer ratio, along with known rates of racemization, has been useful in determining the geological age of terrestrial samples up to hundreds of millions of years old. Although racemization compromises the microbial signature of terrestrial proteins over geological timescales, the determination of amino acid chirality still offers a powerful biosignature for the presence of microbial life. The detection of amino acids alone is not unequivocal evidence of life; rather, a homochiral signature is necessary to confirm a biological amino acid source.

Although sufficiently old biological samples may show racemic signatures similar to those derived from abiotic syntheses, well-preserved amino acids from extinct bacterial communities at extremely cold temperatures would still show good chirality preservation for hundreds of millions of years. In future life detection experiments, the chirality of amino acids should easily discriminate between biological amino acids and those that formed abiotically or were delivered by meteorite influx.

Known abiotic pathways exist for the formation of amino acids, such as spark discharge experiments and laboratory hydrothermal syntheses; however, they all produce equal amounts of D- and L-amino acids in low concentrations. This marked difference between homochiral biological and racemic abiotic compositions permits discrimination of the source of detected amino acids by resolving their enantiomeric abundances. Also important is that abiotic amino acid syntheses tend to form a relatively small suite of amino acids compared with those utilized in bacterial proteins. The suite of protein amino acids utilized in the bacterium E. coli is evaluated in Chapter II and compared to previous empirical studies. If detected amino acids are too old or degraded for any chiral signature to be deduced, the distribution can be used to definitively decide whether the amino acids are microbially or abiotically derived. A variety of amino acids have been detected in meteorites as well, but these are interpreted as having formed during parent-body processes. The fact that amino acids within meteorites are racemic, and that they show a suite of amino acids similar to those formed in abiotic syntheses, makes them easy to distinguish from biologically sourced amino acids, again based on chirality or distribution. Any preferential dominance of L-amino acids detected in meteorites is assumed to be due to terrestrial contamination. Certain amino acids within the large suite detected in meteorites are not components of terrestrial proteins; rather, they are known to be indigenous because they are unique to meteorites and reflect formation during parent-body processes.
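As a schematic of the discrimination logic described above (homochiral excess versus racemic composition, and breadth of the amino acid suite), the sketch below encodes it as a simple decision function. The thresholds and the set of non-protein marker amino acids are illustrative assumptions, not values from this work.

```python
def classify_amino_acid_source(d_over_l: float, detected: set[str]) -> str:
    """Rough source assignment from enantiomer ratio and amino acid suite.

    d_over_l: measured D/L ratio for a chiral amino acid (1.0 = racemic).
    detected: names of amino acids identified in the sample.
    """
    # Amino acids common in meteorites and abiotic syntheses but absent from
    # terrestrial proteins (illustrative, not exhaustive).
    non_protein_markers = {"isovaline", "aminoisobutyric acid"}

    if abs(d_over_l - 1.0) > 0.2:  # assumed threshold for a meaningful excess
        return "biological (homochiral excess preserved)"
    if detected & non_protein_markers:
        return "abiotic/meteoritic (racemic, non-protein amino acids present)"
    return "ambiguous (racemic; could be abiotic or fully racemized biology)"

print(classify_amino_acid_source(0.05, {"alanine", "glycine", "aspartic acid"}))
print(classify_amino_acid_source(1.0, {"isovaline", "alanine"}))
```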

The two most abundant extraterrestrial amino acids are isovaline and aminoisobutyric acid; however, there are a variety of others that are recognized as indicative of an extraterrestrial signature. The presence of these amino acids in geological samples is suggested to reflect deposition during a period of high meteoritic influx.

The most relevant amino acid biomarkers depend on their relative abundances in bacterial proteins and the stability of the individual residues. There are two major amino acid diagenetic pathways: degradation and racemization. The rates associated with degradation are slower than those of racemization by at least a factor of 100 in most cases. The most stable protein amino acids will persist through geological time and allow for the quantification of long-extinct bacterial communities. Amino acids degrade primarily by decarboxylation or deamination, but other processes such as dehydration and aldol cleavage can also be significant. The most commonly occurring amino acids in ancient and degraded microbial communities are glycine and alanine, a finding corroborated by analyses of natural samples of anoxic sediments. Glycine and alanine are both present in bacterial communities in very high abundances; however, only alanine shows degradation rates among the slowest of the amino acids. This implies that glycine may be better preserved in geological settings or that diagenetic pathways lead to its secondary formation from other compounds. Regardless, both alanine and glycine remain among the most important amino acids to assay for in geological samples, along with aspartic acid, glutamic acid, and serine. Valine, present in lower abundance than these other amino acids, shows slow racemization and degradation kinetics and should show good preservation in environmental samples despite composing only ~5% of total bacterial protein. The plots in Figure 1.6 show the evolution of aspartic acid concentration and D/L ratio versus time. Aqueous rates of aspartic acid degradation and racemization were used in these models and therefore represent the fastest rates of these reactions.

Racemization is a much faster process, corresponding to an aspartic acid racemization half-life of ~2,200 years, whereas the half-life of aspartic acid degradation is ~10,000 years. Environmental samples consistently show slower degradation and racemization under colder and drier conditions. Aspartic acid racemization rates in dry environmental conditions have been reported to be as slow as 1.20 × 10⁻⁶ yr⁻¹ and 6.93 × 10⁻⁷ yr⁻¹, equivalent to half-lives of ~600,000 and ~1,000,000 years, and rates must therefore be evaluated carefully for each geological sample for the purposes of amino acid racemization age dating. Likewise, degradation reactions are equally dependent on the mineralogy of the environmental sample and may be accelerated by the presence of metal-ion catalysts. Racemization age dating has been suggested to be applicable to amino acids from hundreds of thousands to millions of years old at low temperatures, and this range can be extended to older samples under colder conditions. Likewise, amino acids from hundreds of millions up to billions of years old could be well preserved under the appropriate environmental conditions. Target biomolecules in the search for evidence of life on Mars must be stable enough to persist over geological timescales so that evidence of life does not go undetected. The fate of amino acids includes racemization, degradation, and bacterial uptake. In the absence of biological processing, racemization is faster than degradation by at least a factor of 100. Racemization proceeds through a planar carbanion intermediate formed by the loss of a proton from the α-carbon, followed by reprotonation on either face. The reactions that destroy amino acids include decarboxylation to amine compounds and deamination. These rates are highly matrix and temperature dependent and must therefore be evaluated for the specific environmental conditions. Although the prevailing cold and dry conditions on Mars tend to drastically lengthen the timescales of organic degradation and amino acid racemization, other effects must be considered. For instance, the surrounding mineral matrix can catalyze amino acid diagenetic reactions, especially degradation in the presence of metallic ions. The specific preservation of organic material will therefore be a strong function of the chemical environment. If intact amino acids are detected and show an abundance of one enantiomer over the other, this would strongly indicate that the source of these amino acids was biological. If these biomarkers from extinct life on Mars have been degraded over geological timescales, there are certain classes of compounds that we would expect to occur as diagenetic end products or intermediates. Compounds such as humic acids and kerogen are products of the diagenesis of organic matter over time; the slow degradation of amino acids may also generate other diagenetic products that indicate which end products are favored on Mars. For instance, decarboxylation is known to be the primary degradation reaction for amino acids such as glycine, alanine, and valine, forming their corresponding amine degradation products.
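As a quick consistency check on these figures, the lines below convert the quoted first-order rate constants to half-lives via t1/2 = ln 2 / k, and the ~10,000-year degradation half-life back to a rate constant; only the rate constants and half-lives themselves are taken from the text.

# Converting between first-order rate constants (yr^-1) and half-lives (yr).
half_life <- function(k) log(2) / k
half_life(1.20e-6)            # ~5.8e5 yr, i.e. roughly the ~600,000-year figure
half_life(6.93e-7)            # ~1.0e6 yr, matching the ~1,000,000-year figure

rate_from_half_life <- function(t_half) log(2) / t_half
rate_from_half_life(10000)    # ~6.9e-5 yr^-1 for the ~10,000-yr aspartic acid degradation half-life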

The study of organic inclusion in terrestrial Mars analogs allows environments on Earth similar to those detected on Mars to be characterized, so that we can understand some of the processes that might be important on Mars. The study of organics in Mars analog minerals can offer an idea of the sequestration potential and stability of these deposits on Earth. Indeed, if Mars really did experience a warm and wet climate early in its history, it may have been more similar to Earth than we realize and may have much in common with many of the proposed Mars analog locations. Determining the stability of organics within terrestrial Mars analog minerals can help to approximate the biochemical stability that might be expected on Mars. Low levels of amino acid degradation products indicative of diagenetic processes can often be used to determine the stabilities, or diagenetic state, of the included amino acids. Figure 1.9 shows the general geological history of Mars, dominated by an early wet era in which clays were formed by aqueous alteration, followed by a catastrophic climate change approximately 3.5 billion years ago. Water is a medium for interesting chemistry, provides a setting for the origin of life, and its past abundance explains many of the erosive features on Mars. Preservation therefore becomes the key issue in finding evidence of life on Mars. Evidence of an extinct Martian biota might derive from a biological community billions of years old and must therefore show good preservation over the history of the samples. The idea of using chirality as a biosignature in the search for evidence of life on Mars was first proposed by Halpern. This idea has resurfaced in current life-detection strategies, which for over 30 years have recognized the strength of amino acid chirality as a biosignature and as a means of discriminating against abiotic amino acid signatures. On Mars, racemization kinetics are expected to be extremely slow because of the cold, dry conditions, and any chiral signature of extinct life should be preserved for billions of years. The harsh surface conditions on Mars may, however, limit the survival of some organics within the host regolith. Because amino acid diagenesis is so intimately linked with matrix effects, the study of amino acid preservation and diagenesis in terrestrial Mars analogs is necessary to make predictions about the best locations to search for biosignatures on Mars. Extrapolation of these diagenetic reaction rates to Mars’ surface temperatures allows estimates of amino acid stability and of the rates of diagenetic reactions on Mars.

This dissertation covers my investigations of organic inclusion and sequestration within various Mars analog minerals. Throughout these studies, amino acids are investigated for their applicability as biomarkers for the detection of extinct or extant microbial communities on Mars. A variety of environments suggested as analogous to Mars in their mineralogical or climatic conditions are profiled and, in some cases, rate data are gleaned from coupled amino acid degradation reactions and extrapolated to predicted rates on Mars. Inclusion of amino acids within analog minerals essentially sequesters them and offers some degree of protection from harsh surface conditions, allowing for enhanced preservation in some cases.
Specifically, these studies investigate amino acid diagenetic reactions, including racemization and degradation, to predict the degree to which these biosignatures survive over geological timescales on the surface of Mars. Chapter 2 characterizes the amino acid composition of bacteria and verifies the analytical methods used in these studies. The amino acid distributions and concentrations are so markedly different from those of any abiotic formation process that discrimination between these sources should be possible even over very long timescales. Chapter 3 introduces a new chemical chronometer based on the detection of amino acid degradation products within ancient geological samples.
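To illustrate the kind of extrapolation described above, the sketch below scales a terrestrial diagenetic rate constant to Mars-like surface temperatures using an Arrhenius relation, k(T) = k_ref * exp(-(Ea/R) * (1/T - 1/T_ref)). The reference rate, reference temperature, and the ~120 kJ/mol activation energy are illustrative assumptions, not values measured in this work.

# Minimal sketch of an Arrhenius-style extrapolation of a diagenetic rate constant
# from a terrestrial reference temperature to Mars-like surface temperatures.
R_gas <- 8.314                     # gas constant, J mol^-1 K^-1
Ea    <- 120e3                     # assumed activation energy, J mol^-1
k_ref <- 6.93e-5                   # assumed reference rate at T_ref, yr^-1 (t1/2 ~ 10,000 yr)
T_ref <- 298                       # reference temperature, K (~25 degrees C)

k_at <- function(T_K) k_ref * exp(-(Ea / R_gas) * (1 / T_K - 1 / T_ref))

T_mars <- c(210, 230, 250)         # representative Mars surface temperatures, K
data.frame(T_K          = T_mars,
           k_yr         = signif(k_at(T_mars), 3),
           half_life_yr = signif(log(2) / k_at(T_mars), 3))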

The biological mechanism behind this winter recovery has been studied but is not fully resolved.

Infections that occur during spring lead to chronic disease; however, infections that occur during late summer and fall may cause disease symptoms in the current year, yet a high proportion of such vines lack symptoms of X. fastidiosa infection in the following year. Nonetheless, models that incorporate low temperatures have substantial explanatory power in predicting rates of winter curing of X. fastidiosa infections in grapevine. Infections that occur early in the season have a longer period during which X. fastidiosa can colonize and reach high infection levels, which may increase the likelihood of the infection surviving the winter. Following this rationale, if most late-season infections remain in the distal ends of shoots and have lower infection levels, removing the symptomatic portion of the vine might eliminate X. fastidiosa. In other words, the efficacy of pruning infected grapevine tissue could depend both on the time of year in which the plant was infected and on winter temperature. A potential benefit of severe pruning over replanting is that pruning leaves a mature rootstock in place, which is likely to support more vigorous regrowth than the developing rootstock of a young transplant. Recent attempts to increase vine productivity by planting vines with more well-developed root systems are based on this presumption. However, even if severe pruning can clear vines of infection, it removes a substantial portion of the aboveground biomass of the vine. Thus, a method for encouraging rapid regrowth of the scion after aggressive pruning is needed. We studied the efficacy of pruning infected vines immediately above the rootstock graft union (the most aggressive pruning method) for clearing grapevines of infection by X. fastidiosa.

We reasoned that if such severe pruning was ineffective at clearing vines of infection, less severe pruning would not be warranted; if severe pruning showed promise, less severe pruning could then be tested. We use the term “severe pruning” to refer to a special case of strategic pruning for disease management, analogous to the use of “remedial surgery” for trunk diseases. To test the efficacy of clearing vines of X. fastidiosa infection, we followed the disease status of severely pruned versus conventionally pruned vines over multiple years, characterized the reliability of using visual symptoms of PD to diagnose infection, and compared two methods of restoring growth of severely pruned vines.

Pruning trials were established in commercial vineyards in Napa Valley, CA, where symptoms of PD were evident in autumn of 1998. The vineyards used for these trials varied in vine age, cultivar, and initial disease prevalence. All study vines were cordon-trained and spur-pruned. We mapped the portions of the six vineyards selected for study based on an evaluation of vines for disease symptoms. The overall severity of PD symptoms for each vine was recorded as follows: 0 = no symptoms, apparently healthy; 1 = marginal leaf scorch on up to four scattered leaves total; 2 = foliar symptoms on one shoot or on fewer than half of the leaves on two shoots on one cordon, no extensive shoot dieback, and minimal shriveling of fruit clusters; and 3 = foliar symptoms on two or more shoots occurring in the canopy on both cordons, with dead spurs possibly evident along with shriveled clusters. To test the reliability of the visual diagnosis of PD, petiole samples were collected from the six vineyard plots when symptom severity was evaluated for vines in each symptom category; these samples were assayed using polymerase chain reaction (PCR). Petioles were collected from symptomatic leaves on 25, 56, and 30 vines in categories 1, 2, and 3, respectively.
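As a sketch of how the reliability of visual diagnosis could be summarized, the lines below compute the proportion of PCR-positive petioles per symptom category with exact binomial confidence intervals; the sample sizes are those given above, but the positive counts are hypothetical placeholders, not the study's results.

# Proportion of PCR-positive samples per symptom category with exact binomial 95% CIs.
n_sampled  <- c(cat1 = 25, cat2 = 56, cat3 = 30)   # petiole sample sizes from the text
n_positive <- c(cat1 = 20, cat2 = 50, cat3 = 30)   # hypothetical PCR-positive counts

reliability <- t(mapply(function(x, n) {
  ci <- binom.test(x, n)$conf.int                  # exact binomial confidence interval
  c(prop = x / n, lower = ci[1], upper = ci[2])
}, n_positive, n_sampled))

round(reliability, 2)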

Next, severe pruning was performed between October 1998 and February 1999 in the six vineyard plots by removing the trunks of symptomatic vines ~10 cm above the graft union. Cuts were made with saws or loppers, depending on trunk diameter. Severe pruning was conducted on 50% of the vines in each symptom category identified during the vineyard survey; the other 50% of vines served as conventionally pruned controls. Sample sizes for control and severely pruned vines in each disease category ranged from 6 to 62 vines depending on the plot, with at least 38 total vines per plot in each control or pruned treatment. In spring 1999, multiple shoots emerged from the remaining section of scion wood above the graft union on severely pruned vines. When one or more shoots were ~15 to 25 cm long, a single shoot was selected and tied to the stake to retrain a new trunk and cordons, and all other shoots were removed at this time. We evaluated the potential of severe pruning to clear vines of infection by reinspecting both control and severely pruned vines in all six plots for the presence or absence of PD symptoms in autumn 1999 and 2000. In all plots, category 3 vines were inspected in a third year; in plot 6, vines were inspected for an additional two years. Finally, in plot 6 we investigated chip-bud grafting as an alternative means of ensuring the development of a strong replacement shoot for retraining. To do this, 78 category 3 vines were selected for severe pruning, 39 of which were subsequently chip-bud grafted in May 1999. An experienced field grafter chip-budded a dormant bud of Vitis vinifera cv. Merlot onto the rootstock below the original graft union, and the trunk and graft union were removed. The single shoot that emerged from this bud was trained up the stake and used to establish the new vine. The other 39 vines were severely pruned above the graft union and retrained in the same manner as vines in plots 1 to 5. Development of vines in plot 6, with and without chip-bud grafting, was evaluated in August 1999 using the following rating scale: 1) “no growth”: bud failed to grow, no new shoot growth; 2) “weak”: multiple weak shoots emerging with no strong leader; 3) “developing”: selected shoot extending up the stake, not yet topped; and 4) “strong”: new trunk established, topped, and laterals developing. All analyses were conducted using R version 3.4.1.
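The statistical analyses themselves are described in the next paragraph; as a minimal sketch, they might be specified in R roughly as follows, assuming vine-level data frames (vines, growth, symptoms, returns) and column names (pcr_positive, category, method, outcome, symptomatic, treatment, year, block, plot, years_to_return, returned) that are hypothetical. The calls mirror the analyses described in the text but are not the authors' original scripts.

library(nnet)        # multinom(): multinomial logistic regression
library(lme4)        # glmer(): mixed-effects models with binomial error
library(survival)    # coxph(), Surv(): Cox proportional hazards

# PCR positivity versus initial symptom severity category (binomial GLM)
m_pcr <- glm(pcr_positive ~ category, family = binomial, data = vines)

# Growth outcome (strong / developing / weak / no growth) versus restoration method
m_growth <- multinom(outcome ~ method, data = growth)
fisher.test(table(growth$method, growth$outcome == "no growth"))   # one pairwise follow-up

# Symptom return over two years: binomial mixed model with a random effect of block
m_return <- glmer(symptomatic ~ treatment * year * category + (1 | block),
                  family = binomial, data = symptoms)

# Rate of symptom return in severely pruned category 3 vines: Cox proportional hazards
m_cox <- coxph(Surv(years_to_return, returned) ~ plot, data = returns)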

We used a generalized linear model with binomial error to compare the relative frequency of X. fastidiosa-positive samples from vines in the different initial disease severity categories. Next, we analyzed the effectiveness of chip budding versus training of existing shoots as a means of restoring vines after severe pruning. This analysis used multinomial logistic regression comparing the frequency of four vine growth outcomes in the following season: strong, developing, weak, or no growth. This main test was followed by pairwise Fisher exact tests of the frequency of each individual outcome between chip-budded vines and vines trained from existing shoots. We analyzed the effect of severe pruning on subsequent development of PD symptoms using two complementary analyses. First, we compared symptom return between severely pruned and control vines in the three symptom severity categories for two years after pruning. To appropriately account for repeated measurements made over time, this analysis consisted of a generalized linear mixed-effects model with binomial error, a random effect of block, and fixed effects of treatment, year, and symptom severity category. Next, we analyzed the rate at which PD reappeared in severely pruned category 3 vines in subsequent years using a survival analysis; specifically, we used a Cox proportional hazards model with a fixed effect of plot.

Accurate and time- or cost-efficient methods of diagnosing infected plants are important elements of a disease management program, both with respect to roguing to reduce pathogen spread and to the efficacy of pruning to clear plants of infection. Accurate diagnosis of PD in grapevines is complicated by quantitative and qualitative differences in symptoms among cultivars and by other aspects of plant condition. Our results suggest that a well-trained observer can accurately diagnose PD based on visual symptoms, particularly for advanced cases of the disease. The small number of false positives among disease category 1 and 2 vines may have been due to misdiagnosis of other biotic or abiotic factors. Alternatively, false positives might indicate bacterial populations near the detection limit; conventional PCR has at least as low a detection threshold as other methods that rely on the presence of live bacterial cells. Regardless, although scouting based on visual symptoms clearly captured most cases of PD in the current study, some caution should be used when trying to diagnose early disease stages to ensure that vines are not needlessly removed. There is no cure for grapevines once infected with X. fastidiosa, apart from the recovery that can occur in some overwintering vines. The virulent nature of X. fastidiosa in grapevines, and the corresponding high mortality rate for early season infections, increases the potential value of any cultural practice that can cure vines of infection. Moreover, new vines replanted into established vineyards generally take longer to develop than vines planted in newly developed vineyards, potentially because vine-to-vine competition for resources limits growth of replacement vines. As a result, vines replanted in mature vineyards may never reach full productivity. Thus, management practices that speed the regeneration of healthy, fully developed, and productive vines may reduce the economic loss caused by PD.
A multinomial logistic regression showed significant differences in the relative frequency of different grapevine growth outcomes between the two restoration methods.

Chip-budded vines showed a significantly lower frequency of strong growth and significantly higher frequencies of developing growth and, especially, of no growth. Nearly 30% of chip-budded vines showed no growth in the following season, compared with 0% of vines on which established shoots were trained. These results indicate that training newly produced shoots from the remaining section of the scion was more likely to result in positive regrowth outcomes. As a result, of the two methods we evaluated, training of shoots that emerge from the scion of a severely pruned trunk is recommended for restoring growth. However, it is important to note that the current study did not estimate the amount of time required for severely pruned vines to return to full productivity. Moreover, the study did not include mature vines, in which growth responses may differ from those of young vines. Additional studies may be needed to quantify vine yield, and perhaps fruit quality, in severely pruned vines over multiple seasons.

The usefulness of pruning for disease management depends on its ability to clear plants of pathogen infection. A comparison of symptom prevalence among severely pruned and control vines from different disease severity categories showed significant effects of the number of years after pruning, pruning treatment, and initial disease symptom category. The analysis also showed significant interactions between year and treatment and between treatment and symptom category, a non-significant interaction between year and symptom category, and a marginally significant three-way interaction. Overall, more vines had symptoms in the second year than in the first, and the prevalence of returning symptoms was higher in vines from higher initial disease categories. Severe pruning showed an apparent benefit in reducing symptoms of PD after the first year, but this effect weakened substantially by the second year, with no differences for category 1 or 3 vines and a slightly lower disease prevalence for severely pruned category 2 vines. A survival analysis of severely pruned category 3 vines showed a significant difference in the rate of symptom return among plots. All vines in plots 1 to 3 had symptoms by autumn 2000, two years after pruning. In plots 4 and 5, more than 80% of vines showed symptoms after three years. Only plot 6 showed markedly lower disease prevalence: ~70% and 50% of severely pruned category 3 vines in that plot showed no symptoms after two and four years, respectively, versus ~36% of control vines overall after two years. It is important to note that disease pressure at the time of this study may not fully explain the return of symptoms in severely pruned vines.