The biological mechanism behind this winter recovery has been studied but is not fully resolved.

Infections that occur during spring lead to chronic disease; infections that occur during late summer and fall may cause disease symptoms in the current year, but a high proportion of vines lack symptoms of X. fastidiosa infection in the following year. Nonetheless, models that incorporate low temperatures have substantial explanatory power in predicting rates of winter curing of X. fastidiosa infections in grapevine. Infections that occur early in the season allow a longer period during which X. fastidiosa can colonize and reach high infection levels, which may increase the likelihood of the infection surviving over the winter. Following this rationale, if most late-season infections remain in the distal ends of shoots and have lower infection levels, removing the symptomatic portion of the vine might eliminate X. fastidiosa. In other words, the efficacy of pruning infected grapevine tissue could depend both on the time of year in which the plant was infected and on winter temperature. A potential benefit of severe pruning over replanting is that pruning leaves a mature rootstock in place, which is likely to support more vigorous regrowth than the developing rootstock of a young transplant. Recent attempts to increase vine productivity by planting vines with more well-developed root systems are based on this presumption. However, even if severe pruning can clear vines of infection, it removes a substantial portion of the aboveground biomass of the vine. Thus, a method for encouraging rapid regrowth of the scion after aggressive pruning is needed. We studied the efficacy of pruning infected vines immediately above the rootstock graft union—the most aggressive pruning method—for clearing grapevines of infection by X. fastidiosa.

We reasoned that if such severe pruning was ineffective at clearing vines of infection, less severe pruning would not be warranted; if severe pruning showed promise, less severe pruning could then be tested. We use the term “severe pruning” to refer to a special case of strategic pruning for disease management, analogous to the use of “remedial surgery” for trunk diseases. To test the efficacy of clearing vines of X. fastidiosa infection, we followed the disease status of severely pruned versus conventionally pruned vines over multiple years, characterized the reliability of using visual symptoms of PD to diagnose infection, and compared two methods of restoring growth of severely pruned vines.

Pruning trials were established in commercial vineyards in Napa Valley, CA, where symptoms of PD were evident in autumn of 1998. The vineyards used for these trials varied in vine age, cultivar, and initial disease prevalence. All study vines were cordon-trained and spur-pruned. We mapped the portions of the six vineyards selected for study according to an evaluation of vines for disease symptoms. The overall severity of PD symptoms for each vine was recorded as follows: 0 = no symptoms, apparently healthy; 1 = marginal leaf scorch on up to four scattered leaves total; 2 = foliar symptoms on one shoot or on fewer than half of the leaves on two shoots on one cordon, no extensive shoot dieback, and minimal shriveling of fruit clusters; and 3 = foliar symptoms on two or more shoots occurring in the canopy on both cordons, with dead spurs possibly evident along with shriveled clusters. To test the reliability of the visual diagnosis of PD, petiole samples were collected from the six vineyard plots when symptom severity was evaluated for vines in each symptom category; these samples were assayed using polymerase chain reaction (PCR). Petioles were collected from symptomatic leaves on 25, 56, and 30 vines in categories 1, 2, and 3, respectively.

Next, severe pruning was performed between October 1998 and February 1999 in the six vineyard plots by removing the trunks of symptomatic vines ~10 cm above the graft union. Cuts were made with saws or loppers, depending upon trunk diameter. During a vineyard survey, severe pruning was conducted on 50% of vines in each symptom category; the other 50% of vines served as conventionally pruned controls. Sample sizes for control and severely pruned vines in each disease category ranged between six and 62 vines depending on the plot, with at least 38 total vines per plot in each control or pruned treatment. In spring 1999, multiple shoots emerged from the remaining section of scion wood above the graft union on severely pruned vines. When one or more shoots were ~15 to 25 cm long, a single shoot was selected and tied to the stake to retrain a new trunk and cordons, and all other shoots were removed at this time. We evaluated the potential of severe pruning to clear vines of infection by reinspecting both control and severely pruned vines in all six plots for the presence or absence of PD symptoms in autumn 1999 and 2000. In all plots, category 3 vines were inspected in a third year; in plot 6, vines were inspected for an additional two years. Finally, in plot 6 we investigated chip-bud grafting as an alternate means of ensuring the development of a strong replacement shoot for retraining. To do this, 78 category 3 vines were selected for severe pruning, 39 of which were subsequently chip-bud grafted in May 1999. An experienced field grafter chip-budded a dormant bud of Vitis vinifera cv. Merlot onto the rootstock below the original graft union, and the trunk and graft union were removed. The single shoot that emerged from this bud was trained up the stake and used to establish the new vine. The other 39 vines were severely pruned above the graft union and retrained in the same manner as vines in plots 1 to 5.
Development of vines in plot 6, with and without chip-bud grafting, was evaluated in August 1999 using the following rating scale: 1) “no growth”: bud failed to grow, no new shoot growth; 2) “weak”: multiple weak shoots emerging with no strong leader; 3) “developing”: selected shoot extending up the stake, not yet topped; and 4) “strong”: new trunk established, topped, and laterals developing. All analyses were conducted using R version 3.4.1.

We used a generalized linear model with binomial error to compare the relative frequency of X. fastidiosa-positive samples from vines in the different initial disease severity categories. Next, we analyzed the effectiveness of chip budding versus training of existing shoots as a means for restoring vines after severe pruning. This analysis used multinomial logistic regression to compare the frequency of four vine growth outcomes the following season: strong, developing, weak, or no growth. This main test was followed by pairwise Fisher exact tests of the frequency of each of the individual outcomes between chip-budded and trained vines. We analyzed the effect of severe pruning on subsequent development of PD symptoms using two complementary analyses. First, we compared symptom return between severely pruned and control vines in the three symptom severity categories for two years after pruning. To appropriately account for repeated measurements made over time, our analysis consisted of a linear mixed-effects model with binomial error, a random effect of block, and fixed effects of treatment, year, and symptom severity category. Next, we analyzed the rate at which PD reappeared in severely pruned category 3 vines in subsequent years using a survival analysis; specifically, a Cox proportional hazards model with a fixed effect of plot.

Accurate and time- or cost-efficient methods of diagnosing infected plants are important elements of a disease management program, both with respect to roguing to reduce pathogen spread and to the efficacy of pruning to clear plants of infection. Accurate diagnosis of PD in grapevines is complicated by quantitative and qualitative differences in symptoms among cultivars and other aspects of plant condition. Our results suggest that a well-trained observer can accurately diagnose PD based on visual symptoms, particularly for advanced cases of the disease.
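The pairwise Fisher exact tests described above compare the frequency of each regrowth outcome between the two treatments. As an illustrative, stand-alone sketch (the study's analyses were run in R; this is not the authors' code, and the counts are hypothetical), a two-sided Fisher exact test for a 2×2 table can be computed directly from the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):
        # probability of x in the top-left cell, with margins fixed
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    p_obs = p_table(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: "no growth" in 11 of 39 chip-budded vines
# versus 0 of 39 trained vines (illustration only, not the study data).
p = fisher_exact_two_sided(11, 28, 0, 39)
```

The hedged counts mirror the magnitude reported later (nearly 30% of chip-budded vines showed no growth versus 0% of trained vines), but the exact cell values are assumptions for demonstration.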
The small number of false positives in disease category 1 and 2 vines may have been due to misdiagnosis of other biotic or abiotic factors. Alternatively, false positives might indicate bacterial populations near the detection limit; conventional PCR has at least as low a detection threshold as other methods that rely on the presence of live bacterial cells. Regardless, although scouting based on visual symptoms clearly captured most cases of PD in the current study, some caution should be used when trying to diagnose early disease stages, to ensure that vines are not needlessly removed. There is no cure for grapevines once infected with X. fastidiosa, except for the recovery that can occur in some overwintering vines. The virulent nature of X. fastidiosa in grapevines, and the corresponding high mortality rate for early season infections, increases the potential value of any cultural practice that can cure vines of infection. Moreover, new vines replanted into established vineyards generally take longer to develop than vines planted in newly developed vineyards, potentially due to vine-to-vine competition for resources that limits growth of replacement vines. As a result, vines replanted in mature vineyards may never reach full productivity. Thus, management practices that speed the regeneration of healthy, fully developed, and productive vines may reduce the economic loss caused by PD. A multinomial logistic regression showed significant differences in the relative frequency of different grapevine growth outcomes between the two restoration methods.

Chip-budded vines showed a significantly lower frequency of strong growth and significantly higher frequencies of developing growth and, especially, of no growth. Nearly 30% of chip-budded vines showed no growth in the following season, compared to 0% of vines on which established shoots were trained. These results indicate that training newly produced shoots from the remaining section of the scion was more likely to result in positive regrowth outcomes. As a result, of the two methods we evaluated, training of shoots that emerge from the scion of a severely pruned trunk is recommended for restoring growth. However, it is important to note that the current study did not estimate the amount of time required for severely pruned vines to return to full productivity. Moreover, the study did not include mature vines, in which growth responses may differ from those of young vines. Additional studies may be needed to quantify vine yield, and perhaps fruit quality, in severely pruned vines over multiple seasons. The usefulness of pruning for disease management depends on its ability to clear plants of pathogen infection. A comparison of symptom prevalence between severely pruned and control vines from different disease severity categories showed significant effects of the number of years after pruning, pruning treatment, and initial disease symptom category. The analysis also showed significant interactions between year and treatment and between treatment and symptom category, a non-significant interaction between year and symptom category, and a marginally significant three-way interaction. Overall, more vines had symptoms in the second year compared to the first, and there was a higher prevalence of returning symptoms in vines from higher initial disease categories.
Severe pruning showed an apparent benefit in reducing symptoms of PD after the first year, but this effect weakened substantially by the second year, with no differences for category 1 or 3 vines and a slightly lower disease prevalence for severely pruned category 2 vines. A survival analysis of severely pruned category 3 vines showed a significant difference in the rate of symptom return among plots. All vines in plots 1 to 3 had symptoms by autumn 2000, two years after pruning. In plots 4 and 5, more than 80% of vines showed symptoms after three years. Only plot 6 showed markedly lower disease prevalence: there, ~70% and ~50% of severely pruned category 3 vines showed no symptoms after two and four years, respectively, versus ~36% of control vines overall after two years. It is important to note that disease pressure at the time of this study may not fully explain the return of symptoms in severely pruned vines.
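The survival analysis above tracks time until symptom return. As a hedged illustration (the study fitted a Cox proportional hazards model in R; this is not the authors' code, and the vine counts below are invented to echo the plot 6 percentages), the fraction of vines remaining symptom-free can be summarized with a Kaplan-Meier estimator:

```python
from collections import Counter

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times  : year symptoms returned, or last year observed if censored
    events : 1 if symptoms returned at that time, 0 if censored
    Returns (time, survival probability) pairs at each event time.
    """
    deaths = Counter(t for t, e in zip(times, events) if e)
    n_at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = deaths.get(t, 0)
        if d:
            surv *= 1 - d / n_at_risk
            curve.append((t, surv))
        # everyone observed at this time (event or censored) leaves the risk set
        n_at_risk -= sum(1 for x in times if x == t)
    return curve

# Hypothetical plot-6-like data: 10 category 3 vines, 3 symptomatic again
# by year 2, 2 more by year 4, the rest still symptom-free (censored).
times  = [2, 2, 2, 4, 4, 4, 4, 4, 4, 4]
events = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
curve = kaplan_meier(times, events)  # ≈ [(2, 0.7), (4, 0.5)]
```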

Interspecific interactions between bee species can also increase honey bee efficiency.

We also determine whether hedgerow presence affects wild bee abundance and richness in sunflower fields, and whether this, in turn, translates into increased sunflower seed set.

Field sites were located in Yolo County, an intensively farmed agricultural region of California’s Central Valley that contains a mixture of conventionally managed row and orchard crops. The majority of natural and semi-natural habitat in the county is concentrated around the borders of agricultural lands and not embedded within them. We sampled 18 sunflower fields between June and July. Half of the fields were adjacent to bare or weedy edges, and half were adjacent to hedgerows. Sites were paired based on the timing of the sunflower bloom, the sunflower variety, and landscape context. Field pairs were a minimum of 900 m apart to maintain independence. To avoid contamination of varieties, sunflower fields are moved every year; therefore, no field was sampled in multiple years, although two fields were adjacent to the same hedgerow in different years.

In Yolo Co., acreage planted in sunflower has increased by over 55% during the past 5 years. It is the 8th most-planted crop in the region, grossing nearly $28 million USD in 2013. It is produced mainly for hybrid seed, which is then grown for oilseed or confection. While sunflower is native to North America, the breeding system of sunflower grown for hybrid seed has been altered to be artificially gynodioecious, with separate male-fertile plants and male-sterile plants. For hybrid seed production, rows of male plants are interspersed with rows of female plants. Wild bees predominantly visit male plants to collect pollen for nest provisioning. Although honey bees visit both male and female plants, workers typically either collect nectar from female plants or pollen from male plants, which limits cross-pollination events.

Honey bee movement between pollen- and nectar-producing rows of sunflower is often spurred by interference interactions with wild bees. When a wild bee and a honey bee meet on a sunflower head, one or both fly to different sunflower heads or rows. These interactions increase pollen flow between rows and increase honey bee per-visit efficiency, and therefore have great potential to heighten seed set. Honey bees were stocked at an average rate of approximately 100 hives per field, or 1.5 hives per acre. We did not evaluate pest management because sunflower fields managed by different companies used similar practices. For example, all companies used pre-emergent herbicides prior to planting, and seeds were treated with insecticides and either a fungicide or nematicide. Other management practices, including fertilization, tillage, row width, and the ratio of male to female rows, are also similar between companies, although irrigation practices vary by field, with the majority using furrow irrigation.

To quantify the landscape surrounding each site, we created 18 land use categorizations. We then hand-digitized National Agriculture Imagery Program imagery within a 1 km buffer around study sites in ArcGIS 10.1. To determine landscape effects on wild bee populations in sunflower, we examined the proportion of habitat within each buffer that could provide resources to wild bees. This included both natural habitats and altered habitats. Potential pollinator habitat around our study sites varied from 1 to 40%, with a median of 5%. Control and hedgerow sites were paired by landscape context to minimize differences.

We established two 200 m transects within each field, perpendicular to the field edge or hedgerow and 100 m apart. We netted and observed pollinators at four distances along these transects: 10, 50, 100, and 200 m from the edge.

We varied the starting sampling location within fields and edges at each study site to reduce conflation of distance with temporal variation in bee foraging behavior, which peaks in the morning and late afternoon. Each site was sampled once, during peak bloom, on a clear day with wind speeds <2.5 m/s and temperatures >18 °C, between 08:00 h and 14:00 h. We visually observed visitation for 2 min each in two male-fertile and two male-sterile 2 × 1 m plots at each distance. Within hedgerows and edges, we haphazardly sampled floral visitors for 2 min in eight plots containing floral blooms. Only insects that contacted the anthers or stigmas were recorded as floral visitors. We also recorded non-bee visits; these accounted for <1% of all visits and were, for simplicity, excluded from analyses. We were unable to identify bees to species in visual observations; therefore, we classified them to citizen science categories from Kremen et al. We did not include feral Apis in our wild bee categorization because we were unable to distinguish them from managed Apis.

Sunflower specialists are more effective pollinators of sunflower than generalist species. We therefore also investigated whether sunflower specialists were more abundant in hedgerow or control field edges using an independent data set from 26 hedgerows and 21 control edges in Yolo Co. Floral visitors were netted for 1 h in hedgerows and control edges during 4–5 sample rounds between April and August in 2012–2013. We queried this specimen database for sunflower specialist bees, which we defined as primary oligoleges.
To assess whether the amount of nearby sunflower in the landscape impacted sunflower specialist presence in field edges in the independent dataset, we constructed 1 km buffers around sites in ArcGIS 10.4 and recorded the proportion of sunflower fields around each site using pesticide spray records, which identify which crop is grown on each parcel, and the California Crop Improvement sunflower isolation map.

We used a Chao estimator to evaluate species richness within sites in the R package vegan. To determine the impact of hedgerow presence, field location, and surrounding pollinator habitat in the landscape on wild bee species richness and abundance, we used generalized linear models with Poisson and negative binomial distributions, respectively, in the R package lme4.
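The Chao estimator mentioned above extrapolates total species richness from the counts of species seen once (f1) and twice (f2) in a sample. A minimal stand-alone sketch of the bias-corrected Chao1 form (illustrative only; the study used vegan's implementation in R):

```python
from collections import Counter

def chao1(abundances):
    """Bias-corrected Chao1 richness estimate from per-species counts."""
    counts = [n for n in abundances if n > 0]
    s_obs = len(counts)                      # species actually observed
    freq = Counter(counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)  # singletons and doubletons
    # bias-corrected form; reduces to s_obs when no singletons are seen
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# e.g. 5 species observed: two singletons, two doubletons, one triple
print(chao1([1, 1, 2, 2, 3]))  # ≈ 5.33
```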

Both models included an interaction between hedgerow presence and field location. We used raw species richness because we only sampled each site once and some sites contained too few individuals for estimation or rarefaction. We also assessed factors influencing sunflower visitation rates by honey bees and wild bees. Hedgerow presence, distance from hedgerow, and their interaction, potential pollinator habitat, and sunflower sex were independent variables. In the species richness, abundance, and visitation models, site nested within pair was included as a random effect. We evaluated the differences between the communities of bees in control edges, hedgerows, and crop fields using a perMANOVA on their Chao1 dissimilarities in the R package vegan. We then determined whether male and female sunflower specialist bees utilized hedgerows or control field edges using the independent data set. We modeled counts of bees as the dependent variable with a Poisson distribution in the R package lme4. Hedgerow presence, proportion of sunflower and potential pollinator habitat within a 1 km radius, bee specialization on sunflower, bee sex, and an interaction between specialization and hedgerow presence were the independent variables. Site nested within pair was included as a random effect. To determine which factors impacted sunflower seed set, we used negative binomial generalized linear models in the R package lme4 that accounted for overdispersion in the seed data. We examined the effect of wild bee abundance and richness on seed set from net and visitation data separately. We used raw species richness because some site-distance combinations contained too few individuals for estimation or rarefaction. In all models, sunflower seed set was the dependent variable.
In the model for netted bees, independent variables were hedgerow presence, wild bee abundance, wild bee species richness, sunflower company, distance into the field from the edge, and an interaction between netted wild bee abundance and honey bee visitation. For the model including visitation rates, additional explanatory variables included aggregate wild bee visitation to male-fertile and male-sterile flowers, honey bee visitation, and an interaction term between wild bee visitation and honey bee visitation. Site nested within pair was included as a random effect in both models. All continuous variables were scaled (centered on the mean and divided by the standard deviation). We checked all variables for collinearity, and no collinear variables were included in any model. For example, sunflower head size was correlated with variety; however, varieties were specific to sunflower company, so only sunflower company was retained in the model.

Measuring the levels of ecosystem services derived from field edge habitat management in a variety of contexts is critical to demonstrating their efficacy and flexibility. If services are highly variable over time or from site to site, costs may outweigh the benefits and limit the adoption of diversification practices. Although other studies have found that field-edge diversification increases pollinator populations both in edges and fields and enhances pollination services to crops in adjacent fields, we did not detect any differences in rates of seed set in sunflower fields adjacent to hedgerow or control edges.
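The variable standardization and collinearity screening described in the methods above can be sketched as follows (a stand-alone illustration, not the R code used in the study; the predictor values are hypothetical):

```python
from statistics import mean, stdev

def zscore(xs):
    """Center on the mean and divide by the sample standard deviation."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def pearson_r(xs, ys):
    """Pearson correlation, used here as a simple collinearity screen."""
    zx, zy = zscore(xs), zscore(ys)
    return sum(a * b for a, b in zip(zx, zy)) / (len(xs) - 1)

head_size = [10.0, 12.0, 15.0, 18.0]  # hypothetical predictor values
variety   = [1.0, 1.2, 1.5, 1.8]      # perfectly collinear with head size
print(zscore([1, 2, 3]))              # [-1.0, 0.0, 1.0]
print(round(pearson_r(head_size, variety), 3))  # 1.0 -> drop one predictor
```

Predictor pairs with |r| near 1, like the head size/variety pair in the study, would not both be retained in a model.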

Wild bee richness and an interaction between wild bee visitation and managed honey bee visitation, however, positively impacted seed set; yet these factors were not influenced by hedgerow presence. The proportion of pollinator habitat in the surrounding landscape did not influence the bee community visiting sunflower, despite a large body of evidence supporting strong positive landscape effects. We did find higher numbers of sunflower specialist bees in hedgerows than in control sites. Based on these findings, we conclude that sunflower is not a good candidate crop for field edge enhancements, at least in our study region, although such enhancements show potential for supporting populations of sunflower-pollinating bees. We detected distinct differences in the community composition of wild bees present in edges versus fields. This difference was likely driven by the fact that the dominant bee species found within fields, sunflower specialists, were either rare visitors to or absent from both hedgerow and control edge habitats. We only sampled each site once; therefore, increased sampling could lead to more convergence or divergence between bee communities in these habitats. There can be significant overlap between species found in MFC fields and adjacent hedgerows; however, species composition in hedgerows has also been shown to more closely resemble bee communities in forest habitat than those in adjacent crop fields. One factor likely driving the differences in species composition in our study region is the absence of sunflower planted within hedgerows, owing to concerns about genetic contamination of sunflower crop varieties. Because female sunflower specialists collect only sunflower pollen to provision their nests, they may not be attracted to the resources in hedgerows during the sunflower bloom period, instead being drawn into fields.
Nevertheless, assessment of the independent dataset indicated that hedgerows provide important floral resources to sunflower specialist bees, especially males. Male sunflower specialists have been observed investigating honey bees as potential mates, which increases honey bee movement from male-fertile to male-sterile sunflowers and increases their pollination efficacy. Male bees, therefore, likely contribute to the interactive effect between wild bee richness and honey bees on rates of seed set. We found a slight positive effect of wild bee species richness on seed set rates, indicating that a higher number of bee species benefits pollination function in sunflower. Functional complementarity between species can enhance fruit and seed production in a variety of crops. Bee foraging behavior and bee body size can influence within-inflorescence foraging, leading to more complete pollination of a single flower. Bee foraging activity can also be affected by preferences for particular weather conditions or temperatures, or by preferences for floral phenology, leading to temporal complementarity. In almonds, wild bee presence increases the likelihood that honey bees will move between different rows, which leads to higher pollen tube initiation and subsequent fruit set. Both niche complementarity and interspecific interactions likely underlie the positive relationship we detected between richness and seed set. In agreement with past findings, we detected an interactive effect between wild bee and honey bee visitation on sunflower seed set. We did not, however, detect any main effects of wild bee or honey bee visitation, despite strong evidence that wild bees positively increase seed set regardless of honey bee abundance. In order to evaluate the direct contribution of wild bees, other studies have estimated the contributions of wild and honey bee visitation to seed set separately.

It is thus possible that other unknown chemical traits might also affect larval performance.

The physical properties of different cultivars did not seem to affect the fly’s oviposition. The percentage of eggs that developed to adults decreased with increasing egg density per gram of fruit, probably due to intraspecific competition; this was further confirmed by manipulating the egg density using the same ‘Bing’ cherry cultivar as the tested host. Females preferred larger fruit for oviposition, which is consistent with density-dependent survival, as large fruit support higher numbers of fly larvae per fruit. It is well known that many fruit flies employ a variety of fruit characters to assess host quality and tend to be more attracted to larger fruits. Female D. suzukii appears to be able to assess host quality based on fruit size, and this behavior would likely increase foraging efficiency per unit time. Though we recovered very low numbers of D. suzukii from damaged citrus fruits, our laboratory study showed the fly can oviposit into and develop from freshly damaged or rotting navel oranges. Kaçar et al. showed that D. suzukii overwinter in citrus, surviving 3–4 months when fresh oranges were provided as adult food or an ovipositional medium, and that field-emerged adults from soil-buried pupae could produce and oviposit viable eggs on halved mandarin fruit. Thus, citrus fruit likely play an important role as reservoirs in sustaining fly populations during San Joaquin Valley winter seasons, and in the spring, those populations may migrate into early season crops, such as cherries. We did not observe grape infestation in our field collections, and our laboratory trials showed a low survival rate of D. suzukii offspring on grapes when compared to other fruits.
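The density-dependent survival described above amounts to comparing eggs-to-adult emergence across egg loads per gram of fruit. A hypothetical worked example (illustrative numbers only, not the study's data):

```python
def emergence_rate(eggs, adults):
    """Proportion of eggs that developed to adults."""
    return adults / eggs

def egg_density(eggs, fruit_mass_g):
    """Egg load per gram of fruit."""
    return eggs / fruit_mass_g

# Two hypothetical 'Bing' cherry batches with different egg loads
low  = {"eggs": 20, "adults": 14, "fruit_g": 10.0}   # 2 eggs/g
high = {"eggs": 80, "adults": 24, "fruit_g": 10.0}   # 8 eggs/g

for batch in (low, high):
    d = egg_density(batch["eggs"], batch["fruit_g"])
    r = emergence_rate(batch["eggs"], batch["adults"])
    print(f"{d:.0f} eggs/g -> {r:.0%} emergence")
# higher density -> lower emergence (70% vs 30% in this sketch)
```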

Oviposition susceptibility and offspring survival could vary among varieties or cultivars due to variations in skin hardness and chemical properties. For example, Ioriatti et al. demonstrated that oviposition increased consistently as the skin hardness of the grape decreased. Chemical properties, such as sugar content and acidity levels, may play a role in host susceptibility. In the current study, we found that although table grapes had a tougher skin than the raisin or wine grape cultivars tested, females were able to lay eggs into all three types of grapes, often through the fruit surface or near the petiole. The sugar levels of all tested grapes were either equal to or considerably higher than those of the other fruits tested. We also found that tartaric acid concentration negatively affected the fly’s developmental performance. Still, about 20% of eggs successfully developed to adults in the diet mixed with the highest tartaric acid concentration, whereas only 4.5% of eggs developed from the wine grape cultivar tested. Overall, our results are consistent with other reported studies indicating that grapes are not good reproductive hosts for D. suzukii.

California’s San Joaquin Valley is one of the world’s most important fruit production regions, with a diverse agricultural landscape that can consist of a mosaic of cultivated and unmanaged host fruit crops. Such diverse landscapes result in the inevitable presence of D. suzukii populations that represent a difficult challenge for the management of this polyphagous pest.

We showed that only the early seasonal fruits, such as cherries, seem to be at greatest risk from D. suzukii. Many other later seasonal fruits are not as vulnerable to this pest, because either their intact skin reduces oviposition, they ripen during a period of low D. suzukii abundance, or their flesh has chemical attributes that retard survival. However, some of these alternative hosts—such as citrus and damaged, unharvested stone fruit—may act as shelters for overwintering populations and provide sources for early populations moving into the more susceptible crops. Consequently, area-wide management strategies may need to consider fruit sanitation to lessen overwintering populations, suppression of fall and winter populations by releasing natural enemies, and reduction of pest pressure in susceptible crops through ‘border sprays’ and/or ‘mass trapping’ to kill adults before they move into the vulnerable crop. Alternative and sustainable area-wide management strategies such as biological control are highly desirable to naturally regulate fly populations, especially in uncultivated habitats. An understanding of the temporal and spatial dynamics of fly populations would aid in the optimal timing of future releases of biological control agents to reduce source populations in the agricultural landscape.

Previous studies on natural competence in X. fastidiosa were based on a few strains from a single subspecies, although recombination among strains of other subspecies has been described. On testing natural competence in 13 different strains, we detected almost ubiquitous natural competence. The frequency of recombination varied among strains, even for a single genomic region, as in other naturally competent bacteria. DNA similarity in the flanking regions was not correlated with recombination frequency, but most strains within a subspecies had identical flanking regions.

A clearer understanding of the relationship between recombination rate and homology between recombining DNA could have been obtained by using donor DNA with differing levels of similarity to the recipient strain at a given recombination region; however, this was not performed in this study. Differences in recombination frequency, especially between strains of different subspecies, could be due to differences in homology between donor and recipient DNA sequences or to differences in growth rates, which showed a positive correlation with recombination frequency. However, these parameters did not explain the non-competency of two strains that had average growth and sequence homology similar to the competent strains. Also, since growth of the strains was measured by OD and strains appeared to differ in the rate of cell-to-cell attachment, growth values could have been biased, especially for the strains that showed high rates of precipitation. This was not further investigated, as it was beyond the scope of this study. On further testing of other biological traits, twitching motility was significantly correlated with recombination frequency. Strain WM1-1 had the highest recombination frequency and showed the highest motility among strains, while the two non-competent strains were nonmotile. A positive correlation between recombination frequency and twitching motility was also suggested in our previous study using different media components. Since components of type IV pili are involved in both natural competence and twitching motility in several naturally competent Gram-negative bacteria, including X. fastidiosa, the activity of type IV pili could govern both of these phenomena. On further analysis of competence and pili genes, defective pili genes were detected in the noncompetent strains.
One of the defective proteins detected was PilQ, a member of the secretin family that forms the secretin pore of the outer membrane and is involved in type IV pili biogenesis and in importing extracellular DNA into the periplasmic space. Previous studies in X. fastidiosa have demonstrated that pilQ mutants are nonmotile and non-competent. Hence, we predict that the insertion in the pilQ coding region is responsible for the lack of twitching and natural competence, as BBI64 is unable to secrete the type IV pili; the lack of type IV pili was confirmed by TEM imaging. Motility has been described as a major virulence trait for X. fastidiosa. BBI64 showed no motility and WM1-1 the highest motility in this study. Consistent with the critical role of twitching in virulence, BBI64 had reduced virulence while WM1-1 was highly virulent. A further observation supporting the correlation between twitching and natural competence is that the Fetzer strain showed recombination in this study, while a mutant in the polygalacturonase gene pglA of this strain did not. On closer examination, Fetzer is motile while the pglA mutant is not. Sequence data showed that the pglA mutant had an insertion in pilM, a type IV pili biogenesis gene shown in a previous study to be involved in twitching motility of Acidovorax avenae, most probably causing the lack of movement in this strain. Additional factors could be involved in the differences in natural competence among X. fastidiosa strains. X. fastidiosa genomes contain high levels of phage and phage-like regions, and natural competence could be a mechanism that helps cells eliminate new integration of these regions by recombining homologous DNA lacking phage sequences, as suggested by a recent study. Other studies have reported restriction-modification systems limiting transformation frequency.

In this study, although all donor plasmids were extracted from an Escherichia coli strain expressing the X. fastidiosa DNA methyltransferase, it is possible that different strains, especially those from different genetic backgrounds, possess different forms of R-M systems, which could lead to differences in the amount of DNA available for recombination and hence in recombination frequency. In this regard, a previous study reported the inability of a plasmid isolated from a citrus-infecting strain to transform a grape strain, suggesting the existence of specific recognition mechanisms that distinguish self from foreign DNA. Sequence analysis and annotation of the X. fastidiosa Temecula1 genome predict at least four different types of R-M systems. Future studies focused on these specific topics could explain the differences in recombination frequencies observed among X. fastidiosa strains. Differences in recombination frequency based on genomic position were previously reported in Ralstonia solanacearum, with positions containing recombination hot spots showing the highest frequency. In this study, higher recombination frequency was observed for pKLN61, a plasmid that recombines in the region of the rpfF gene, involved in production of a diffusible signaling factor for cell-to-cell communication in X. fastidiosa, compared with pAX1.Cm, which recombines at a neutral site, and pMOPB-Km and pMSRA-Km, which recombine at regions whose functions are being characterized. Differences in the length of the homologous flanking region and the nonhomologous insert were found to influence recombination frequency in a previous study. However, the upstream and downstream flanking regions were longer in pAX1.Cm, pMOPB-Km, and pMSRA-Km than in pKLN61, the length of the nonhomologous insert between the homologous flanking regions was similar, and the sizes of the plasmids are also comparable.
Moreover, flanking-region DNA sequence identity between the donor plasmids and recipient strains at these positions was also similar. This suggests that the difference in recombination frequency at different genomic positions is not associated with the characteristics of the plasmid regions, and it remains to be determined whether this difference holds any evolutionary significance. Natural competence has been proposed to bring adaptive changes to the recipient bacteria, such as repair of damaged DNA and generation of genetic diversity that can lead to adaptation. For the generation of adapted strains, recombining regions should come from a more successful and genetically distinct donor. This could occur when closely related but genetically different strains of the same species coexist in a single habitat. Detection of IHR in X. fastidiosa by MLST/MLSA studies supports this possibility. In fact, these studies proposed IHR leading to plant-host shifts of X. fastidiosa in citrus, mulberry, and blueberry. Moreover, mixed infection by two subspecies has been suggested by previous studies. For example, almond leaf scorch strains isolated from the same orchard were found to be genetically different and were grouped into two different subspecies, i.e., subsp. fastidiosa and multiplex. Infection of a plum tree showing leaf scorch symptoms by subsp. multiplex and subsp. pauca strains was also reported in a recent study. Results of this and previous studies demonstrate that certain plants serve as hosts for strains from multiple subspecies. In addition, vectors of X. fastidiosa are distributed worldwide in both temperate and tropical climates and, unlike plant hosts, exhibit no specificity for pathogen genotype; in fact, a single vector species was able to transmit four subspecies of X. fastidiosa.
All these observations suggest that strains belonging to different subspecies may coexist within the same habitat, providing opportunities for recombination. Although IHR was detected between subspecies when whole genomes of the donor and recipient were mixed, recombinants did not differ significantly from the parent strains in virulence phenotypes, suggesting that recombination did not bring phenotypic changes. On analyzing the homologous flanking regions, recombined segments of 0.7 to 4 kb were detected, but the size could be greater, as segments of up to 80 kb have been demonstrated to recombine by natural competence in R. solanacearum, with the recombinant strain showing increased virulence.

Early- to mid-second-instar thrips show limited abdominal distention and an overall pearlescent hue.

Many hypothesize that because thrips feed by a punch-and-suck method, rather than by direct chewing and mastication of leaf tissues, they do not receive toxic amounts of the Bt proteins. Alternatively, they may not possess the proper binding receptors for the Bt proteins tested to date; thus, no pore can be formed in the midgut lining and the Bt proteins are excreted. The literature indicates the latter hypothesis is more likely, based on life-table parameters in which development, fecundity, and adult longevity or relative abundance did not differ significantly between thrips reared on Bt-positive versus Bt-negative corn, cotton, or potato plants. The aforementioned studies were not specifically examining Bt effects on thrips, nor were the Bti toxins tested here involved in previous studies of thrips. The combinations of proteins used in this study were, to date, unique pairings with thrips. It is indeed possible that no Bt endotoxins currently available cause mortality in Thysanoptera. The LC50 with strain GHA was 8.61 x 10^4 conidia/ml, two orders of magnitude lower than for the other five B. bassiana strains tested. GHA also gave the only statistically valid dose-response values in probit analysis and provided the only data that fit the probit model. The other B. bassiana strains failed to provide a linear relationship based on their p-values, i.e., the probit regression lines were of poor quality, except for GHA. Therefore, data were evaluated based on line slopes, as is commonly done in the scientific literature with other biological agents where data lines are not straight and do not fit the model.

Strains 1741ss, SFBb1, S44ss, and NI1ss showed a flat dose response between concentrations, did not fit the model, and had LC50s ranging from 2.7 x 10^6 to 9.6 x 10^8 conidia/ml. Assessment of Beauveria strain while adjusting for concentration, in both log-rank and Wilcoxon tests, showed that strain and concentration had highly significant effects on infection rate. Multiple comparisons for the log-rank test to assess the strain effect while adjusting for concentration differences showed that the infection rates of strains 1741ss, S44ss, 3769ss, and NI1ss were not distinct from one another. Strains GHA and SFBb1 had infection rates distinct from each other as well; GHA had the fastest infection rate and SFBb1 the slowest kill rate. The Survival Distribution Function analysis, coupled with the probit analysis, clearly shows that GHA would be the best strain choice for citrus thrips control. Results with avocado thrips. The LC50 for strain GHA was 2.2 x 10^6 conidia/ml and was similar to that obtained with the other five B. bassiana strains tested. Again, because a strong linear response was not observed, performance between strains was rated based on the LC50 and the relative linearity of the response. Based on overlap of confidence intervals, there were no significant differences among any of the strain LC50s or LC95s. Assessment of Beauveria strains while adjusting for concentration, using both log-rank and Wilcoxon analyses, showed that strain did not affect infection rate. Multiple comparisons for the log-rank test to assess the strain effect while adjusting for concentration differences showed that infection rates for all five strains were not distinct from one another. The Survival Distribution Function analysis coupled with probit analysis indicated there was no single best strain to select for avocado thrips management.
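The probit dose-response fit used to estimate LC50 values can be sketched in a few lines. This is a generic illustration, not the authors' analysis: the dose-mortality numbers below are hypothetical, and a full analysis would use weighted maximum-likelihood probit regression rather than the simple least-squares fit shown here.

```python
import math
from statistics import NormalDist, mean

def probit_lc50(doses, mortality):
    """Least-squares probit fit: z = Phi^-1(p) regressed on log10(dose).

    Returns (slope, intercept, LC50), where LC50 = 10 ** (-intercept / slope),
    i.e., the dose at which the fitted probit line crosses z = 0 (50% mortality).
    """
    nd = NormalDist()
    x = [math.log10(d) for d in doses]
    z = [nd.inv_cdf(p) for p in mortality]
    xbar, zbar = mean(x), mean(z)
    slope = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z)) \
        / sum((xi - xbar) ** 2 for xi in x)
    intercept = zbar - slope * xbar
    return slope, intercept, 10 ** (-intercept / slope)

# Hypothetical dose-mortality series (conidia/ml, proportion dead)
doses = [1e3, 1e4, 1e5, 1e6, 1e7]
mortality = [0.05, 0.20, 0.55, 0.85, 0.97]
slope, intercept, lc50 = probit_lc50(doses, mortality)
```

A flat dose response, as reported for several of the strains above, shows up as a slope near zero, which makes the fitted LC50 unstable; this is consistent with the poor probit fits described in the text.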

Citrus thrips were more susceptible to Beauveria than avocado thrips; citrus thrips LC values were much lower for the most active strain, GHA, indicating that significantly lower dosages of strain GHA were required to infect and kill citrus thrips compared with avocado thrips. The overall survival analysis showed a pattern similar to the probit analysis: GHA had the fastest infection rate and SFBb1 the slowest. Infection rates for the other strains fell between those of GHA and SFBb1, and the rates for 1741ss, S44ss, 3769ss, and NI1ss were not separable. The low dosage requirement and fastest infection rate suggest GHA is the best candidate for field testing among the strains examined. Except for the worst-performing strain, SFBb1, the performance of all strains against avocado thrips was similar. The LC50 value for citrus thrips was 8.6 x 10^4 conidia/ml, which may suggest economic feasibility in some cases, e.g., for use on organic products. The maximum recommended field application rate is 5.0 x 10^12 conidia/ha. Therefore, 8.6 x 10^11 conidia/ha of GHA is needed based on the estimated LC50 of 86 conidia/µl, and this amount is reasonable to obtain in a field setting. Conducting the same analysis for avocado thrips control using GHA, with an LC50 of 2.2 x 10^6 conidia/ml, 2.2 x 10^13 conidia/ha would be required. This is 4.4 times the standard field use rate of GHA. We hypothesize that differences in susceptibility between citrus and avocado thrips may be due to the different habitats in which they evolved: citrus thrips are adapted to hot, dry environments and are thus less likely to have evolved natural tolerance to fungi, whereas avocado thrips thrive in a very wet environment where exposure to fungi is more likely. The differences may be due to different habitat adaptations and the different origins of the two thrips species.
We find it interesting that two congeners have such widely different habitat preferences, which may explain their differences in fungal tolerance.
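The per-hectare arithmetic above can be checked directly. The carrier volume linking an LC50 in conidia/ml to a per-hectare dose is not stated in the text; the sketch below assumes 10,000 L/ha (1e7 ml/ha), the value that makes the reported figures consistent.

```python
# Assumed spray carrier volume (not stated in the text): 10,000 L/ha.
CARRIER_ML_PER_HA = 1e7

def dose_per_ha(lc50_conidia_per_ml):
    """Conidia per hectare needed to deliver the LC50 in the carrier volume."""
    return lc50_conidia_per_ml * CARRIER_ML_PER_HA

MAX_LABEL_RATE = 5.0e12  # maximum recommended field rate, conidia/ha

citrus_dose = dose_per_ha(8.6e4)    # citrus thrips LC50 for GHA
avocado_dose = dose_per_ha(2.2e6)   # avocado thrips LC50 for GHA
ratio = avocado_dose / MAX_LABEL_RATE
```

Under this assumption, citrus thrips control needs 8.6 x 10^11 conidia/ha, well within the label rate, while avocado thrips would need 2.2 x 10^13 conidia/ha, 4.4 times the maximum rate, matching the comparison drawn in the text.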

Differences were seen when citrus thrips and avocado thrips were placed on leaves of their associated host plants and then sealed separately in zip-lock bags: the moisture that condensed in the bags was lethal to citrus thrips but not to avocado thrips. Thus, it is possible that avocado thrips, due to their adaptation to cool, wet climates, have a higher tolerance to fungal pathogens, which they may encounter more frequently than citrus thrips, which prefer a hotter, drier climate. Many researchers have investigated alternatives to traditional insecticides, such as biopesticides, i.e., natural or organismal methods of controlling pest populations. The use of entomopathogens against thrips is not a new concept; entomopathogenic fungi such as Metarhizium anisopliae Sorokin, Neozygites parvispora Remaudière & Keller, Verticillium lecanii Viegas, and Paecilomyces fumosoroseus Brown & Smith have been used in laboratory and greenhouse trials with much success, whereas field trials have shown limited success. However, various strains of B. bassiana have been shown to effectively control western flower thrips on greenhouse ornamentals and peppers, and several reports indicate that F. occidentalis, Thrips palmi Karny, and T. tabaci Lindeman were successfully controlled under field or laboratory conditions. In conclusion, both citrus and avocado thrips can be infected by B. bassiana, but high doses may be required, especially for avocado thrips. These high doses are difficult to obtain outside the laboratory, and application of such doses would be costly. We believe B. bassiana is not a sufficiently effective alternative to traditional insecticides to warrant further study with avocado thrips, particularly because the commercially available strain GHA gave poor control of avocado thrips, but it may have potential against citrus thrips in an integrated pest management program.
Further studies are warranted to determine whether GHA could be used for field control of citrus thrips. Citrus thrips, Scirtothrips citri, has been recognized as a major pest of California citrus since the 1890s and is also known to scar mango fruits. Historically, highbush varieties of blueberries could be grown only in regions too cold for citrus production. However, breeding efforts to cross northern highbush blueberries with several other Vaccinium species led to the development of heat-tolerant highbush blueberry varieties. This has enabled the establishment of a blueberry industry in the San Joaquin Valley, a region where both citrus and citrus thrips flourish. The known host range of citrus thrips has broadened, and in recent years they have become a significant pest of blueberries planted in the San Joaquin Valley of California. Citrus thrips feed on blueberry foliage during the middle and late portions of the season, causing distorted, discolored, and stunted flush growth and poor development of the fruiting wood required to obtain the subsequent crop.

Repeated applications of the few effective and registered pesticides to reduce thrips populations raise concerns about pesticide resistance management, an issue relevant not only to the blueberry industry but also to the 108,665 ha of California citrus, where repeated cases of pesticide resistance in citrus thrips populations have been documented. Currently, there are no integrated pest management plans available for control of citrus thrips in blueberry, probably because this crop-pest association is recent. With a limited number of pesticides available for thrips control and the frequency of insecticide resistance shown by thrips, populations should be monitored carefully, treatments limited to populations of economic concern, and applications timed optimally. Appropriate cultural practices and conservation of natural enemies should be practiced in concert with the use of pesticides only on an as-needed basis. Understanding the life history of citrus thrips in the blueberry system, to determine where and whether susceptible stages can be exploited, is one of the first steps in developing alternatives to traditional insecticides. In citrus, some citrus thrips pupate on the tree in cracks and crevices; however, the majority drop from trees as late second instars to pupate in the upper layer of leaf litter below and move back onto the plant after adult eclosion. Propupae and pupae are rarely seen, move only if disturbed, and do not feed. Pupation in the upper layers of the soil surface may create an ideal interface for control with the entomopathogenic fungus Beauveria bassiana Vuillemin, given this vertical movement of citrus thrips. However, blueberry plants have a much different architecture than citrus trees, and citrus thrips pupation behavior has yet to be studied on blueberries.
In the U.S., pressure is increasing to move away from broad-spectrum insecticides and toward alternative methods of control. Earlier work with B. bassiana determined that the commercially available strain GHA was the most effective of six strains tested in laboratory trials against citrus thrips. The goal of this study was to determine whether this strain of B. bassiana could be used effectively against citrus thrips in California blueberry production. To achieve this objective, several factors important to fungal efficacy were evaluated before commencement of our field trial: 1) the location of citrus thrips pupation in commercial blueberry plantings, 2) field sampling locations and methods, 3) fungal formulation and timing of application, and 4) density of product used and method of thrips infection. We then conducted a field trial evaluating the potential utility of the GHA strain of Beauveria bassiana in commercial blueberries for citrus thrips management as a possible alternative to traditional insecticides. Citrus thrips were collected in Riverside County, Riverside, CA, from wild laurel sumac, Malosma laurina, a suspected major host for this species before citrus was introduced into the state. Thrips were collected by aspiration the morning of the bioassay and held in 15-dram plastic aspiration vials with a copper-mesh screened lid. A small sumac leaf, just large enough to fit in the vial, was included to allow the insects to settle on the leaf and feed. In experiments where late second instar thrips were needed, i.e., thrips close to pupation, the selected thrips were large and had darkened in color: their abdomens appeared plump, and their overall color was a deep yellow with almost no opalescence. When adult females were used, they were of unknown age.
Because of the complex arrangement and number of blueberry canes arising from the rhizome of commercial blueberry plants, we first evaluated movement of second instar citrus thrips on potted single-cane blueberry plants in the laboratory. Known numbers of late second instar citrus thrips were released onto the leaves of these potted plants.

The most difficult dimensions of that training are such basics as what is or is not a fruit or a vegetable.

Past CDPS reports have shown that lower acculturation is associated with higher intake of fruits and vegetables among Latinos. This raises the question of whether seasonal variation affects highly and less acculturated Latinos differently. The modified 24-hour recall method used in the CDPS requires considerable effort and resources to implement correctly. A significant challenge lies in training “generic” commercial interviewers, who generally do not have a nutrition background. Interviewers play a critical role in helping respondents assess the number of servings correctly for different reported items. They must also help respondents deconstruct mixed dishes to determine the number of servings of the different components. Not all interviewers master this equally, and over the course of weeks of data collection, knowledge degrades due to a combination of the rarity of some food items and forgetfulness. As a result, significant effort is required for quality-control monitoring of interviewers and periodic refresher training to make the CDPS method work. In implementing the CDPS method, the length of an interview depends on 1) what the respondent understands, 2) how extensive or varied the respondent’s fruit and vegetable consumption is, and 3) the skill of the interviewer. A good interviewer can normally accomplish this task in five to seven minutes. Time, however, is an important factor in whether general health behavior surveys can afford to include multiple complex questions on dietary intake, especially in determining the number of servings of fruit and vegetables consumed.

A shorter form of the CDPS method, less involved and less complex with regard to interviewer training, would be a convenient substitute if it worked as well in producing population estimates. Such an approach, derived from the CDPS method and asking only three questions, is compared with the CDPS method as part of this study. The possible substitute method is called the “Short Form 3,” or “SF3,” since it uses three short and direct questions. The study population is California adults, ages 18 years and older. Among these persons, those who self-identify as White, Latino, or African American and those who report a total annual household income from all sources of $25,000 or less are oversampled. A dual-frame design is used to locate these persons. The method of data collection is the same as in the CDPS: a telephone survey using computer-assisted telephone interviewing (CATI) techniques. The main frame consists of all residential telephone numbers across the state. The second, supplemental frame consists of residential telephone numbers in geographic areas with concentrations of Latino, African-American, and low-income households. Interviews completed in this supplemental frame are classified as “targeted,” since they are designed to maximize the chance of reaching the groups of interest. Sample sizes were designed to deliver an estimated sampling precision with a statistical power of 0.80 to discriminate between groups with 95% confidence if four-month seasons were defined and used. The combined four-month period was originally chosen a priori for design purposes. A single month is the minimum time frame used, since it was felt that the data might reveal seasons different from the a priori seasons described in the original proposal. The “seasons” were originally defined as summer, winter, and spring. To provide maximum flexibility in examining seasonal variation, independent samples were selected for each month of the year.

The samples were further stratified across two years for two reasons: 1) to smooth out any inter-year variation and 2) to spread the cost of data collection within the annual funding cap. Since analysis would be at the month level, the calculated sample size for any given month was divided between that month in Year 1 and the same month in Year 2. This is a rectangular sample design, in that the same number of interviews would be completed for each of the three race/ethnic groups per month for both the general population and the low-income population. The original calculation, based on the hypothetical four-month season, was for 660 completed interviews per race/ethnic group per season. This is 165 interviews per month, each month divided over two years, so that 83 interviews are collected in each of the four months in Year 1 and the same number per month in Year 2. This calculated number allows discrimination down to 0.55 servings, using a standard deviation of 3.5 servings from the 1999 CDPS for all groups, for the hypothetical four-month season. Examining groups by combining all 12 months would give still greater discrimination. Due to the higher cost per case of obtaining low-income, race/ethnic-specific interviews, the goal for all three low-income groups was set at 400 interviews per hypothetical four-month season. At the same confidence level and power, 400 cases discriminate differences greater than 0.69 servings. For 12 months of data combined, an n of 1,200 cases discriminates differences greater than 0.4 servings. This is a monthly sample size of 51 cases per month per year for each of the three low-income race/ethnic groups. Most CDPS data collected in the past cover the above so-called summer months of July through October. The general population survey had never been collected in the December through May period, with the exception of the last two days of May during the very first CDPS in 1989.
Some African-American over-sample cases have been collected as late as November.
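The precision figures quoted above (0.55 servings at n = 660, 0.69 at n = 400, 0.4 at n = 1,200) are reproduced, to within rounding, by the standard two-sample minimum-detectable-difference formula. The formula itself is an assumption on our part, since the report does not state which one was used.

```python
import math
from statistics import NormalDist

def min_detectable_diff(n_per_group, sd, alpha=0.05, power=0.80):
    """d = (z_{1-alpha/2} + z_{power}) * sd * sqrt(2 / n) for a
    two-sample comparison with equal group sizes."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_power = nd.inv_cdf(power)          # ~0.84 for 80% power
    return (z_alpha + z_power) * sd * math.sqrt(2.0 / n_per_group)

SD = 3.5  # servings, standard deviation from the 1999 CDPS
d660 = min_detectable_diff(660, SD)    # ~0.54 servings
d400 = min_detectable_diff(400, SD)    # ~0.69 servings
d1200 = min_detectable_diff(1200, SD)  # ~0.40 servings
```

The n = 660 case comes out at about 0.54, consistent with the report's 0.55 once rounding in the original power calculation is allowed for.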

The sample provides an identical snapshot of the California population and the sub-groups of interest within each month. It is important to note that an individual case may be used to satisfy different sample-size objectives. For example, a low-income African-American case located in the general population random-digit-dial survey is used in a) the general population estimate, b) the low-income population estimate, and c) the African-American estimate. Individuals located through the targeted over-samples are used only for their group member estimates. The data collection instrument used in this study is the fruit and vegetable intake module of the CDPS. Also included are the five language-based acculturation questions asked of all Latino respondents. Descriptive, self-reported demographic data are collected on gender, education, race/ethnicity, and income. The SF3 questions are administered to half of the study population, selected at random. Using the short form in only half of the sample allows for measuring and adjusting for any potential testing effect of placing these questions ahead of the CDPS module. Placing it after the module, however, would be counterproductive, since the CDPS module walks each respondent through each meal of the previous day and details the specific servings consumed. This would bias the SF3 responses toward higher agreement with the module and overstate the agreement of the SF3 estimates. Data collection was conducted using CATI methods. Forty interviewers were trained, ten of whom were bilingual in Spanish and English. Each month’s sample was administered so that 80% of the cases were collected during the first three weeks of the month and the balance, including hard-to-reach cases and any remaining refusal conversions, in the final week of the month.
The objective was to spread the interviews as evenly as possible over the entire month and to complete all cases in a given month’s sample within that month. If any month fell short of the target number of cases in Year 1, the difference was made up in the same month in Year 2. To accomplish this, sample management in Year 2 was very exacting. Each month’s cases were the result of an independently drawn random sample of California households. Respondents were then randomly selected from among all eligible respondents in the household. Thus, this is a two-stage random sample design. Only the selected respondent could be interviewed. Interviews were conducted in either English or Spanish, at the preference of the respondent. At least one subsequent refusal-conversion attempt was made in households that refused to participate. At least nine contact attempts were made on each selected telephone number. Interviews were completed between November 1, 2000 and October 31, 2002. The average interview took 9.4 minutes to complete, just under the 10 minutes originally planned and budgeted. Of the 8,614 completed interviews, 1,249 were completed in Spanish. The overall response rate* for the general population survey was 26.5%; the refusal rate* was 5.6%.

These rates were computed by the data collection vendor based on their available disposition coding scheme. Inadequate tracking of disposition codes by the vendor for the targeted and low-income samples made response rate calculations unreliable for those groups. This makes all disposition codes suspect and may account for the lower-than-expected general population response rate. The data file of final cases was cleaned by the vendor, and the fruit and vegetable codes were added to the recorded fruits and vegetables in each data record. The data collection vendor’s final report is included in Appendix III. As in the CDPS, interviewers entered the actual fruits, vegetables, salad ingredients, and fruit and/or vegetable mixed dishes reported by respondents. These standardized entries were post-processed by the data collection vendor using programming that read these alpha entries and converted them to the numeric codes used by the CDPS. These codes are based on, although not identical to, USDA food codes, and had been updated during the course of this study using the 2001 CDPS data. This work was done by one of this study’s research assistants, a registered dietitian, and was reviewed by the principal investigator in collaboration with Public Health Institute staff who work with the CDPS for the California Department of Health Services. The number of servings of fruit and vegetables was recorded by interviewers as whole numbers. Respondents reported a serving size as what is “usual” for them. All reports of half a serving or greater were rounded up to the next whole number. For amounts greater than one serving, where the respondent reported more than the whole number but less than an additional half serving, the number was rounded down. The exception is when the amount reported was less than one-half serving; in this instance, the interviewer entered a zero.
This is particularly true for items such as lettuce and tomato on a sandwich or a taco, and is consistent with the CDPS. This study differs from the CDPS in one respect: the analysis of CDPS data recodes the zero entries as quarter servings, while this study did not. It is the opinion of both authors that the relative relationships among the groups studied and among the months sampled remain unchanged when the zero entries are not recoded; thus, the data remain valid for this study’s objectives. An examination of the number of reported servings revealed, as expected, cases with unusually high numbers of servings. After cases with likely recording errors are dropped, it is generally accepted practice to top-code outlier cases. Consuming a high number of servings of fruit and vegetables may not be unusual, particularly for vegetarians; however, to minimize the impact of these outlier cases on the computed mean values and variance calculations, it is typical to top-code them at a determined value. Initially, we explored computing outlier cutoff values using the same method employed by diet researchers at the National Cancer Institute in their work with National Health Interview Survey data. The method involves identifying the first and third quartiles in the study’s data distribution. This is done independently for fruit and for vegetables after transforming the variable by taking the square root of the number of servings. The maximum value for the total number of servings of fruit and vegetables combined was computed to be 20.43 servings, rounded down to 20 servings. However, the CDPS, with over a dozen years of experience, has top-coded the number of servings of fruit and the number of servings of vegetables each at 10.0 servings. Following that convention, and thus staying consistent with the CDPS, all outlier cases for servings of fruit and of vegetables in this study were top-coded at 10 servings.
No case can thus exceed 20 servings of fruit and vegetables combined, and this is, coincidentally, the same number of servings computed earlier following the NCI method.
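The rounding and top-coding rules above can be summarized in a short sketch. This is our own illustration, not the study’s actual code: the function names are invented, and the Tukey-style fence (Q3 + 1.5 × IQR on the square-root scale, squared back to the serving scale) is our assumption about the details of the NCI method.

```python
import statistics

def record_servings(reported: float) -> int:
    """Round a reported serving amount the way interviewers recorded it:
    fractions of 0.5 or more round up to the next whole number, smaller
    fractions round down; amounts under half a serving are entered as 0."""
    if reported < 0.5:
        return 0
    whole = int(reported)
    return whole + 1 if reported - whole >= 0.5 else whole

def nci_style_cutoff(servings: list[float]) -> float:
    """Assumed outlier cutoff on the square-root scale: Q3 + 1.5*IQR of
    sqrt(servings), squared back to the serving scale."""
    roots = sorted(s ** 0.5 for s in servings)
    q1, _, q3 = statistics.quantiles(roots, n=4)
    return (q3 + 1.5 * (q3 - q1)) ** 2

def top_code(servings: float, cap: float = 10.0) -> float:
    """CDPS convention: cap fruit and vegetable servings at 10 each."""
    return min(servings, cap)

print(record_servings(0.4))   # under half a serving -> 0
print(record_servings(2.5))   # 2.5 rounds up -> 3
print(record_servings(3.4))   # 3.4 rounds down -> 3
print(top_code(14.0))         # capped -> 10.0
```

With both rules applied, no case can exceed 10 servings of fruit plus 10 of vegetables, hence the 20-serving ceiling noted above.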

Consumers were instructed to sip bottled water between samples to cleanse their palates.

Sixty berries per replication were then wrapped together in two layers of cheesecloth and squeezed with a hand press to obtain a composite juice sample. The juice was used to determine soluble solids concentration with a temperature-compensated handheld refractometer, expressed as a percentage. Twenty-one hundredths of an ounce of the same juice sample was used to determine titratable acidity with an automatic titrator, reported as a percentage of citric acid. Some samples with high viscosity were centrifuged in a super-speed centrifuge at 15,000 rpm for 5 minutes to obtain liquid juice for the soluble solids concentration and titratable acidity measurements. The ratio of soluble solids concentration to titratable acidity was calculated. Antioxidant capacity was measured in the 2005 and 2007 seasons. Eighteen hundredths of an ounce of berries per replication was used to determine the level of antioxidants by the DPPH free-radical method. Samples were extracted in methanol to ensure good phenolic representation, homogenized using a polytron, and centrifuged for 25 minutes. The supernatant was analyzed against the standard, Trolox, a water-soluble vitamin E analogue, and results were reported in micromoles of Trolox equivalents per gram of fresh tissue. An in-store consumer test was conducted on ‘Jewel’, ‘O’Neal’ and ‘Star’ blueberry cultivars in 2006, and on the six blueberry cultivars studied in 2007, using methods described previously. The fruit samples were held for 2 days after harvest at 32°F prior to tasting. One hundred consumers who eat fresh blueberries, representing a diverse combination of ages, ethnic groups, and genders, were surveyed in a major supermarket in Fresno County. Each consumer was presented with a sample of each blueberry cultivar in random order at room temperature (68°F).

A sample consisted of three fresh whole blueberries presented in a 1-ounce soufflé cup labeled with a three-digit code. At the supermarket, the samples were prepared in the produce room out of sight of the testing area. Each consumer was asked to taste each sample and then to indicate which statement best described how they felt about it on a 9-point hedonic scale. Consumer acceptance was measured as both degree of liking and percentage acceptance, calculated as the number of consumers liking the sample divided by the total number of consumers tasting that sample. In a similar manner, the percentages of consumers disliking and neither liking nor disliking the sample were calculated.

Agricultural managed aquifer recharge (Ag-MAR) is a recharge technique for groundwater replenishment in which farmland is flooded during the winter using excess surface water in order to recharge the underlying aquifer. In California, for example, Ag-MAR is currently being implemented as part of efforts to mitigate the state’s chronic groundwater overdraft. Ag-MAR poses several risks for agricultural fields and groundwater that may influence its future adoption. These include crop tolerance to flooding, soil aeration, biogeochemical transformations, long-term impacts on soil texture, leaching of pesticides and fertilizers to groundwater, and potential greenhouse gas emissions. Some of these issues have been addressed in recent studies of Ag-MAR, including soil suitability guidelines, nitrate leaching to groundwater, crop suitability, and soil aeration. In the current study, we focused solely on the question of “how long can water be applied for Ag-MAR with minimal crop damage?”, while setting aside the other challenges of Ag-MAR implementation mentioned above. Preferably, Ag-MAR flooding is done during fallow or dormant periods, when potential crop damage is minimal, so that agricultural lands can serve as spreading basins for groundwater recharge.
Root-zone residence time (RZRT) is defined as the duration that the root zone can remain saturated during Ag-MAR without crop damage. RZRT is a crucial factor in Ag-MAR, as long periods of saturated conditions in the root zone can damage crops through oxygen deficiency or complete depletion of oxygen, which ultimately may result in yield loss. However, flood tolerance varies considerably among crops due to biotic and abiotic conditions; therefore, only appropriate crops under specific conditions may be suitable for Ag-MAR application.

For example, Dokoozlian et al. found that grapevines during dormancy can be flooded for 32 days each year without yield loss. Dahlke et al. recently investigated the effect of different Ag-MAR flooding schemes on established alfalfa fields; their results suggest a minimal effect on yield when dormant alfalfa fields on highly permeable soils are subjected to winter flooding. On the other hand, some crops are sensitive even to short periods of flooding. Kiwi vines, for example, are highly sensitive to root anoxia, with yield loss and vine death reported after extreme rainfall and/or shallow groundwater levels. In a study on peach trees, flood cycles of 12 h per day with 5 cm of ponding, applied for two months, resulted in branches with reduced diameter and length growth, as well as smaller, lower-quality fruit, compared to control trees. The above examples demonstrate the need for an RZRT planning tool that can estimate the Ag-MAR flood duration that causes minimal crop damage. Usually, when Ag-MAR water application starts, aeration of the root zone is quickly suppressed by the water layer covering the soil surface, which prevents oxygen transport to the root zone in the gas phase. When water application ceases, re-aeration of the root zone depends on the soil’s drainage rate, which controls the formation of connected air pores between the root zone and the atmosphere. Hence, proper estimation of the planned flood duration during Ag-MAR requires prior knowledge of both crop characteristics and soil texture. Only a few attempts to estimate RZRT during Ag-MAR have been made, as Ag-MAR is a relatively new MAR technique. O’Geen et al. used a fuzzy logic approach to rate RZRT during Ag-MAR, based on the harmonic mean of the saturated hydraulic conductivity of all soil horizons, soil drainage class, and shrink-swell properties. Their RZRT rating was combined with other factors to generate a Soil Agricultural Groundwater Banking Index (SAGBI). Flores-Lopez et al.
proposed a root-zone model that includes crop type, soil properties, and recharge suitability to estimate water application, flooding duration, and the interval between water applications.

Their model was integrated with a Groundwater Recharge Assessment Tool (GRAT) to optimize Ag-MAR water application. Here, we propose a simple model to estimate the planned water application during Ag-MAR based on the following parameters: soil texture; crop saturation tolerance; effective root-zone depth; and critical water content. The concept of critical water content was proposed by several authors, as it indicates a percolation threshold at which the gas transport path is blocked by pore water, resulting in gas diffusivity and permeability of practically zero. Hence, when the water content is above or below this threshold, gaseous oxygen transport into the soil is blocked or open, respectively. As opposed to the previous Ag-MAR models mentioned above, our proposed model is physically based and explicitly includes the soil water content, which is used to infer the soil aeration status. Yet, thanks to its simplicity, the model can easily be integrated into various existing Ag-MAR assessment tools such as SAGBI or GRAT. In the following, we first describe the theory of the model and the methods used to test its performance. Next, we present the model predictions and compare them with observations and numerical simulations. Last, we present an example of how to calculate the Ag-MAR water application duration, and we discuss the applicability of the model and its limitations. Plant tolerance to flooding, or the duration of flooding with minimal crop damage, is a very challenging parameter to estimate. A tremendous diversity of tolerance exists, depending on several factors: soil texture and chemistry; degree and duration of hypoxia/anoxia; soil microbe and pathogen status; vapor pressure deficit and root-zone and air temperatures; plant species, age, stage, and season of the year; and plant adaptation as a result of prior climate and soil conditions.
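To give a feel for how these parameters might combine, consider a deliberately simplified water balance. This is our own toy sketch, not the model proposed in the text: it assumes the root zone of depth z_r must drain from saturation (water content θ_s) down to the critical water content θ_c before re-aeration, at a rate approximated by the saturated hydraulic conductivity Ks, and that the total saturated time must not exceed the crop’s flooding tolerance.

```python
def drainage_time_days(z_root_cm, theta_s, theta_c, ks_cm_per_day):
    """Days for the root zone to drain from saturation to the critical
    water content, assuming a constant drainage rate of Ks (a strong
    simplification made only for this illustration)."""
    return z_root_cm * (theta_s - theta_c) / ks_cm_per_day

def water_application_days(crop_tolerance_days, z_root_cm,
                           theta_s, theta_c, ks_cm_per_day):
    """Planned flood duration: the crop's saturation tolerance minus the
    time the root zone remains saturated while draining after shutoff."""
    t_drain = drainage_time_days(z_root_cm, theta_s, theta_c, ks_cm_per_day)
    return max(0.0, crop_tolerance_days - t_drain)

# Invented example: a 100 cm root zone, theta_s = 0.40, theta_c = 0.30,
# Ks = 10 cm/day, and a crop tolerating 10 days of saturation.
# Drainage takes 100 * 0.10 / 10 = 1 day, leaving ~9 days for flooding.
print(round(water_application_days(10, 100, 0.40, 0.30, 10), 2))  # -> 9.0
```

The sketch shows why both crop characteristics (tolerance) and soil properties (Ks, θ_c) must be known before a flood duration can be planned.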
An estimate of the flooding tolerance of common perennial crops is provided in Table 3, which is an extended version of a previous survey. Annual crops were not included in Table 3 because it was assumed that these fields would usually be fallow during winter and spring, when excess surface water is available for Ag-MAR. Waterlogging tolerance in most fruit trees is determined mainly by the rootstock rather than the scion; tolerance is higher during dormancy, while trees are more prone to damage during bud break and growth. The plant tolerance scales in Table 3 have different definitions, as some authors use plant survival as the tolerance criterion while others use economic damage; these differences are indicated in Table 3. We note that the data in Table 3 should be used with caution, because most of them are based on expert opinion or experiments with seedlings, while very few waterlogging experiments have been conducted with bearing fruit trees.

The fit between the predicted and observed effective saturation ranges from poor to excellent, and the fit is generally better for the RZRT models that underwent calibration, followed by H1w and H5w. Obviously, a better fit between the predicted and observed water contents will lead to a more accurate estimation of twap.

Therefore, when possible, it is recommended to use the proposed RZRT model with site-specific hydraulic parameters. This is demonstrated in the Yolo silt loam soil, where a reasonable fit was not feasible without the use of site-specific hydraulic parameters. Note that site-specific parameters can differ considerably from the parameters obtained from the NCSS database. This is especially notable for the Ks values, which can vary by more than one order of magnitude. The reason for this discrepancy is attributed to the low spatial representation of each soil series in the NCSS database, which is based on a few soil pedons that are not always a good representation of the soil series where the field data were collected. In some cases, even when the overall effective saturation fit is poor, it is possible to estimate twap accurately provided that the fit is good over the range of Sc. This is demonstrated in the Harkey loam for the H5w fit. Note that for all soils H1w performs better than H5w, which is not as expected, because Rosetta3 is a hierarchical PTF in which the highest hierarchy should perform better than lower hierarchies. As noted above, this is because each of the H5w parameter sets in this study was based on only one soil pedon sample from a specific location, which was less representative than the H1w parameters, which are based on averaging a large number of soil samples by soil texture. The fit of the effective saturation between the proposed RZRT model and the numerical model HYDRUS-1D ranges from good to excellent, and in all cases HYDRUS fits the RZRT model better than the observed data. This indicates that the deviations between the RZRT model predictions and the observed water content data are probably due to soil layering, soil heterogeneity, and preferential flow, which cannot be captured by simplified homogeneous one-dimensional flow models.
Another explanation for the deviations between observed and modeled water contents could be an inappropriate setting of the models’ boundary conditions. This mainly refers to the assumption of free drainage at the bottom boundary, because the top boundary was controlled during the experiments. The performance of the RZRT model with a hard pan layer above or below the effective root zone was compared to HYDRUS simulations with similar settings. According to our limited test, the RZRT model with the harmonic-mean Ks is preferred during the infiltration period. During the drainage period, the arithmetic-mean or harmonic-mean Ks is preferred when the hard pan layer is above or below the effective root zone, respectively. As expected, when the hard pan layer is far below the effective root zone, the hard pan has no impact on the effective root zone. The total water applied calculated with Eq. and the harmonic-mean Ks is almost identical to the HYDRUS results. This demonstrates the impact of a hard pan on deep percolation, as the total amount of water applied was reduced by more than half when the hard pan layer was close to the root zone.
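The distinction between harmonic- and arithmetic-mean Ks matters because the harmonic mean is controlled by the least permeable layer, which is why it represents vertical flow through a profile containing a hard pan, while the arithmetic mean barely registers a thin restrictive layer. A minimal sketch (the layer thicknesses and Ks values are invented for illustration):

```python
def harmonic_mean_ks(thicknesses_cm, ks_values):
    """Effective vertical Ks of a layered profile (flow perpendicular to
    the layers): total thickness divided by the sum of layer resistances."""
    total = sum(thicknesses_cm)
    resistance = sum(d / k for d, k in zip(thicknesses_cm, ks_values))
    return total / resistance

def arithmetic_mean_ks(thicknesses_cm, ks_values):
    """Thickness-weighted arithmetic mean (flow parallel to the layers)."""
    total = sum(thicknesses_cm)
    return sum(d * k for d, k in zip(thicknesses_cm, ks_values)) / total

# A 90 cm profile: 80 cm of permeable soil (Ks = 10 cm/day) over a
# 10 cm hard pan (Ks = 0.1 cm/day).
layers = [80, 10]
ks = [10.0, 0.1]
print(round(harmonic_mean_ks(layers, ks), 2))    # dominated by the pan -> 0.83
print(round(arithmetic_mean_ks(layers, ks), 2))  # barely affected     -> 8.9
```

This is why a hard pan near the root zone can cut deep percolation by more than half, as reported above.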

Tree density was less than half of the values reported for well-drained sites in Canada.

Per-tree production, basal area, and biomass were similarly less than half of the values reported for well-drained sites in Canada. ANPP was about one third of the values reported for stands in Manitoba and one quarter of the values reported for larch forests in central Siberia and Scots pine forests in Finland. Differences in growth allometry between our Alaskan stands and those from northern Manitoba provide some evidence that the low productivity of the Alaskan stands may be due to moisture stress. Regional mean annual precipitation was 30% lower at our Alaskan sites than at the Manitoba sites, indicating that available soil moisture may be lower at our sites. Our Alaskan trees were significantly shorter and had less stem mass per unit increase in DBH than their Canadian relatives. In black and white spruce stands across Canada, reduced height and shoot growth has been linked to soil moisture deficits. In response to water stress, trees may grow more wood per unit height, apparently to decrease the potential for xylem embolism during periods of moisture stress. Our observation of changes in allometry and their influence on biomass also agrees with the observation that black spruce in Alaska may allocate more C below ground, where moisture appears to be more limiting. Moss biomass began to accumulate surprisingly early in succession, as indicated by the large increases in Ceratodon spp. and Polytrichum spp. over the first 4 years of succession in the 1999 dry site. Composition shifted to feather moss dominance in both the mesic and dry mature sites.

Because feather mosses lack water-conducting tissues, it was surprising that their production was similar between the mesic and dry sites despite an order-of-magnitude difference in moss biomass pools. As a result, ANPP per unit biomass, or production efficiency, was drastically lower in the mesic site, which may indicate lower light or nutrient availability in this site, where mosses are both densely packed and beneath a closed canopy. Alternatively, it may indicate that there is more brown moss in the mesic stand. Due to cool soils and moist conditions, decomposition of senescent moss may be slower in the mesic stand than in the dry stand, resulting in more intact brown material. Our measurements of moss biomass pools in the mesic mature stand were on par with green plus brown biomass pools in a black spruce/feather moss community in Washington Creek, AK, and twice as large as estimates for a similar community in Canada where only the green biomass was sampled.

The growth of the “critical zone” paradigm has added impetus to closer investigation of soil-plant-atmosphere interactions in ecohydrology. This follows from work emphasizing the importance of vegetation in regulating the global terrestrial hydrological cycle, with transpiration being the dominant “green water” flux to the atmosphere compared to evaporation from soils and canopy interception in most environments. More locally, the role vegetation plays in partitioning precipitation into such “green water” fluxes and alternative “blue water” fluxes to groundwater and stream flow has increased interest in the feedbacks between vegetation growth and soil development in different geographical environments. The emerging consequences of climatic warming for vegetation characteristics, and the implications of land use alterations, add further momentum to the need to understand where plants get their water from, and how water is partitioned and recycled in soil-plant systems.
Stable isotopes in soil water and plant stem water have been invaluable tools in elucidating ecohydrological interactions over the past decade. Earlier work by Ehleringer and Dawson explained the isotope content of xylem water in trees in terms of potential plant water sources. Building on that, Brooks et al. showed that the isotope characteristics of xylem water did not always correspond to bulk soil water sources, as plant xylem water was fractionated and offset relative to the global meteoric water line compared to mobile soil water, groundwater, and stream flow signatures.

This led to the “Two Water Worlds” hypothesis, which speculated that plant water was drawn from a “pool” of water that was “ecohydrologically separated” from the sources of groundwater recharge and stream flow. Research at some sites has found similar patterns of ecohydrologic separation and suggested it may be a ubiquitous characteristic of plant-water systems. Others have found that differences between plant water and mobile water may be limited to drier periods, or may be less evident in some soil-vegetation systems. Direct hypothesis testing of potential processes that may explain the difference between the isotopic composition of xylem water and that of potential water sources has been advanced by detailed experiments in controlled environments, often involving the use of Bayesian mixing models, which assume all potential plant water sources have been sampled. However, as field data become increasingly available from critical zone studies, more exploratory, inferential approaches can be insightful in quantifying the degree to which xylem water isotopes can or cannot be attributed to measured soil water sources. As this research field has progressed, it has become apparent that extraction of soil and plant waters for isotope analysis is beset with a number of methodological issues. Soil waters held under different tensions may have different isotopic characteristics: for example, freely moving water sampled by suction lysimeters often shows a much less marked evaporative fractionation signal than bulk soil waters, dominated by less mobile storage, extracted by cryogenic or equilibration methods. Such differences between extraction techniques may be exacerbated by soil characteristics, such as texture and organic content, which may in turn affect the degree to which water held under different tensions can mix. Similarly, the isotopic composition obtained from xylem sampling has been shown to depend on methodology.
It is usually assumed that methods such as cryogenic extraction isolate water held in xylem, when in fact water stored in other cells may be mobilized to “contaminate” the results.
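The attribution of xylem water to measured sources rests, in its simplest form, on endmember mixing. A minimal two-endmember sketch of the calculation underlying the mixing models mentioned above (the δ values are invented for illustration; real studies use Bayesian mixing models over many sources and both isotopes):

```python
def source_fraction(delta_xylem, delta_a, delta_b):
    """Fraction of xylem water drawn from source A in a two-endmember
    mixing model: f_A = (d_xylem - d_B) / (d_A - d_B)."""
    return (delta_xylem - delta_b) / (delta_a - delta_b)

# Invented example with delta-2H values (per mil): shallow soil water
# at -60, groundwater at -110, and measured xylem water at -90.
f_shallow = source_fraction(-90.0, -60.0, -110.0)
print(round(f_shallow, 2))  # -> 0.4, i.e. ~40% shallow soil water
```

A xylem value falling outside the range spanned by the sampled endmembers (f outside 0 to 1) is one signal that a source is missing or that fractionation has occurred, which is exactly the interpretive difficulty discussed above.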

Interpretation of plant-soil water relationships can also be complicated by processes in plants and soils that alter isotopic compositions independently. For example, the spatio-temporal isotopic composition of soil water can change dramatically in relation to precipitation inputs, evaporative losses, internal redistribution, and phase changes between liquid and gaseous phases. Moreover, there is increasing evidence that plant physiological mechanisms may affect water cycling and the composition of xylem water. These include effects of mycorrhizal interactions in plant roots that may result in exchange and fractionation of water entering the xylem stream. Research also indicates that as flow in xylem slows, diffusion and fractionation can occur, which may involve exchange with phloem cells. Finally, there is increasing evidence that water storage and release from non-xylem cells may sustain transpiration during dry periods or early in the day, also affecting xylem composition. Thus, there is a need to understand the different timescales involved in uptake processes in the rooting zone, and the residence times and mixing of water under different vegetation covers. There is also evidence of differences in how such factors affect water movement in angiosperms versus gymnosperms, as well as species-specific differences. Clearly, these methodological issues will take some time to address; in the interim, there is a need for cautious interpretation of emerging data from critical zone studies in order to improve our understanding.

A striking feature of isotopic studies of soil-vegetation systems is a bias toward lower and temperate latitudes, with northern latitudes and cold environments being under-represented. Yet northern environments present particular challenges and opportunities to further advance the growing body of knowledge about plant-soil water interactions.
For example, the coupled seasonality of precipitation magnitude and vegetative water demand can be complicated by the seasonality of the precipitation phase. Cold-season precipitation that accumulates as snow can replenish soil water in the spring and be available to plants months after deposition. Despite the lack of studies, these areas are experiencing some of the most rapid changes in climate and, as a result, in vegetation. The effects of climatic warming on patterns of snowpack accumulation and melt can have particularly marked consequences for soil water replenishment and plant water availability, particularly at the start of the growing season. Despite the importance of northern environments, their remoteness and harsh environmental conditions create logistical problems that constrain lengthy field studies and data collection. This study seeks to contribute to the growing body of knowledge about plant-soil water interactions by expanding the geographical representation of sites in cold northern environments. We report the findings of a coordinated project on xylem water isotopic data collection in the dominant soil-vegetation systems of five long-term experimental sites. Isotopic characteristics of soil water have previously been reported for all five sites, using a comparative approach with, as far as possible, common sampling methods across the sites for a 12-month period. Here, we present xylem water isotopic composition data collected using common methods over the same time period, encompassing the complete growing season, and then relate the findings to soil water isotopic compositions. The study was conducted at five long-term experimental catchments across the boreal or mountainous regions of the northern latitudes.

The catchments were part of the VeWa project, funded by the European Research Council, investigating vegetation effects on water mixing and partitioning in high-latitude ecosystems. Previous inter-comparison work on this project has examined issues such as the changing seasonality of vegetation-hydrology interactions, soil water storage and mixing, water ages, and modelling the interactions between water storage, fluxes, and ages. At each site, plants and surrounding soils were sampled concurrently for isotope analysis following a common sampling protocol. Depending on the nature of the soil cover, the maximum depth of sampling varied from -20 cm at Bruntland Burn to -70 cm at Dry Creek. Sampling took place at 5 cm intervals for Bruntland Burn, Dorset, and Krycklan, with two to five replicates for each depth. At Dry Creek, sampling was done at -10, -25, -45, and -70 cm with two to four replicates. Sampling depths at Wolf Creek varied between -2 and -40 cm with one to three replicates. Daily soil moisture data based on continuous soil moisture measurements at 10 or 15 cm soil depth were available for each soil water sampling location at Bruntland Burn, Dry Creek, Krycklan, and Wolf Creek. Only weekly manual soil moisture measurements were available for Dorset, so daily soil moisture data were derived from soil physical modelling. The volumetric soil moisture data were used to assess the hydrologic state on the sampling days. Plant samples from trees with a diameter > 30 cm were taken horizontally with increment borers at breast height. Retrieved plant xylem cores were placed directly in vials without bark and phloem. Shrub vegetation was sampled by clipping branches; these were immediately placed in vials after the bark was chipped off or left on. All vials were directly sealed with parafilm and immediately frozen until extraction was conducted at Boise State University, Boise, Idaho, USA.
There were five replicates for each species and sampling day at Bruntland Burn, Krycklan, and Dorset. At Wolf Creek, the number of replicates varied between two and five, and there were always four replicates for each sampling campaign at the Dry Creek sites. In total, 1160 xylem water samples were collected: 831 from angiosperms and 329 from gymnosperms. Dates of sampling events varied at each site but included the end of the growing season/senescence, pre-leaf-out the following year, post-leaf-out, peak growing season, and senescence. Precipitation was sampled daily or on an event basis at Bruntland Burn and Krycklan. Daily to fortnightly precipitation sampling was conducted at Dorset, Dry Creek, and Wolf Creek. Meltwater was sampled from lysimeters at Krycklan, Dorset, Dry Creek, and Wolf Creek during several snowmelt events, while snowfall seldom occurred over the study year at Bruntland Burn. Various measures, including paraffin oil and water locks, were taken to prevent evaporation of collected precipitation prior to transfer to the laboratory. The long-term groundwater signal was assessed at all sites apart from Dorset, using several sampling campaigns of springs and wells tapping the saturated zone over the last few years. There were no nearby wells from which to sample the regional groundwater at Dorset, which lies well below the surface in the granitic gneiss and amphibolite bedrock.

Almond kernel necrosis was indicated by a kernel with brown or black necrotic areas.

Maier et al. also reported similar retention of total anthocyanins in gels stored for 24 weeks at 6 °C and 24 °C. The lower amount of anthocyanins recovered in the gummies stored at 4.4 °C compared to the same product stored at 21 °C may be explained by reduced extraction efficiency due to hardening of the gel at low temperature, rather than by degradation late in storage. Changes in the major individual anthocyanins in the gummy product stored at 4.4 °C and 21 °C over eight weeks of storage are shown in Figure S5. At 4.4 °C, all the individual anthocyanins decreased with storage time, with retentions <50%, except for the two co-eluting anthocyanins galactoside + cyanidin-3-galactoside and malvidin-3-glucoside. The percent retentions of the rest of the anthocyanins at this storage temperature ranged from 29.3% to 49.2%. When stored at 21 °C, two anthocyanins did not significantly decrease with time, namely the unknown delphinidin derivative and the two co-eluting compounds galactoside + cyanidin-3-galactoside. For the rest of the anthocyanins, percent retentions ranged from 40% to 71%. In all the products, the individual anthocyanin losses did not appear to be affected by the anthocyanidin structure or by the type of sugar moiety attached. The distribution of the products according to their individual anthocyanin profiles, as affected by storage time, can be visualized on a PCA scores plot. The first principal component explained 83.9% of the variation, with all the individual anthocyanins loading positively on PC1. Therefore, PC1 represents the amount of individual anthocyanins.
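The retention figures quoted here are simple ratios, and percent polymeric color, discussed below, is likewise a ratio of bleaching-resistant absorbance to total color density. A minimal sketch with invented concentration and absorbance values, shown only to make the two metrics concrete:

```python
def percent_retention(c_initial, c_stored):
    """Share of the initial anthocyanin concentration surviving storage,
    as a percentage."""
    return 100.0 * c_stored / c_initial

def percent_polymeric_color(polymeric_color, color_density):
    """Bisulfite-bleaching assay: absorbance that resists bleaching
    (polymeric color) as a share of total color density."""
    return 100.0 * polymeric_color / color_density

# Invented example: a compound dropping from 1.20 to 0.59 mg/g over
# eight weeks of storage.
print(round(percent_retention(1.20, 0.59), 1))        # -> 49.2
# Invented absorbances: polymeric color 0.35 vs color density 0.90.
print(round(percent_polymeric_color(0.35, 0.90), 1))  # -> 38.9
```

Because polymerized anthocyanins resist bleaching but are no longer counted as monomers, a rising percent polymeric color alongside falling retention is consistent with conversion to polymers rather than true degradation, as argued below.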

The juice and ice pop samples had high scores on PC1. The oatmeal bar samples also had positive scores on PC1 at the earlier storage times, whereas the oatmeal bar samples stored at 21 °C for eight weeks were the only oatmeal samples to have a negative score. Except for the control samples, all the graham cracker cookie samples had negative scores on PC1, regardless of storage temperature. Finally, all gummy samples had negative scores on PC1, with scores becoming smaller with storage time. The PCA confirmed higher levels of anthocyanins in the juice and ice pop samples and, to a lesser extent, the oatmeal bars. The graham cracker cookie and gummy samples did not show high anthocyanin values, with a clear loss of anthocyanins with storage time for the gummy samples.

Percent polymeric color values typically show an inverse correlation with total anthocyanins during storage of blueberry products, and inverse correlations with each individual anthocyanin in all the products at both storage temperatures. Higher percent polymeric color values indicate that a higher percentage of anthocyanins are resistant to bleaching in the presence of potassium metabisulfite. Since the sulfonic acid adduct attaches at C4 on the middle heterocyclic ring, it is thought that anthocyanin–procyanidin polymers are formed via a direct condensation reaction, resulting in C4–C8 anthocyanin–procyanidin linkages as the major polymers formed in blueberries during storage. Hence, it is possible that the declines in anthocyanins during storage of the blueberry products are not true losses due to degradation, but rather the conversion of monomeric anthocyanins to anthocyanin–procyanidin polymers. Anthocyanins can also be degraded via a hydration reaction, in which the flavylium ion is converted to a hemiketal structure, which is rapidly converted to cis-chalcone, which in turn slowly rearranges to a trans-chalcone structure.

The trans-chalcone structure is highly unstable and rapidly degrades to hydroxybenzoic acid derivatives. However, we do not consider that this reaction was responsible for anthocyanin losses in the blueberry products over storage, since we did not observe an increase in phenolic acid derivatives in our HPLC chromatograms at 280 nm. The stability of chlorogenic acid in the four blueberry products stored at 4.4 °C and 21 °C is shown in Figure 5. Chlorogenic acid was stable in all products over storage regardless of storage temperature, except for the juice and oatmeal bar stored at 4.4 °C, where levels significantly decreased. At 4.4 °C, the chlorogenic acid content decreased from 4.3 to 3.6 mg/g WBB powder in the juice and from 3.0 to 2.6 mg/g WBB powder in the oatmeal bar. At 21 °C, chlorogenic acid in the juice showed a slight increasing trend; however, this change was not statistically significant. Chlorogenic acid was also stable in the ice pop over eight weeks of storage at −20 °C, with an average value of 6.5 mg/g WBB powder over storage. Initial levels of chlorogenic acid were higher in all products stored at 21 °C than in those stored at 4.4 °C, which may be due to variation in processing the two sets of samples for the storage study, or possibly to degradation of chlorogenic acid in the WBB powder used to prepare the products. The WBB powder used to prepare samples for the refrigerated storage study was stored at 15.5 °C for three months prior to preparing the samples. Blueberries contain polyphenol oxidase, which can readily oxidize chlorogenic acid. Chlorogenic acid was previously found to be stable in blueberry juice, puree, and canned berries stored for six months at 25 °C, but blueberry jams lost 27% of their chlorogenic acid over six months of storage at 25 °C.

Leaf-footed bugs in the genus Leptoglossus Guérin-Méneville are large phytophagous insects native to the Western Hemisphere.
At least 61 species are known, and several species are pests in forests or agricultural systems.

Many Leptoglossus spp. are multivoltine, which allows them to exploit multiple hosts per year. Direct damage to crops is caused when Leptoglossus spp. feed by probing their stylets into fruits and seeds, and secondary damage can occur through the transmission of pathogens at the feeding site. Field studies assessing insect feeding damage can provide information about the phenology of the pest, pinpoint when during the growing season feeding occurs, and determine when the crop is susceptible to damage or losses. Two species of Leptoglossus, Leptoglossus clypealis Heidemann and Leptoglossus zonatus, are occasional pests feeding on almond and pistachio crops in the Central Valley of California. L. clypealis was considered to have a more limited distribution in the western United States, but is now reported to occur through the Midwest into Illinois, with some additional records from the east coast. While L. clypealis is noted in California for infesting almonds and pistachios, it has been recorded from at least twenty host plants throughout its range. L. zonatus is found in much of the Western Hemisphere, ranging from Brazil into the southern United States, on a wider range of host plants including citrus, pomegranates, almonds, and corn, among others. In California, Leptoglossus spp. are reported to overwinter in adult aggregations. As temperatures warm in the spring, the adults disperse from aggregations and can be observed in almond orchards. Feeding by L. zonatus and L. clypealis on almonds results in clear sap exuding from developing fruit, known as gummosis. Early-season feeding by these two species in March and April can result in almond drop, while feeding later in the growing season can directly damage almond kernels and result in losses. Both L. zonatus and L. clypealis are reported to have become more abundant in the last few years, perhaps due to increased plantings of almonds in California.
Approximately 1.36 million acres of almonds were cultivated in California in 2017, with an estimated value of $5.6 billion. Determining the level of damage from feeding by L. clypealis and L. zonatus during the growing season in a field experiment will help establish the relative damage from each of these insects and demonstrate when the almond crop is most vulnerable to Leptoglossus feeding, which in turn will inform the timing of prevention and control measures.

The objectives of this work were to determine the level of almond drop from feeding by adult L. clypealis and L. zonatus, compare how almond drop varies during the growing season, consider almond size and its relationship to feeding damage, and quantify the final damage to almonds at harvest from feeding by L. clypealis and L. zonatus. The effect of adult L. clypealis and L. zonatus feeding was evaluated on four almond varieties during the growing season from the end of March until mid-August. The four experimental treatments were controls, mechanical damage to the developing almonds, feeding by adult L. clypealis, and feeding by adult L. zonatus. All four treatments included an almond branch with approximately 20 almonds, covered by a sleeve cage consisting of a 5-gallon organdy mesh paint strainer closed with a large binder clip. Control branches with almonds served to determine the natural level of almond abscission during the growing season. The second treatment consisted of branches with almonds that were mechanically punctured to mimic the feeding damage caused by the insect stylet probing into developing nuts. Each developing almond was punctured 4–5 times with a #1 insect pin. Puncturing almonds served an additional purpose, which was to provide an estimate of the time of shell hardening; shells typically became resistant to puncture by the end of April. The third treatment consisted of five adult L. clypealis, which were allowed to feed for 4–6 d and were then removed. The fourth treatment was similar to the third but used five adult L. zonatus. For these treatments, insects were taken from the lab colony and were first isolated with only water for 24 h before being placed into an experimental sleeve cage. Each week in each almond variety, four branches were set up as controls, four were set up with punctured almonds, one branch was set up with L. clypealis, and one with L. zonatus.
For approximately eight weeks, the four treatments were replicated in the same manner on new trees in each of the four almond varieties. For the Monterey and Carmel varieties in 2014, fields could not be entered during two weeks due to flood irrigation, which resulted in six weeks of observations rather than eight. In 2015, the same experiment was repeated on the same four varieties, except that feeding damage was assessed for L. zonatus but not for L. clypealis, due to an insufficient number of adult L. clypealis to complete the experimental replicates. Each week, data were recorded on the number of almonds fallen from branches within all cages set up in previous weeks. These data were used to determine the mean percent almond drop in each of the four treatments for each of the four varieties. An analysis of variance was considered for comparing means, but the data were not normally distributed, even after log transformation. Thus, nonparametric Kruskal–Wallis tests were used, as they do not assume a distribution for the data. Post-hoc pairwise comparisons were by Steel–Dwass tests and were considered significant if p < 0.05. In 2015, similar comparisons were made for the mean almond drop for three treatments within each almond variety. To examine when almonds were most susceptible to drop from feeding by each Leptoglossus species, the mean percent almond drop was compared among the experimental weeks within each bug-feeding treatment. A Chi-square goodness-of-fit test was used first to examine whether the percent almond drop from a Leptoglossus species was equal among the weeks of the study for each almond variety. If almond maturity had no impact on insect feeding damage, the percent drop would be equal among weeks of the study. When a significant difference among weeks was observed, subsequent pairwise comparisons of weeks were made with Fisher's exact tests.
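The nonparametric comparisons described above can be sketched with scipy. The per-cage almond-drop percentages below are hypothetical, and since the Steel–Dwass post-hoc test is not available in scipy, this sketch only shows the Kruskal–Wallis omnibus test and a Fisher's exact comparison of two weeks.

```python
# Sketch of the statistical comparisons, assuming hypothetical data.
from scipy import stats

# hypothetical percent almond drop per cage, by treatment
control   = [5, 8, 4, 6, 7]
punctured = [10, 12, 9, 11, 8]
zonatus   = [35, 42, 38, 30, 44]

# Kruskal-Wallis: do the three treatments share a common distribution?
H, p = stats.kruskal(control, punctured, zonatus)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")

# Fisher's exact test comparing two weeks; each row is a hypothetical
# [dropped, retained] count of almonds for that week
odds, p_weeks = stats.fisher_exact([[15, 5], [4, 16]])
print(f"odds ratio = {odds:.1f}, p = {p_weeks:.4f}")
```

In practice the Steel–Dwass pairwise comparisons would follow a significant Kruskal–Wallis result; third-party packages (e.g. scikit-posthocs) provide implementations.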
Just before harvest, the almonds remaining in the field cages were removed to conduct a final damage assessment. For each control branch and each branch with mechanically damaged almonds, a sub-sample of four almonds was used to assess several damage parameters. For branches caged with L. clypealis or L. zonatus, all remaining almonds were removed and used for the final damage assessment. Four parameters of feeding damage were determined: hull strikes, almond kernel necrosis, strikes on the kernel, and shriveled kernels. A strike on the hull was characterized by a black or brown spot. Damage was recorded for each category as presence or absence.

Soil water content is a key control on plant growth and health

Decreased extraction despite crop intensification was largely enabled by increases in irrigation efficiency, including decreased water losses during transport to fields and basin-wide implementation of drip irrigation. Drip irrigation is known to produce much less off-field sediment transport during the irrigation season than sprinkler and furrow irrigation techniques. Thus, large-scale conversion from furrow to drip irrigation led to decreases in irrigation water use, and may have also played a role in decreasing Salinas River suspended sediment concentrations in the latter 20th to early 21st centuries.

Decreasing trends in suspended sediment concentration-discharge relationships were observed in the lower Salinas River from 1967–2011 despite increasing wildfire activity and agriculture in the watershed over this period. Increases in effective burn area and total crop area have generally been found to increase sediment production at the watershed scale. Shifts in crop structure were dominated by a rise in row crops over this period, which would also have been expected to increase sediment production. Row crop fields are often left bare over the winter, rendering them prone to rainfall- and runoff-driven erosion, and degradation of the necessary drainage networks of earthen ditches can result in further increases in sediment export. With the exception of changes in irrigation practices, potential control of decreasing suspended sediment loads by other anthropogenic activities can be discounted due to limited areal extent or timing. Urbanization increased, but only to approximately 2% of the Salinas River watershed area.

While urbanization can lead to decreases in discharge-corrected CSS values by increasing the production of runoff from precipitation without concomitant increases in sediment production, no shift in the P–Q relationship was observed in the Salinas River between 1967 and 2011. Conversely, the damming of Salinas River subbasins and attendant sediment trapping has been estimated to have significantly decreased sediment flux relative to pre-dam conditions. However, dam emplacement in the Salinas watershed occurred before the period of suspended sediment record, and the trapping characteristics of the reservoirs are not expected to have changed significantly over the intervening years. Wildfire activity was insufficient to counteract the negative inter-decadal trend in suspended sediment load, even though the years with the highest effective burn areas in the Salinas River watershed fell toward the end of the record. There was some indication that the large fires preceding the 1978 water year, coupled with the high Q-producing storm events of that year, may have increased suspended sediment load, in agreement with the findings of Warrick et al. However, other years with high effective burn areas and relatively high Q-intensities did not express consistent increases in CSS. The lack of wildfire control on inter-decadal scale trends in sediment loads may be due to issues of scale and the areal extent of burning. Often, findings of dominant wildfire control come from studies of small, headwater catchments that have experienced burning over a high proportion of their land area. Indeed, the suspended sediment flux from the Arroyo Seco subbasin was found by Warrick et al. to be highly controlled by the coincidence of wildfire and large storms, on the basis of two instances of nearly complete burning of the watershed's oak and chaparral shrublands.
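The concentration-discharge relationships discussed here are conventionally expressed as a power-law sediment rating curve, CSS = a·Q^b, fit in log-log space. A minimal sketch with hypothetical discharge and concentration values:

```python
# Sketch of a suspended sediment rating curve fit, assuming hypothetical data.
import numpy as np

Q   = np.array([2.0, 5.0, 12.0, 40.0, 110.0, 300.0])   # discharge, m^3/s
CSS = np.array([90., 180., 340., 800., 1700., 3600.])  # suspended sediment, mg/L

# linear fit of log10(CSS) on log10(Q): slope = b, intercept = log10(a)
b, log_a = np.polyfit(np.log10(Q), np.log10(CSS), 1)
a = 10 ** log_a
print(f"CSS ~ {a:.1f} * Q^{b:.2f}")
```

A downward shift in the fitted coefficient a between decades, at a similar exponent b, is one way a decreasing concentration-discharge trend such as the one reported for the lower Salinas River would manifest.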

The role of fire in maintaining chaparral vegetation communities and dominating sediment production in the semi-arid foothills and low mountains typical of coastal central and southern California has been extensively reported. In contrast, the undammed Salinas River watershed is an order of magnitude larger than the Arroyo Seco, with about half the average relief, and extensive agricultural development consisting of irrigated agriculture in lowlands and extensive grasslands on lower foothill slopes. As a result, even the largest EBAs calculated for the Salinas River were only 10% of the undammed watershed. Thus, the larger land area of the Salinas River watershed, with numerous tributary drainages and divides and attendant mosaics of vegetation and microclimates, has resulted in a mosaic of small wildfires relative to watershed size during any given fire season. Disconnected fire patches would be expected to produce less effective transfer of fire-generated hillslope sediments to channels than a completely burned watershed. Lowlands in the mainstem drainage network of the Salinas River also likely present a sink that moderates the signal of hillslope sediments produced from burned land surfaces, further obscuring the signature of more extensive burn years. Lavé and Burbank found a similar disappearance of wildfire control on inter-decadal scale sediment production when scaling up from small, 10⁻¹ to 10¹ km² scale headwater subbasins to 10² km² scale watersheds in the San Gabriel Mountains of southern California. Furthermore, semi-arid systems with intermittent flow and high discharge losses to groundwater recharge, like the lower Salinas River, tend to have longer residence times for suspended sediment due to increased incidence of in-channel deposition, particularly during flows into dry channels.
Previous and continuing alterations to the Salinas River and its watershed may have exacerbated the attenuation of hillslope sediment production signals by destabilizing the channelized system of the lower Salinas and drawing down the ground water table.

Widespread deforestation along the banks of the lower Salinas River in the 19th century appears to have decreased bank strength and led to a transition from a single meandering channel to a disorganized sandy active corridor, with localized and incipient braiding, which persists today. Intensive groundwater pumping remains above replacement, which could further exacerbate this scenario of lowland moderation of highland sediment production signatures by increasing the proportion of channelized flow abstracted to groundwater recharge. Indeed, early wet season flows have been observed to completely attenuate before reaching gauges S1 and S2, thus depositing their entire suspended sediment loads into the channel. A more direct human cause of the overall negative trend in CSS, and of the period of low CSSf that has persisted since the mid-1990s, is the conversion of agricultural operations to drip irrigation. Previously dominant methods of irrigation, particularly furrow, were known to produce large amounts of sediment through off-field transport and irrigation canal erosion. Drip irrigation has been shown to result in much lower off-field transport of sediment than sprinkler and furrow methods, and was introduced to California in the early 1960s. Large-scale shifts in agricultural practices toward drip irrigation were contemporaneous with the decrease in the suspended sediment concentration-discharge relationship observed for fine and sand-sized sediment in the lower Salinas River. Although drip irrigation covered only ~14% of irrigated land by 1993, over the next 17 years the land area under drip irrigation quadrupled, replacing sprinkler and furrow methods as the primary irrigation practice in the Salinas Valley.
This change in irrigation technology may be the dominant driver not only of the decreasing fine and sand sediment production trend found from 1967–2011, but also of the timing of the departure from the hydrologic and climatic controls that Gray et al. found for fine sediment in the late 1990s to early 2000s. Drip irrigation has largely replaced older methods of irrigation for certain crops throughout California and other semi-arid or dry-summer climatic regions over recent decades, primarily due to increases in yields of high-value row crops such as tomatoes. Widespread adoption of drip irrigation for such crops could have the unintended side effect of reducing the entrainment of agricultural sediments into fluvial systems in these regions, which may have beneficial water quality consequences. Fluvial sediments are the greatest single impairment of rivers and streams in California, many other parts of the U.S., and the world. Furthermore, agricultural sediments are often exposed to surface-reactive nutrients such as phosphates, and to multiple pesticides, many of which are hydrophobic and primarily transported off-site in association with fine sediments. Although winter season erosion remains an issue on such fields in single-cropped areas, the time between pesticide application and off-field transport is much longer for sediments eroded during the winter, perhaps decreasing winter fluxes of sediment-associated pesticides. Thus, increases in drip irrigation use could yield a potential benefit in reducing the delivery of agricultural sediment to water bodies, particularly during the times when these sediments are most contaminated.

It is well recognized that the inadequacy of conventional approaches for characterizing key parameters and monitoring key processes over large enough areas, yet with high enough resolution, hinders our ability to optimally manage our natural water resources.

High-resolution geophysical methods, such as GPR, hold promise for improved and minimally invasive characterization and monitoring of the subsurface. Here, we review several case studies where we have successfully used GPR for a variety of environmental and precision agricultural investigations. Section 2 focuses on the use of GPR for estimating parameters that are important for environmental and agricultural applications, such as hydraulic conductivity, sediment geochemistry, lithofacies zonation, and water content. Section 3 focuses on the use of time-lapse geophysical methods for assisting with remediation investigations, such as detecting the biogeochemical and hydrological processes that occur during remediation and the distribution of remediation amendments. This collection of case studies illustrates the utility of GPR for environmental and agricultural applications. Geophysical data are being increasingly used in hydrogeological site characterization to obtain a better understanding of heterogeneity and its control on flow and transport. Such data can bridge the gap between the typically sparse conventional field characterization data and the need to realistically parameterize numerical transport models. In this section, we focus on the use of GPR to estimate hydrogeologic parameters that are important for agricultural and environmental studies, such as water content, lithofacies zonation, hydraulic conductivity, and sediment geochemistry. Many of these studies also involved the development and use of stochastic estimation methodologies, which have enabled us to systematically fuse GPR data with other sparse but direct measurements. Recent studies have shown that careful irrigation management can have beneficial effects on many crops, including almonds, citrus, prunes, pistachios, and wine grapes. In particular, moderate water stress on grapevines early in the growing season can have a positive impact on grape quality.
Thus, understanding when and how much irrigation to apply is critical for optimized wine grape production. Natural geologic processes, however, can cause soils and their associated water-holding capacity to vary significantly, even over distances of a few meters. Given that the "industry standard" for vineyard soil characterization is to collect soil or water content measurements on a 75 m grid, grape growers typically do not have enough information about water content variations to guide precision irrigation. We have used GPR methods to estimate soil water content within agricultural sites non-invasively and with high spatial resolution. Using 900 MHz GPR ground wave travel time data, we estimated the soil water content distribution in the top 15 cm of the soil at high spatial resolution and as a function of time at the Robert Mondavi Vineyard in California. Comparison with conventional "point" soil moisture measurements, obtained using time domain reflectometry and gravimetric techniques, revealed that the GPR-obtained volumetric water content estimates were accurate to within 1% by volume. The density of the obtained water content estimates was perhaps the highest of any shallow moisture measurements obtained to date; the study produced 20,000 measurements of soil water content over the 3-acre study site. Water content distribution in deeper layers can also be obtained using GPR reflection arrivals if sufficient contrasts in dielectric properties exist. For example, Figure 1 illustrates the average volumetric water content estimated using data from 100 MHz GPR reflections associated with a subsurface channel, located 1.0–1.5 m below ground surface at a 2-acre Dehlinger Vineyard Chardonnay block in Sonoma County. Figure 1 shows how the channel has influenced subsurface water distribution, and suggests a correlation between water distribution and canopy density.
At this site, as well as at the Mondavi site, it was clear that water content distribution was linked to variations in soil texture and canopy vigor. Huisman et al. provide more information about the use of GPR for water content estimation. We have also used cross-hole geophysical data to provide multi-dimensional estimates of hydraulic conductivity at a DOE bacterial transport site located near Oyster, VA.
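The ground-wave approach described above can be sketched in a few lines: the travel time over a known antenna separation gives the near-surface wave velocity, the velocity gives an apparent dielectric permittivity, and an empirical petrophysical relationship (Topp et al., 1980, is the common choice) maps permittivity to volumetric water content. The antenna separation and travel time below are hypothetical.

```python
# Sketch: GPR ground-wave travel time -> volumetric water content.
C = 0.3  # speed of light in free space, m/ns

def water_content_from_groundwave(separation_m: float, travel_time_ns: float) -> float:
    v = separation_m / travel_time_ns   # ground-wave velocity, m/ns
    kappa = (C / v) ** 2                # apparent dielectric permittivity
    # Topp et al. (1980) empirical permittivity-to-water-content relationship
    theta = (-5.3e-2 + 2.92e-2 * kappa
             - 5.5e-4 * kappa ** 2 + 4.3e-6 * kappa ** 3)
    return theta

# e.g. 1 m antenna separation and a 10 ns ground-wave travel time
theta = water_content_from_groundwave(1.0, 10.0)
print(f"volumetric water content ~ {theta:.3f}")  # ~0.168, i.e. ~17% by volume
```

Site-specific calibration against TDR or gravimetric samples, as done in the Mondavi study, is what supports the reported ~1% volumetric accuracy; the Topp relationship alone is a general-purpose approximation.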

The specification of a formal criterion function would allow the search for the optimal set of policies

We use our estimates of the response of global crop productivity to temperature change as an input to the GTAP CGE model in order to determine how national economic welfare is affected by climate impacts on crop yields. Figure 5 gives the global damage functions within the impacted sectors. Among all the cases that include CO2 fertilization, welfare changes are negligible at 1–2 degrees of warming, becoming negative at 3 degrees. In contrast, the No-CO2 case shows substantial global welfare losses, even at 1–2 degrees. Uncertainty bounds are large, meaning that no cases are statistically different from the reference case at the 95% confidence level. But the uncertainty in potential yield losses is also highly asymmetric: the possibility of large welfare losses is substantial, whereas welfare gains are both smaller and less likely, particularly for a warming of 2–3 degrees. These welfare changes depend on modeled changes in harvested crop areas, production intensity, and consumption. We believe there is a general perception that empirical studies give more pessimistic estimates of crop response to warming than do process-based models. However, there is a lack of systematic comparisons between the two methods. In particular, because empirical studies do not include CO2 fertilization whereas process-based studies generally do, it is important to account for this difference when comparing the temperature response from the two methods. Here we are able to do this statistically, showing that once CO2 effects are controlled for, differences between empirical and process-based responses may be smaller than generally believed. Though the point estimates do show some evidence of more negative impacts from statistical studies at higher temperatures, the effect is not precisely estimated and error bars are large.

The poor representation of empirical studies within the yield impacts database, particularly at higher levels of warming, is a major limitation of this analysis. Inclusion of more recent studies would help with this, but is not always straightforward. Many recent papers report the marginal effect of growing degree days rather than average growing season temperature, and converting from one to the other is not simple. Standardized reporting of the impacts of a 1 °C increase in average temperature in empirical papers would help with this and should be encouraged. In addition, as noted above, the number of points at which the continuous response function estimated in empirical papers should be sampled for inclusion in the database is inevitably arbitrary. Some standardization would be useful and would help with interpretation in the future. Another finding of this paper is that there is little evidence in the existing literature that farm-level adaptations will substantially reduce the negative impacts of climate change on yields. The results presented here suggest that many actions described as adaptation in yield modeling studies would raise yields both in the current and in the future climate, meaning they do not necessarily reduce the negative impacts of future warming. If actions would confer a benefit in the current climate but are not being adopted, economic logic suggests that models may be either overestimating their benefits or missing important costs of implementation. In either case, the potential for within-crop, farm-level adaptations that improve yields in the future climate more than in the present climate appears limited, at least as currently represented within the studies included in the meta-analysis. This paper confirms the importance of CO2 fertilization in determining the average global impacts of changing temperature over the 21st century.
Our results show the question of whether or not CO2 effects are included is more important than either the inclusion of adaptation or the type of study used to estimate the temperature response.

For maize, wheat, and rice, CO2 fertilization fully offsets the negative impacts of warming up to 1–2 degrees for the global average yield effect. This demonstrates the importance of future work to better constrain the magnitude of this benefit. While we find good agreement between our results and those derived from FACE experiments, at least for the C3 crops, there is evidence that the fertilization effect depends critically on water and nutrient availability. Capturing this heterogeneity in CO2 fertilization by crop and farming intensity could be important in improving estimates of the yield impacts of climate change at both global and regional scales. Because of the importance of the CO2 fertilization effect, it should be clearly communicated when climate change impacts are presented without CO2 fertilization, which is often the case with statistical papers and sometimes with process-based models. Finally, this paper makes the connection between models of crop productivity and economic welfare. This is an essential step for informing damage functions in the simple integrated assessment models, such as DICE, PAGE, and FUND, used to calculate the SCC. The economic impact results further underscore the importance of the CO2 fertilization effect: global welfare effects at 1–2 degrees of warming are negative without the CO2 fertilization effect but slightly positive for cases that include it. These results also show the complex connection between yield and welfare change. Despite error bounds on yield impacts being more or less symmetric, these same yield impacts give rise to highly asymmetric distributions over welfare changes, with substantial probability of large welfare losses.
This asymmetry arises despite the fact that the GTAP modeling framework allows for a large number of economic adaptations to moderate the adverse consequences of productivity shocks, including changing inputs, shifting crop areas, trade adjustments, and consumption switching.

The agricultural industry is often cited as a classic example of a competitive market. The observed performance of such markets, however, is the result not only of competitive forces but also of governmental intervention. Such intervention is often motivated by equity or distributional concerns.

Typically, the impact of such governmental intervention is evaluated only in terms of output markets. Such investigations are grossly inadequate, since governmental policies impinge directly on asset as well as flow markets for both inputs and outputs. In general, the distributional consequences depend upon the ownership, utilization, quality, and technology associated with the assets. This paper develops a framework for capturing the distributional implications of governmental intervention in the agricultural sector while recognizing its most important features. These features include competitiveness, asset fixity, rapid technological change, and institutional limits to credit availability. The first three features are documented by Theodore Schultz, Willard Cochrane, and G. L. Johnson. Theodore Schultz has also called attention to the large differences in the rates of return to resources among regions as well as across producers. Much of this variation emanates from differences in production techniques, human capital, and wealth controlled by individual producers. The limitations of credit availability for producers of different size classes have been noted in recent empirical evidence. This evidence strongly suggests that larger farmers borrow more; they borrow more to invest in capital; and their ability to borrow more stems, in part, from their higher repayment capacity. The equity and efficiency impacts of selected government policies have been addressed by a number of different frameworks, most of which are based on aggregative relationships. For example, in the agricultural development literature, aggregative relationships are specified for an agricultural sector and a non-agricultural sector. The micro-economic foundations of these frameworks, however, are not generally specified. As a result, the thorny problems of aggregation are pushed aside. Also, the distributional content of the results forthcoming from such models is not very rich.
The purpose of this paper is to advance a framework for evaluating the impact of governmental policies on agricultural production systems that is internally consistent at both the micro-level and the aggregate level. Assuming the major source of economic growth is technological change, the framework focuses on the incentives and constraints for technological adoption. Both the efficiency and distributional consequences of various policies are shown to depend upon landownership, land utilization, and the technology associated with land assets. To accomplish these purposes, a stylized model involving two technologies, traditional and new, is specified. At both the micro-level and the aggregate level, the framework admits a number of important features, including uncertainty, varying degrees of risk aversion, both fixed and variable costs of technological adoption, and credit as well as land constraints. The model design allows the evaluation of a wide array of policies. This set of policies includes price supports, credit-funding enhancement, credit subsidies, crop insurance, price stabilization, input subsidies, and extension promotion. The basic micro-economic foundations of the framework are developed in section 2. Section 3 focuses on the micro-economic behavior of various farmers under alternative policies. Aggregation operators are applied in section 4 to capture the relevant macro-level causal relationships. Finally, the concluding section examines the operational use of the framework. The focus of this paper is on the qualitative efficiency and equity effects of various policies. In the context of a simple theoretical model which incorporates a number of important features of the economic environment, propositions have been derived which reveal many insights for policy analysis. However, to operationalize these propositions, a considerable amount of empirical estimation is required.

Empirical analysis must begin by decomposing the farming population into relevant classes. This decomposition can be accomplished endogenously by the specification of a discrete/continuous behavioral model: the discrete choice relates to technology, while the continuous choice is the amount of land allocated. The explanatory variables appearing in this model include the vector of expected returns defined by technology, the variances and covariances of returns defined across technologies, the variable cost of new inputs, the opportunity cost of financial funds, the fixed setup costs of various technologies, and available credit. Estimated relationships between the above explanatory variables and the discrete technology choices and continuous land allocation choices are one component of the required empirical structure. A second component is an estimation of the distribution of landholdings. One potential distribution is the Pareto distribution specified in section 4. A third empirical component must relate the distribution of farm size to risk preferences. Estimation of this relationship will most likely require the use of primary data from representative samples. The final empirical component requires a set of linking equations between the policy instruments and the specified explanatory variables. For example, the empirical relationship between price supports and the vector of mean returns and the covariance matrix of returns across technologies must be determined. Armed with these four empirical components, a number of operational uses of the proposed framework are possible. First, one can simply simulate the effects of various policies through the four empirical components to determine the most effective integration of the various policies. This use of an empirical version of the proposed framework can only capture the quantitative effect of the proposed policy mixes; no attempt would be made to identify the optimal set of policies.
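A Pareto specification for the landholding distribution, of the kind referenced above, can be sketched by inverse-CDF sampling. The shape and scale parameters below are hypothetical; the sketch computes the land share held by the largest decile of farms, the sort of concentration measure the distributional analysis would use.

```python
# Sketch of a Pareto landholding distribution, assuming hypothetical parameters.
import numpy as np

rng = np.random.default_rng(42)
alpha, x_min = 1.5, 10.0   # Pareto shape and minimum farm size (acres)

# inverse-CDF sampling: F(x) = 1 - (x_min / x)^alpha  =>  x = x_min * U^(-1/alpha)
u = 1.0 - rng.random(10_000)          # uniform on (0, 1]
sizes = x_min * u ** (-1.0 / alpha)   # sampled farm sizes

# land share held by the largest 10% of farms
sizes_sorted = np.sort(sizes)
top_decile_share = sizes_sorted[-1000:].sum() / sizes.sum()
print(f"land share of largest 10% of farms: {top_decile_share:.2f}")
```

For a Pareto law with shape alpha, the expected share of total land held by the top fraction p of farms is p^(1 - 1/alpha), so heavier tails (smaller alpha) imply greater concentration of landholdings, which is exactly why the distributional consequences of policy depend on this component.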
Various trade-off relationships or alternative weightings in a scalar criterion function including the two principal performance measures, efficiency and equity, could be specified. Theory and intuitive reasoning can be utilized heavily in isolating those trade-offs which allow a set of scalar criterion functions to be examined by parametric analysis. When such criterion functions cannot be captured, parametric analysis can again be utilized, with some objectives expressed as constraints motivated, perhaps, by a lexicographic ordering and/or by satisficing arguments. Various solution algorithms are available that can be employed to enhance the determination of a global optimum. Another potential use of the four empirical components relates to the notion of political economic markets. In a positive analysis of government behavior, the four components can represent a constraint structure which, along with a specified criterion function, can be used to infer, via revealed preference methodology, the trade-off between efficiency and equity. Such a positive analysis would allow economic researchers to perform effectively the role of social critics; that is, if past policies imply a value scheme which in some sense deviates from the public interest, then the implicit choice of trade-offs between efficiency and equity should at least be debated. Along similar lines, various economic interest groups could also employ the four empirical components to determine which set of policies they are prepared to support or oppose.

Cooperatives are corporations that are owned and governed by the firms or people who use them; they differ from other businesses because they operate for the benefit of their members, rather than to earn profit for investors. Cooperatives have played an important historical role in promoting the economic welfare of California's agricultural producers.