Soil drainage was based on depth to water table and hydraulic conductivity.

In particular, we highlight the independent and, especially, the joint effects of climate and soil on trait variation, an interaction that has to date been neglected because few studies include both factors in a single analysis at the global scale, as we have done here. In doing so, we identify an important gap in knowledge: what is the nature of the climate–soil interactions that drive whole-plant trait variation, and what distinguishes the majority of climate and soil factors that have joint effects on plant traits from those with independent effects? These are the sorts of questions that must be answered to increase our capacity to predict plant functional diversity in a changing environment. Such predictive power would provide a sound basis for assessing long-term feedbacks between global environmental change and the terrestrial biosphere, helping to constrain parameters of coupled global climate–vegetation models. Humans are currently modifying both climatic and edaphic conditions at the global scale. Climate envelope models used to predict vegetation shifts must therefore be complemented by drivers related to large-scale anthropogenic alteration of soil conditions resulting, for example, from land-use change, atmospheric nitrogen deposition, fertilization, liming and salinization. Our global analysis provides an essential context for finer-scale studies that directly tackle questions of biological process and mechanism at landscape and community scales.

Fire is part of the natural disturbance regime of many boreal regions, although recent evidence suggests that anthropogenically induced climate change may be increasing the burned area in North American and Eurasian forests. Because high-latitude ecosystems store approximately 40% of global carbon stocks in biomass and soils, an amount equal to the atmospheric C pool, there has been considerable interest in understanding how these systems will respond to climate warming. Combustion of vegetation and forest floor transfers C directly from terrestrial ecosystems to the atmosphere, so increased burned area or fire intensity in the boreal biome could be a strong, positive feedback to atmospheric CO2 concentrations, at least in the early years after fire. At a regional scale, the effect of fire on species composition, soil drainage, and stand age distribution will ultimately regulate whether the CO2 feedback is positive or negative. The response of this long-term signal to the combination of climate change and an altered fire regime is largely unknown for the boreal biome. Patterns of plant species composition, biomass accumulation, and productivity across post-fire succession are important determinants of the amount, structure, residence time, and decomposability of C inputs to these systems. In Interior Alaska, black spruce stands cover approximately 70% of the forested area and occupy landscape positions that range from permafrost-free, well-drained soils to permafrost-dominated, poorly drained soils. These forests are highly flammable because of their architecture and resin production as well as the thick moss layer on the forest floor, and fire return intervals range from 70 to 100 years. Wildfires tend to be large and high-intensity, and while few boreal overstory species survive fire, many understory species re-sprout after fire from buried or protected meristems.

Black spruce is semi-serotinous and releases seed after fire, with the majority of trees recruiting in the first 5 years after fire. Stands may or may not go through a deciduous phase in which trembling aspen and tall shrubs, which may resprout after fire, dominate over the 15–50 years prior to closure of the black spruce canopy. Moss and lichen expansion across the forest floor follows similar timing, with moss cover reaching its maximum between 30 and 50 years, concurrent with canopy closure by black spruce and a reduction in deciduous litter. The deciduous phase appears to be related to interactive effects of fire severity and site drainage, as evidenced by the fact that sites that burn severely, or at a high frequency, have the highest abundance of aspen and willow species. At the landscape level, both the severity and frequency of fire appear to be related to soil drainage. Although patterns of species dominance over post-fire succession have been described for Alaskan black spruce stands, there are few published measurements of productivity and biomass after fire. In conjunction with a chronosequence study of soil C dynamics, O'Neill and others used a mass balance model to conclude that C inputs balanced C losses 7–15 years after fire. Yarie and Billings used forest inventory data from stands across Alaska to estimate generalized biomass accumulation curves for black spruce green timber, showing that biomass accumulation peaked between 75 and 150 years. Simulation modeling of ecosystem C dynamics over post-fire succession suggests that C balance is most sensitive to N fixation, moss accumulation, organic layer depth, soil drainage, and fire severity.

Finally, there are several comprehensive studies of post-fire succession in Central and Eastern Canada, but the trees in these sites inhabit different soil drainage and temperature regimes than their Alaskan relatives, potentially resulting in different rates of ecosystem C dynamics. The goals of this study were to describe the changes in community structure, above ground net primary productivity, and biomass that occur over post-fire succession in the upland black spruce forests of Interior Alaska. We present measurements that span two different time scales: recovery 1–4 years after fire and recovery over the entire successional cycle. For the former, we followed vegetation recovery for 4 years after the 1999 Donnelly Flats fire near Delta Junction, Alaska. We used a chronosequence approach for the latter by selecting two sequences of sites in the region that varied primarily in time since fire: a mesic sequence on moderately well-drained soil with permafrost and a dry sequence located on well-drained soils without permafrost. These sequences represent transitions in environmental factors that might occur with climate warming, including loss of permafrost and subsequent increases in soil drainage.

This study was conducted in the Donnelly Flats area near Delta Junction in Interior Alaska, in seven upland sites that were previously dominated by black spruce. All sites were located within a 100-km2 area on gently sloped alluvial flats that range from moderately well-drained soils dominated by permafrost to well-drained soils where permafrost was largely absent. Our study included three sites on well-drained soils that burned in stand-killing wildfires in 1999, 1987, and approximately 1921, hereafter the dry chronosequence, and four sites on moderately well-drained soils that burned in 1999, 1994, 1956, and approximately 1886, hereafter the mesic chronosequence. Time since last fire was determined by historical record in the younger sites and by tree ring analyses in the older sites. Some or all of these sites have been used to assess the effects of fire on soil C storage and emissions, soil chemistry, hydrogen fluxes, fungal community composition and dynamics, seasonal CO2 and 18O–CO2 fluxes, and energy exchange. Within each chronosequence, sites were chosen to have similar state factors other than time. Climate: Micrometeorological data collected in the 1999, 1987, and 1921 dry sites and the 1994, 1999, and 1886 mesic sites support the idea that sites in both chronosequences experienced a similar climatic regime. The regional climate is cold and dry, with an annual mean surface air temperature of −2.1°C during the 1970–2000 period. Over this same period, mean temperatures in January and July were −20°C and 16.0°C, respectively, and mean annual precipitation was 290 mm. Approximately 65% of precipitation fell during June, July, and August. Potential biota: Although all stands were currently or historically dominated by black spruce and were in close enough proximity that they belong to the same regional pool of potentially colonizing organisms, the understory vegetation and ground cover varied with soil drainage and stand age.

The oldest dry stand was a lichen woodland, with ground cover dominance split between feather moss and lichens. Vaccinium uliginosum and V. vitis-idaea were the most abundant understory species, with deciduous shrubs and trees, forbs, and graminoids present but at low abundance. Many of the same species resprouted or recruited after fire in the 1999 dry site and dominated the understory in the 1987 dry site. Species characteristic of well-drained ecosystems that were present in all dry chronosequence sites and absent from the mesic sites were the grass Festuca altaica and the evergreen shrub Arctostaphylos uva-ursi. These species were present, however, on trails and roadsides around the mesic sites. The oldest mesic stand had continuous feathermoss ground cover and a high abundance of Vaccinium spp. Feathermoss occupied almost the entire ground surface in the 1956 and 1886 mesic sites. In the 1994 mesic site it persisted in patches that appeared to have escaped burning. Vascular nomenclature follows Hultén and non-vascular nomenclature follows Vitt and others. Relief: Sites in both chronosequences were within a 100-km2 area with little variation in slope or topography. Parent material: Soils along both chronosequences were mainly derived from the Donnelly moraine and windblown loess and have been described in detail elsewhere. Differences in drainage between the chronosequences are thought to be related to differences in water table depth and texture. Although great care was taken to control state factors within and between chronosequences, it was difficult to fully constrain the effects of past fires on productivity or biomass pools. In the 1999, 1994, and 1987 sites, fires were stand replacing. In the 1956 mesic site, the relatively small range of tree sizes suggests a single cohort of black spruce. In the mature 1886 mesic and 1921 dry sites, where tree sizes are quite variable, the number of trees sampled for age was not large enough to determine whether stands are comprised of a single cohort. At the landscape scale, the severity and frequency of fire are likely to be related to soil drainage. At the site level, however, stochastic factors such as weather conditions, time since last fire, and neighboring vegetation can also affect fire severity. Post-fire vegetation recovery is similarly affected by stochastic processes such as timing of fire in relation to both vegetative and reproductive phenology, proximity of seed source, and the effects of past and present climate conditions on demographic processes. Finally, we caution the reader that this is an observational study; we depend on the assumptions of the chronosequence approach to make inferences about time.

Above ground biomass of vascular plants, mosses, and lichens was measured across all sites by destructive harvest in July 2001, at approximately peak biomass. To more closely examine the dynamics of regrowth in the first several years after fire, biomass was also measured in the 1999 dry site 2 months after the fire as well as mid-summer in 2000–2002; it was also measured in 2000, 2001, and 2002 in the mature dry site for comparison. Trees less than 1.37 m in height that were excluded from the inventory described above were included in these harvests. In harvests of the 1999 dry site, we determined whether each species was a re-sprouter by assessing the presence of charred stems or large rhizomes.
We also monitored species or generic richness on an annual basis in these sites by recording the presence of all species within a 144-m2 plot surrounding the 1-m2 harvest blocks. We did not survey species richness in the other sites. In each site, above ground biomass was clipped from either 6 or 12 randomly located 1-m2 quadrats. Organic depth was measured at the four corners of each quadrat and averaged. In the mesic chronosequence sites and the 1987 dry site, above ground biomass of vascular species was clipped from six 1-m2 quadrats randomly located along two 100-m-long permanent transects. Mosses and lichens were collected from a 400-cm2 organic soil plug sawed from a randomly selected corner of the 1-m2 quadrat following vascular plant clipping. In the 1987 dry site and the 1994 mesic site, we also harvested tall shrubs in a 4-m2 quadrat surrounding the 1-m2 quadrat to account for their larger stature. Vegetation was harvested similarly in the 1999 and 1921 dry sites, except that 12 quadrats were harvested. Samples were returned to the lab and sorted into species and tissues within 1 day of harvest. Each vascular species was separated into several tissue categories, including current year and previous year leaves, current year and previous year stems, and fruits or inflorescences, following methods modified from Shaver and Chapin and Chapin and others.

The respective data are still too scarce to be integrated with global datasets.

Individual-level daily egg-laying data for three fruit fly species were used for the analysis: two tephritid fruit fly species, the Mediterranean fruit fly, commonly known as the Medfly, and the Mexican fruit fly, commonly known as the Mexfly, and the vinegar fly, Drosophila melanogaster. Husbandry details for each species are described in the above-cited papers. In brief, rearing conditions were 25–27 °C and 50–75% RH, with 12:12 L:D for the two tephritid species and 10:14 L:D for D. melanogaster. Whereas the two tephritid species were fed a 3:1 sugar–yeast hydrolysate diet, D. melanogaster females were fed a standard agar-gelled Drosophila food medium. Eggs were collected from mesh at one end of the individual cages for the tephritid females and from the food medium for D. melanogaster. These species were chosen primarily for the availability of databases on individual-level lifetime egg laying. However, they were also chosen because they allowed for three levels of comparison to test the robustness of our discoveries: between two dipteran families, between two tephritid genera, and among three different species.

Reproductive data were excluded for the first 10 days of adult life for each of the species, a period during which all individuals matured and developed their first eggs. The remaining data for each fly were then parsed into one of two segment categories: terminal segments, consisting of the sequence of daily reproduction from 11 days before death to and including the day of death (there were 873, 1,071 and 425 terminal segments derived from individual female Medflies, Mexflies and D. melanogaster, respectively); and midlife segments of the same length. The midlife segments were created from contiguous 11-day series of individual flies that started after maturation and ended 11 days before the fly died.

These selection criteria thus excluded individual flies that lived less than 32 days. The segmentation process was repeated until the number of remaining days before the terminal segment was less than 11; the remaining days were ignored. The 11-day egg-laying sequences were chosen primarily because preliminary investigations revealed that 11 days was the shortest period of egg laying containing patterns that yielded consistently high performance metrics across all three species. This period was also chosen because longer egg-laying periods would have substantially reduced the number of segments in each of the databases. We computed two metrics from all of the 11-day egg-laying sequences for all three species and for both the terminal and midlife segments: (1) total eggs, a metric chosen based on the observation that most flies laid fewer eggs at the end of life; and (2) egg-laying ratio (ER), a metric based on the observation that the relative rate of egg laying in most flies decreased at the end of life. The ER was computed as the ratio of the number of eggs produced from 11 to 6 days prior to death to the number of eggs produced from 5 to 0 days prior to death. This ratio specified both a direction and a magnitude of change as an individual fly approached death. Egg ratio was assigned a value of ER = 0 if no eggs were laid in either time interval and a value of ER = 5.0 if eggs were laid in the first interval (11 to 6 days before death) but none in the last (5 to 0 days before death), a pattern that suggests a rapid decline of egg laying to zero but that results in a zero in the denominator. This value was chosen because it was at the mid-range of the highest ER values observed when eggs were laid in both intervals; it was high enough to serve as a major change metric between the two subsegments but not so high as to be the overriding driver of the statistical outcomes.
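The two segment metrics can be made concrete with a short sketch. This is an illustrative Python implementation rather than the authors' code; the function name and the exact split of the window into the two intervals follow the description above and are assumptions where the text is ambiguous.

```python
def segment_metrics(daily_eggs):
    """Compute total eggs and egg-laying ratio (ER) for one segment.

    `daily_eggs` is the daily egg-count sequence for a terminal or midlife
    segment, ordered in time so that the final entry is the day of death
    (terminal segments) or the last day of the window (midlife segments).
    """
    total_eggs = sum(daily_eggs)
    early = sum(daily_eggs[:-6])   # eggs laid 11 to 6 days before the end of the segment
    late = sum(daily_eggs[-6:])    # eggs laid 5 to 0 days before the end of the segment
    if early == 0 and late == 0:
        er = 0.0                   # no eggs in either interval
    elif late == 0:
        er = 5.0                   # eggs early but none late: assigned mid-range value
    else:
        er = early / late
    return total_eggs, er

# Example: a fly whose egg laying collapses toward the end of the segment
total, er = segment_metrics([40, 35, 30, 28, 20, 15, 8, 5, 2, 0, 0, 0])
```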

Fig. 1 shows the egg-laying patterns of selected individual Medfly females at 10 different life table deciles. With the exception of female #5, the egg-laying patterns during the terminal segments were consistent with the hypothesis that the rate of egg laying near the end of each of their lives was both low and decreasing. However, the egg-laying patterns that characterize the end of a female's life are also sometimes observed at times when she is not approaching death. Indeed, there are a number of 11-day midlife sequences of some flies that are indistinguishable from these same patterns in the terminal phases. For example, egg-laying rates decrease in fly #5 from days 20 to 30 and in fly #8 from days 30 to 40. Although these decreasing egg-laying patterns usually predict impending death, both of these flies lived another 11 days beyond these ages. Similar egg-laying trends are also evident during the midlife of fly #9. Fly #10 produced very few eggs for a 40-day period from ages 20 through 60 days. Thus this visual inspection of egg laying in 10 different individuals reveals the statistical challenge of classifying egg-laying sequences as either terminal or midlife.

Although computing all three of these parameters is straightforward in both chronological and thanatological age, the values for and interpretations of GRR and T differ between the two age categories. Consider the following hypothetical case to clarify the differences. Suppose that, in chronological time, the age-specific egg laying of four female fruit flies was identical for the ages each was alive, with reproductive peaks of 50 eggs/day on day 20, but that they each die at different ages 20 days apart, i.e., at ages 20, 40, 60 and 80 days. When their reproductive schedules are considered at thanatological ages, which is to say relative to their age of death rather than their age since birth, their reproductive peaks correspond to thanatological ages 0, 20, 40 and 60 days. Although R0 remains the same, since every female still produces the same lifetime number of eggs, the values of both GRR and T will in the vast majority of cases differ because the timing of reproduction is relative to age of death rather than to age since birth.

Whereas day 20 was the average age of peak reproduction in chronological time for the hypothetical example, it is day 30 in thanatological time [i.e., (0 + 20 + 40 + 60)/4].

A 2×2 contingency table that classifies predictions is shown in Fig. 2, where the numbers along the major diagonal represent the correct decisions made and the numbers in the other diagonal represent the errors. If an instance is positive and it is classified as positive, it is counted as a true positive; if it is classified as negative, it is a false negative. If an instance is negative and it is classified as negative, it is counted as a true negative; if it is classified as positive, it is a false positive. This contingency table forms the basis for a number of common metrics given below.

Event-history reproductive charts plotted in both chronological and thanatological time for all three fruit fly species are presented in Fig. 3. Several aspects of these graphs merit comment. First, the chronological plots of egg-laying patterns for all species reveal the familiar progression starting at eclosion, from the pre-reproductive maturation period, followed by a period of high reproduction with relatively high levels of intra-individual and inter-species variability. This period, in turn, is followed by a period of tapering off, the length of which depends largely on an individual's lifespan. Low levels of egg laying are evident at older ages in all species but are most striking in the oldest D. melanogaster. Second, the event-history plots in thanatological time reveal visually the repositioning of reproduction that occurs when the schedule is normalized with respect to death rather than birth. This shift is especially evident in the diagonal band of high reproduction that tracks to the left of the cohort survival curve. In these cases the most advanced ages in thanatological time correspond to the very youngest, and thus most fecund, ages in chronological time. Third, patterns of egg laying near death, as seen in all three species and for both time frames, differ between short-lived and long-lived individuals. This is an outcome of the differences in the underlying "causes" of death at young and old ages. Increasing frailty due to old age is the most likely "cause" of death in the longest-lived individuals, which accounts for the progressive decrease in egg production at older ages. However, increasing frailty due to aging is an unlikely "cause" of death for flies that die young, at ages when they are at or near their peak in egg production. Thus no single egg-laying pattern or combination of patterns will likely ever apply to all flies regardless of age or cause of death.

Comparisons of the average reproductive rates and timing for all three species, plotted with respect to both chronological and thanatological ages, are given in the different panels shown in Fig. 4. Because the number of eggs laid by the average female in her lifetime is the same regardless of whether the eggs are summed from birth to death or from death to birth, the net reproductive rate, R0, will be the same whether it is considered in chronological or thanatological time. But, as noted in the Methods section, this is not the case for gross reproductive rate computations.
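To make the chronological versus thanatological bookkeeping concrete, the following sketch works through the hypothetical four-fly example. The triangular shape of the shared fecundity schedule and the cohort-style life-table formulas are illustrative assumptions (only the peak of 50 eggs/day at day 20 and the deaths at 20, 40, 60 and 80 days are specified above); the point is simply that R0 is identical in the two frames while GRR and T are not.

```python
import numpy as np

# Shared daily fecundity schedule m(x): peaks at 50 eggs/day on day 20.
# The triangular shape is an assumed illustration; only the peak is given above.
ages = np.arange(81)
m = np.where(ages <= 20, 50 * ages / 20, np.clip(50 * (1 - (ages - 20) / 60), 0, None))
lifespans = [20, 40, 60, 80]

def life_table(fecundity):
    """R0, GRR and T from per-fly, per-age fecundity rows (NaN = not alive)."""
    l_x = (~np.isnan(fecundity)).mean(axis=0)   # proportion of the cohort 'alive' at each age
    m_x = np.nanmean(fecundity, axis=0)         # mean eggs per living female at each age
    r0 = np.sum(l_x * m_x)                      # net reproductive rate
    grr = np.sum(m_x)                           # gross reproductive rate
    t = np.sum(ages * l_x * m_x) / r0           # mean age of parenthood
    return r0, grr, t

# Chronological frame: a fly with lifespan L lays m(x) for ages x <= L.
chron = np.array([[m[x] if x <= L else np.nan for x in ages] for L in lifespans])

# Thanatological frame: age y counts back from death, so the fly lays m(L - y) for y <= L.
thanat = np.array([[m[L - y] if y <= L else np.nan for y in ages] for L in lifespans])

print(life_table(chron))   # (R0, GRR, T) with ages measured from birth
print(life_table(thanat))  # same R0, but different GRR and T with ages measured back from death
```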

For example, the 50% greater value of GRR for thanatological age relative to the value of this metric for chronological age in D. melanogaster is the result of a subset of extremely long-lived individuals who both matured early and produced many eggs at young ages. When re-plotted, these young ages in chronological time represent the "old" ages in thanatological time. Thus individuals who are both long-lived and highly fecund at young chronological ages represent a large fraction of the small number at the tail end of the "death" cohort. Differences in the values of the mean age of parenthood, T, across species for chronological versus thanatological ages revealed that it was 8 days closer to birth than to death in D. melanogaster, slightly over 4 days closer to death than to birth in the Mexfly, and nearly equidistant from birth and death in the Medfly.

The means and frequency distributions of the two independent variables used in the regression model for each of the fly species are shown in the series of plots contained in Fig. 5. These graphs anticipate the outcome of the modelling results by revealing the differences between the metrics in the midlife segments relative to the terminal segments. There were striking differences in the metrics for the two categories of segments in D. melanogaster, with nearly 5-fold fewer eggs and a 3-fold greater average egg ratio in the terminal segments. The signs of the differences in the means and overall distributions for the two tephritid species were similar, but the magnitudes of the differences were much smaller. We thus anticipated more favorable performance metrics for distinguishing between terminal and midlife egg-laying patterns in D. melanogaster than in the two tephritid species.

The logistic regression model yields three general results. First, the model's overall performance supports the concept that the patterns of total eggs and egg ratio as individual flies approached death are, in the majority of cases, distinctly different from those patterns over an 11-day sequence in the middle of their lives. This was evident in the performance metric of fraction correct: the FC-value for all species exceeded 0.64 using all data and exceeded 0.73 using only the segments in which flies laid 25 eggs or more. Second, with a minor exception for several of the D. melanogaster metrics, the performance of the regression model was more favorable when applied to the censored data than to the uncensored data. The differences arose because there were 11-day egg-laying sequences in which few or no eggs were produced, or in which egg laying was declining, during midlife. These are the patterns of the independent variables that were associated with, and thus predictive of, the terminal segments. These midlife patterns occurred more often in the two tephritid species than in D. melanogaster, which helps explain the higher performance of the regression model in the latter species. Third, the performance metrics for D. melanogaster were extraordinarily high, both relative to those for the two tephritids and in absolute terms.
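A minimal sketch of the kind of logistic regression classifier described here, using total eggs and egg ratio as the two predictors and fraction correct (FC) as the performance metric. The scikit-learn API, the synthetic stand-in data, and the train/test split are assumptions for illustration; the source's actual fitting procedure and data are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data (NOT the paper's data): one row per 11-day segment,
# columns = [total_eggs, egg_ratio]; label 1 = terminal segment, 0 = midlife.
midlife = np.column_stack([rng.poisson(300, 500), rng.gamma(2.0, 0.5, 500)])
terminal = np.column_stack([rng.poisson(120, 500), rng.gamma(4.0, 0.6, 500)])
X = np.vstack([midlife, terminal])
y = np.repeat([0, 1], 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

fc = (model.predict(X_te) == y_te).mean()        # fraction correct (FC) on held-out segments
print(f"fraction correct: {fc:.2f}")

# Censoring analogous to the text: evaluate only segments in which >= 25 eggs were laid
mask = X_te[:, 0] >= 25
fc_censored = (model.predict(X_te[mask]) == y_te[mask]).mean()
print(f"fraction correct (>= 25 eggs): {fc_censored:.2f}")
```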

The medium was also supplemented with sucrose and defibrinated sheep blood.

These weak correlations, combined with the rarefaction curve, meant that we needed to rarefy samples to make proper comparisons without jeopardizing the accuracy of sample diversity estimates or losing information on OTUs, particularly rarer ones, in the data set. Hence, we chose a depth of 30,000 based on the rarefaction curve so that the number of OTUs in most samples could be accurately represented.

After rarefaction, we compared the number of OTUs to that before rarefaction. As in the preliminary experiments, rarefying to an even depth reduced the absolute numbers of OTUs across all samples while retaining the relative trends. The similarity between the OTU distributions before and after rarefaction speaks to the efficacy of this technique for standardizing sample sizes while avoiding loss of useful comparative information about community structure. To look at diversity more critically from a different angle, we also examined the inverse Simpson's index values before and after rarefaction. In this case, rarefying to 30,000 had somewhat more pronounced effects on this index than on the number of OTUs. For all samples except pure E. coli, rarefaction shrank the range of the inverse Simpson's index values, making samples look more similar to one another. For instance, the culture with a value of more than 12 before rarefaction, compared to the rest of the cultures with values less than 5, saw a reduction to approximately 4, compared to the rest of the cultures, which had inverse Simpson's values of 3 or lower.
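A conceptual sketch of the two operations discussed here: computing the inverse Simpson's index and rarefying a sample to an even depth of 30,000 reads by subsampling without replacement. This is not the pipeline actually used (dedicated tools perform rarefaction internally); the synthetic sample and function names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def inverse_simpson(counts):
    """Inverse Simpson's index (effective number of OTUs) from raw OTU counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return 1.0 / np.sum(p ** 2)

def rarefy(counts, depth=30000):
    """Subsample one sample's OTU counts to an even depth, without replacement."""
    counts = np.asarray(counts, dtype=int)
    reads = np.repeat(np.arange(counts.size), counts)   # one entry per read, labelled by OTU
    keep = rng.choice(reads, size=depth, replace=False)  # assumes total reads >= depth
    return np.bincount(keep, minlength=counts.size)

# Synthetic example sample (not real data): one dominant OTU plus many rare ones
sample = np.concatenate([[45000, 12000, 8000], rng.poisson(40, 50)])
print(inverse_simpson(sample), inverse_simpson(rarefy(sample)))
```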

Nevertheless, the relative diversity rankings across samples were retained, and the difference between that particularly diverse culture sample and the rest of the cultures was still pronounced enough after rarefaction that this statistical procedure remained valid for the temporal cultures. Principal Coordinate Analysis of all samples with spike-ins shows distinct clusters of cultures, controls, and plaque. As in the preliminary experiments, rarefaction does not visibly change the clustering patterns except for a few cultures with longer incubation times. Bar plots of read counts for negative controls show that the Escherichia-Shigella OTU, the major OTU in the E. coli spike-ins, dominates the negative controls for Host 3 until the 168-hour incubation time. For the Host 1 and Host 2 controls, this OTU is not consistently dominant as incubation time increases, and the lack of consistent dominance is shown clearly in bar plots of relative abundances of controls. In this figure, we see that Escherichia-Shigella OTU0001 occupies more than 90% of the read counts in Host 1 for only the 12-hour controls and in Host 2 for the 12-hour controls, one 24-hour control, and one 48-hour control. After 48 hours, the relative abundance of the Escherichia-Shigella OTU does not consistently decrease as incubation times increase, and the relative abundances of the spike-in OTU differ across the two wells from the same incubation time, especially for the 168-hour controls in all hosts. These results are somewhat difficult to interpret because of the uneven sequencing depths of the controls, as well as the experimental setup in which we sampled from distinct sections of the well plates instead of the same wells.

However, one observation is clear: the contamination is still entirely internal, i.e., sourced from the cultures on the same plate. Contaminated controls in all hosts are dominated by Streptococcus OTUs until Veillonella OTUs take over at 168 hours, and, as we will observe later, this succession occurs in the cultures as well. Together, these results validate that our methodology prevents external contamination. The relative abundances of cultures and plaque, with spike-ins, are shown in Figure 21. The plaque samples contain very little biomass, as expected, while cultures from all hosts contained high biomass from oral bacteria, as evidenced by the dominance of non-Escherichia-Shigella OTUs. Dominance of the oral bacteria, however, is not always consistent between the two wells with the same incubation time. This is especially visible in the 48- and 96-hour cultures from Hosts 2 and 3, where Wells 1 and 2 differ in terms of relative abundances of oral bacterial OTUs. Oral bacterial dominance also does not always increase with increasing incubation time. In fact, there appears to have been a decrease in oral bacterial biomass, relative to the E. coli spike-in, from 48 hours to 96 hours for Host 3. Once again, these results are difficult to interpret because the wells sampled at these time intervals were different wells. The inconsistency in the relative abundances of the oral bacterial OTUs could easily have arisen from differential growth rates in different wells. Such a possibility seems especially likely in light of the substantial differences in the read depths of the cultures. However, one consistent trend did surface from the relative abundance data: cultures from all three hosts show a predominance of Streptococcus OTUs for both the 12- and 24-hour cultures, followed by the rise of Veillonella OTUs, which is in turn followed by a shift toward Prevotella OTUs.

Host 1 relative abundances show the earliest visible appearance of the Prevotella OTUs at the 96-hour mark, while Hosts 2 and 3 only show the presence of Prevotella OTUs in the 168-hour cultures. Such changes in community composition indicate a shared succession of colonization, much like the sequential colonization of the human oral cavity in vivo, where members of the Streptococcus genus lay the groundwork and those of the Veillonella and Prevotella genera follow as either middle or late colonizers. It is also interesting to note that, regardless of initial plaque composition and even without continuous inoculation of the in vitro cultures by plaque, the communities in these experiments evolved to include OTUs from the Streptococcus, Veillonella, and then the Prevotella taxa. It is possible, then, that members of the Veillonella and Prevotella genera stay dormant and/or protected until they can proliferate in the environment created by Streptococcus OTUs, though this succession would be best tested by co-culturing known strains. We can observe an interesting similarity between the relative abundances of the negative controls and those of the cultures/plaque samples. The community composition changes in the contaminated controls track the changes in the cultures, albeit at a slower rate; controls in these experiments did not reach a mature enough stage to include the Prevotella OTUs. It is likely that because the controls received the inoculum later during the incubation period, at some point between 12 and 24 hours rather than at hour 0, their development appeared delayed compared to the cultures. After we examined the biomass of the controls and cultures, using the E. coli spike-in as a qualitative standard, we removed the major spike-in OTUs in order to assess sample similarities. We performed PCoA on the cultures and plaque samples without spike-in OTUs, and the results clearly show three clusters of samples regardless of rarefaction status. Plaque samples cluster loosely in a group separate from the cultures, as expected given the inherently selective nature of in vitro culturing procedures. The large spread in the plaque samples is also expected because these samples originated from different hosts, though this cluster did not show three finer subclusters. Unlike the plaque samples, cultures cluster into two distinct groups, separated by the length of incubation time. The 12- and 24-hour cultures cluster somewhat tightly together, while the 96- and 168-hour cultures cluster tightly together, with some spread of the 48-hour cultures in between. The two clusters are also situated somewhat symmetrically across the loosely vertical line formed by the plaque samples. The divide between cultures of different incubation times suggests a compositional shift in the cultures for all three hosts starting around 48 hours, as we observed in the relative abundances of the cultures, albeit with less certainty. Another feature of some interest in the PCoA plot is the amount of variation accounted for by the two coordinates. In this case, the differences among the cultures account for 55 to 57% of the total variation in the samples, while differences between cultures and plaque account for approximately 22–23% of the variation. The difference between Streptococcus dominance and Veillonella/Prevotella dominance is clearly the largest contributor to inter-sample variation.
However, the two coordinates in the PCoA only account for about 80% of the total variation, which raises the question of what constitutes the other 20%.

To help answer this question, we constructed a dot plot of OTUs that make up at least 1% of the relative abundance in rarefied samples and performed Principal Component Analysis on the relative abundances. From the dot plot, we found that the most abundant OTUs in plaque samples came from 7 genus-level taxa, two of which include members known for early and middle colonization of the plaque community and one of which contains members that have been shown to co-aggregate with organisms involved in all stages of colonization. In the cultures, we observe the trends present in the relative abundance bar plots and the PCoA plots: compositions of the 12- and 24-hour cultures are dominated by Streptococcus OTUs, then transition to Veillonella OTUs in the 48-, 96-, and 168-hour cultures, with the simultaneous rise of Prevotella OTUs at 96 and 168 hours. Of the Streptococcus and Veillonella OTUs, a single OTU from each genus dominates at certain points in time while the other OTUs tend to persist at lower abundances. On the other hand, only one Prevotella OTU plays a role in the temporal cultures. Interestingly, neither this Prevotella OTU nor a Fusobacterium OTU appears much in the cultures until 168 hours of incubation. In addition, a Megasphaera OTU also begins appearing between 48 and 96 hours of incubation. Other OTUs with a somewhat substantial presence in plaque samples, Pseudomonas OTU0027, Corynebacterium OTU0017, Actinobacillus OTU0019, and Acinetobacter OTU0030, appear to have been selected out of the temporal cultures, as they do not reach abundances higher than 1%, and all except the Actinobacillus OTU disappear between 96 and 168 hours of incubation. Based on previous research on the order of succession in human oral microbiome formation, we see that these temporal cultures were potentially transitioning into later or late colonization stages at 168 hours, with the rise in relative abundance of the Fusobacterium OTU. Here, extending the incubation time beyond 168 hours and/or regular re-inoculation with host-specific plaque would help greatly in probing whether such a transition occurs in vitro. For the Principal Component Analysis, we first performed it on untransformed relative abundances of plaque and culture samples. The results show distinct clusters much like those in the PCoA, with plaque samples situated between cultures of different incubation times. The underlying factors that contribute to differential clustering seem to originate from a division between Streptococcus OTU0002 and Veillonella OTU0003. As expected based on the dot plot, Pseudomonas OTU0027 contributes to the difference between plaque samples and culture samples, though the other prominent plaque OTUs from the Corynebacterium, Actinobacillus, and Acinetobacter genera surprisingly do not contribute as much as the Pseudomonas OTU does. When colored by host, the cultures display no visibly distinct grouping, though duplicate plaque samples from individual hosts stay quite close to each other. We then performed a centered-log-ratio (CLR) transformation on the relative abundances of plaque and culture samples and performed PCA again on the transformed data. CLR is commonly used to take the simplex space of the relative abundance data of a sample, the nature of any data that sum to a value of 1, or 100, for any individual sample, into real Euclidean space, hence making valid any distance metrics and statistical methods that can be applied to data in Euclidean space.
Because PCA typically needs to operate in a real Euclidean space to avoid artifacts and spurious patterns, CLR allows us to perform PCA on the data set in a much more statistically valid manner. Mathematically, the transformation takes the logarithm of the ratio of each component of a sample to the geometric mean of that sample, as shown in Eq. 4.

Results of PCA on the CLR-transformed relative abundances show that plaque and cultures cluster in groups similar to those in the PCoA and in the PCA of untransformed relative abundances. As shown in the analyses above, the differences across hosts do not play a large role in accounting for the variation in the samples, but incubation time does.
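A minimal sketch of the CLR transformation described above (Eq. 4) followed by PCA, assuming a samples-by-OTUs matrix of relative abundances. The pseudocount used to handle zero abundances, the synthetic input, and the use of scikit-learn's PCA are assumptions for illustration; the original analysis may have handled these details differently.

```python
import numpy as np
from sklearn.decomposition import PCA

def clr(rel_abund, pseudocount=1e-6):
    """Centered log-ratio transform, applied row-wise (samples x OTUs).

    Each value becomes log(x_i / g(x)), where g(x) is the geometric mean of
    the sample (Eq. 4).  The pseudocount for zero abundances is an assumption.
    """
    x = np.asarray(rel_abund, dtype=float) + pseudocount
    log_x = np.log(x)
    return log_x - log_x.mean(axis=1, keepdims=True)   # subtract log of the geometric mean

# Synthetic example: 12 samples x 30 OTUs, each row summing to 1
rel_abund = np.random.default_rng(0).dirichlet(np.ones(30), size=12)
scores = PCA(n_components=2).fit_transform(clr(rel_abund))   # first two principal components
```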

Removing homopolymers and chimeras is especially important in this step.

The Nanopore and PacBio methods also tend to have high error rates, the lower limit being approximately 4.9% for insertion errors on the Nanopore platforms and the upper limit being 11% for insertions and/or deletions on the PacBio platforms. The third platform, Ion Torrent, outputs reads of 200 bp and 400 bp on two different machines at a relatively fast rate, but is prone to indel errors, particularly in homopolymers longer than 6 bp. The read lengths and historically high error rates of these three platforms make them unsuitable for the study of microbiome composition using short genetic markers, though recent literature has shown that an optimized bioinformatics pipeline on the improved PacBio platform can achieve an error rate of less than 0.01% for full-length 16S rRNA sequences; the read lengths of the PacBio platform nonetheless remain suboptimal for experiments involving comparisons of large microbial communities. The fourth HTS platform comes from Illumina and has been indispensable for the study of microbiomes because of its high sequencing depth, speed of operation, cost efficiency, and low error rates. Illumina technology is based on clonal amplification on a glass surface and detection by cyclic reversible termination. Bases are detected by a charge-coupled device camera, and the fluorescent label incorporated with the incoming base can be easily cleaved after imaging. The most common error on Illumina platforms is substitution, and the error rate had already been reduced to less than 1% more than 10 years ago.

The MiSeq platform has been shown to be particularly suitable for studies of microbiome composition because it has low costs and short run times while still taking full advantage of the species-level differences in the variable regions of the bacterial 16S rRNA component of the ribosomal 30S subunit. The species-level specificity of the 16S rRNA gene enables the classification of organisms into operational taxonomic units (OTUs), which serve as effective proxies for taxonomic levels. Great efforts have been made in determining the variable regions most suitable for distinguishing among microorganisms, and in creating the most efficient universal primers, for sequencing bacterial 16S rRNA. As universal primers and various error-correction software packages have been developed and refined, research on the human microbiome has experienced exponential growth. Previously undetectable, undifferentiable, and uncultivatable strains have been identified and classified, now aggregated into the particularly notable effort of the Human Microbiome Project, which has culminated in extensive records and repositories of the memberships, distributions, and interactions of human microbiota, including those of the oral microbiota. The exponentially growing research on the human microbiome has cast doubt on the idea that a single organism or factor is responsible for the manifestation of a disease. The ability to compare community compositions between healthy and diseased hosts marked the beginning of a deepening understanding of human wellness, in which the concept has emerged that some diseases may originate from an imbalance of microbial communities rather than from the actions of a single organism or group of organisms.

This idea has taken hold in therapeutic approaches, most notably in fecal transplantations to treat obesity and some forms of inflammatory bowel disease, though treatment of some other diseases, such as Crohn's Disease, by fecal transplantation has resulted in mild side effects. However, to date, no oral microbiome transplantation has been attempted, despite ample evidence linking disrupted oral microbiota to many systemic diseases. There seem to be some efforts in this direction, shown by a very recent publication detailing a protocol for developing and characterizing an oral microbiome transplant, but no results as of December 2021. The length of time between the initial studies of the oral microbiome and the potential application of oral microbiome transplantation, or between the efforts toward fecal microbiome transplantation and the current efforts toward oral microbiome transplantation, could be explained by a need for pilot studies or for observations of long-term effects. An in vitro model of the dental plaque microbiome and/or the oral microbiome that can be readily generated, easily modified, and rapidly tested would contribute significantly to research on the therapeutic potential of the oral microbiome transplantation approach.

As HTS using 16S rRNA markers rose to prominence as the dominant approach for identifying microorganisms and characterizing bacterial community compositions, so did the need for assessment, quality control, and optimization of the sequencing process. From the first step of the sequencing process, the amplification step, there is a need for effective assessment of the quality of universal primers. Designing universal primers that are simultaneously specific for a variable region of the bacterial 16S rRNA and able to capture different bacterial species and strains is no small feat: it has been shown that even a single mismatch between the primer and template can lead to a thousand-fold misrepresentation of the abundance of the sequence.

These differences in primer efficiency and specificity can result in distortions in the apparent relative abundances of a community, and much effort has been devoted to optimizing primer pairs that capture the widest range of organisms and exhibit the highest specificity and reproducibility. Errors in other steps of the HTS process, such as PCR amplification and incomplete reactions during sequencing, also affect the quality of the sequencing data. Raw sequences therefore need to be checked and filtered, and various tools have been constructed for these purposes. Some tools are built into software packages such as mothur, QIIME, and DADA2; others were created as standalone modules that can be inserted into workflows. In addition to checking the quality of the raw reads, investigators have adopted internal sequencing standards as part of routine procedure for quality control of each sequencing run. After preprocessing raw reads to retain only high-quality reads, identifying and classifying members of a community come next. A number of methods have been developed to this end, most of which fall into either phylotyping or OTU clustering. Phylotyping, which frequently uses the naïve Bayesian classifier, assigns reads into bins based on homology to reference sequences such as those in the Human Oral Microbiome Database; OTU clustering assigns reads into bins based on distances between reads, with percent similarity cutoffs. These two approaches can be used to complement each other, as phylotyping has difficulty treating unknown organisms or sequences that are incomplete in the databases, and OTU clustering can exhibit ambiguities that result from ill-defined percent similarities, especially when sequencing errors lead to spurious OTUs and overestimated diversity. The identification of organisms allows for comparisons of different organisms within a sample and between samples. Members and their abundances within a sample are collectively known as the "alpha diversity" of that sample, whereas the differences between memberships and abundances across two or more samples are collectively known as "beta diversity". Different indices have been developed to quantify these two types of diversity. Currently there exist, for both types of diversity, indices that only account for the absence and presence of members as well as indices that account for membership and its distribution. A number of alpha diversity indices have been particularly popular in microbiome studies, including Simpson's index, which represents the probability that two individuals randomly drawn from a sample belong to the same type, and the Shannon index, which quantifies the uncertainty in predicting the identity of the next individual drawn from a sample, based on the sample size and relative abundances. Simpson's index is frequently used in its reciprocal form, called the inverse Simpson's index, which represents the effective number of types. The inverse Simpson's index is more influenced by dominant OTUs while the Shannon index is more influenced by rare OTUs, so they are often used in conjunction to examine different aspects of diversity in a community. Interestingly, beta-diversity indices seem to be infrequently used in microbiome research. Instead, the community has taken to using variance and distance measures in the characterization of core microbiomes across different body sites and different hosts and in the comparison of microbiome compositions in healthy and diseased states.
A common way to assess beta diversity in microbiome research is to use inter-sample distance measurements. In these types of analyses, abundance data are sorted into matrices with samples as rows and species or OTUs as columns. Distance measures representing dissimilarities between pairs of samples are computed, and the resulting triangular matrix is used for ordination approaches such as Principal Coordinate Analysis (PCoA) and/or Principal Component Analysis (PCA).
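A sketch of how such an ordination could be computed: Bray-Curtis dissimilarities between samples followed by classical PCoA (metric multidimensional scaling) derived directly from the double-centred distance matrix. The synthetic count table is a placeholder, and dedicated microbiome packages provide equivalent routines; this is only meant to make the procedure concrete.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pcoa(dist_condensed, n_axes=2):
    """Classical PCoA (metric MDS) from a condensed distance matrix."""
    D = squareform(dist_condensed)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centred squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]             # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    coords = eigvecs[:, :n_axes] * np.sqrt(np.clip(eigvals[:n_axes], 0, None))
    explained = eigvals[:n_axes] / eigvals[eigvals > 0].sum()
    return coords, explained

# Synthetic abundance table: samples as rows, OTUs as columns
counts = np.random.default_rng(1).poisson(5, size=(10, 40))
bc = pdist(counts, metric="braycurtis")           # pairwise Bray-Curtis dissimilarities
coords, explained = pcoa(bc)
print(explained)                                  # fraction of variation on the first two axes
```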

A number of different distance measurements have been adopted for these ordination approaches. Distance measurements, like alpha diversity indices, include those that consider membership only, those that consider membership and abundance, and those that consider phylogenetic relatedness, though the distance measurements that account for membership only are not as commonly used. The most frequently used indices, Bray-Curtis and weighted UniFrac, account for abundance as well as the presence and absence of taxa, and UniFrac also considers phylogenetic relatedness. As for the implementation of these indices in ordination techniques, PCoA uses distance matrices to construct clusters of similar samples, and PCA uses distance matrices as well as matrix transformations to visualize sample similarities. Both techniques reduce the dimensionality represented by the large number of OTUs in microbiome datasets. In many cases, PCA and PCoA reduce the dimensionality to two or three dimensions for ease of visualization and to elucidate the major factors underlying inter-group and inter-sample similarities. The purpose of beta-diversity assessment and ordination is most often to delineate the relationships among samples, hosts, or other metadata groupings. Of course, more rigorous statistical testing can be performed with microbiome datasets. Currently popular approaches stem from multivariate statistics, as conventional statistics are based on count data and absolute values in Euclidean space rather than relative abundance data in simplex space, where abundances sum to 1 or some other constant. In simplex spaces, conventional statistical procedures such as the t-test and ANOVA can lead to high false discovery rates, in many cases because of the assumption that the underlying population distribution adopts a predefined shape. Efforts to circumvent these distributional assumptions have led to approaches such as ANOSIM (analysis of similarities), for which no underlying population distribution is assumed but the null hypothesis that no differences exist among samples or groupings can still be tested; ANCOM (analysis of composition of microbiomes), an approach that also does not rely on distributions and can be implemented in linear model frameworks; and PERMANOVA, an approach based on analysis of variance that is independent of underlying distributions as well as metric distances but can still partition variances based on any distance measure. As will be evident later, the data from this project are best analyzed with PCoA, PCA, and some limited use of ANOSIM.

Over the decades, there have been efforts to construct in vitro models of the human oral microbiome. Much of this effort has focused on generating laboratory conditions that most closely match those of in vivo environments. Like microorganisms in other environmental niches, human oral microorganisms form biofilms to increase their physical proximity to one another, allowing for inter-strain and inter-species cooperation for biomass accumulation and protection against environmental fluctuations. Hence, most oral microbiome models have been designed to promote biofilm formation. These models vary in device shape, substrate type, media composition, incubation time, and the species in the inoculum.
Some models pre-condition culture plates with an artificial pellicle, paying little heed to the exact characteristics of the substrate surface; others, in addition to the artificial pellicle, use substrates with surface properties similar to those of oral surfaces. Some models adopt continuous-flow devices or rotating devices to mimic the salivary shear forces in the host oral cavity; others forego this aspect of the oral cavity. Some models use host communities to inoculate the cultures; others use a number of laboratory strains to form the inoculum. Most models use media components intended for fastidious organisms (brain heart infusion, media constituted from various components that supply different amino acids, pig mucin as a major carbon source) and receive supplements such as vitamins and siderophores. Some models have complex designs that try to mimic the oral cavity while permitting non-invasive sampling and regular or continuous measurement; others seek the minimal equipment necessary for generating a model community. Despite the ample number of different models, not many longitudinal ones have been developed; those with incubation periods longer than 48 hours tend to use devices that supply a near-constant stream of nutrients and saliva, and little research has been done to generate a flow-device-free, fermenter-free model. Because of the lack of research in this area, we aimed to devote a considerable portion of our project to extending the 24-well cultivation method with a longitudinal component, while keeping the procedures maximally simple and retaining reproducibility.

Similar results were found for HT soybeans at the time of their introduction.

China will likely be one of the first countries in the world to commercialize GM rice. In the United States, the two most widely visible, potentially commercially viable transgenic rice cultivars are Roundup Ready® rice by Monsanto and LibertyLink® by Bayer CropScience. Both are HT varieties: the former is resistant to Roundup® and the latter to Liberty®, both nonselective herbicides able to control a broad spectrum of weeds. Glyphosate is currently registered for rice in California but not widely utilized, while glufosinate is not registered [California Department of Pesticide Regulation]. As such, it is unlikely that local weeds have developed a natural resistance to these chemicals, unlike, for example, bensulfuron methyl. In 1999, LibertyLink® rice cleared biosafety tests by USDA's Animal and Plant Health Inspection Service but is not commercially available at this time. The primary direct effects of HT transgenic-rice adoption on the cost structure of California rice growers are reductions in herbicide material and application costs and the likely increased cost of transgenic seed. An HT cultivar differs from conventional seed in that a particular gene has been inserted into the rice plant that renders the species relatively unharmed by a particular active chemical ingredient, thus allowing application of broad-spectrum herbicides directly to the entire planting area. This has the potential to simplify overall weed management strategies and to decrease both the number of active ingredients applied to a particular acreage and the number of applications of any one herbicide, thus decreasing weed-management costs.

Reduced chemical use provides the major cost saving for growers. Similarly, herbicide application costs per acre depend on the specific chemical involved and the means of application; typically, application by ground is 60 to 80 percent more expensive than aerial application. For this study, other pest-management practices and fertilizer applications are assumed not to change with adoption of HT rice. The cost of transgenic rice seed will be greater than that of conventional seed because companies that sell transgenic varieties typically charge a premium to recoup their research investment costs. Based on Roundup Ready® corn and soybeans as a reference point, the technology fee is approximately 30 to 60 percent of conventional seed costs per acre. Seed price premiums are in a similar range for Bt corn varieties. In addition to the technology fee, seed costs for transgenic rice will likely change as a result of the California Rice Certification Act of 2000 (CRCA), signed by Governor Gray Davis in September 2000. With the full support of the CRC, the CRCA provides the framework for a voluntary certification program run by the industry, offering assurances of varietal purity, area of origin, and certification of non-GM rice. A second, mandatory provision of the CRCA involves classification of rice varieties that have "characteristics of commercial impact," defined as "characteristics that may adversely affect the marketability of rice in the event of commingling with other rice and may include, but are not limited to, those characteristics that cannot be visually identified without the aid of specialized equipment or testing, those characteristics that create a significant economic impact in their removal from commingled rice, and those characteristics whose removal from commingled rice is infeasible." Under this legislation, any person selling seed deemed to have characteristics of commercial impact, which would include any transgenic cultivars, must pay an assessment "not to exceed five dollars per hundredweight."

This fee is currently assessed at $0.33 per cwt, with specific conditions for planting and handling divided into two tiers. In addition, the first handler of rice having these characteristics will pay an assessment of $0.10 per cwt. The $0.33 seed assessment is approximately 2.4 percent of average seed costs, while the $0.10 fee represents 1.5 percent of average output price. A portion of these assessments is likely to be passed to the grower, depending on the relative elasticities of supply and demand in the seed and milling markets. In addition to generating cost savings, cultivation of HT rice will affect revenues as well. Net returns will be positively correlated with transgenic yield improvements. HT crops are not engineered to increase yields; rather, they are designed to prevent yield losses arising from pest or weed infestation. As such, potential yield gains depend on the degree of the pest and/or weed problem and the efficacy of the HT treatment relative to the alternatives. Many adopters of transgenic corn, cotton, canola, and soybeans have experienced positive yield effects on the order of 0 to 20 percent. However, under more ideal growing conditions, a yield drag may occur if the cultivar exhibiting the genetic trait is not the highest-yielding variety or if the gene or gene-insertion process affects potential yields. Field tests of LibertyLink® in California have generally found a yield drag of between 5 and 10 percent relative to traditional medium-grain M-202 varieties. To the extent that a yield drag actually exists in the field, it is expected to dissipate quickly over time as a greater number of varieties with the HT trait become available.

Another effect of GM rice cultivation on California growers' returns is the potential development of price premia for conventional medium-grain rice varieties in world rice markets. Despite the predictions and evidence of producer financial benefits from transgenic crops, there is demand uncertainty in world grain markets, especially in the European Union and Japan. Although challenged by many of the major transgenic-crop producing countries, the EU has prohibited imports of new GM crops.
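Pulling together the cost-side pieces discussed above (herbicide savings, the technology fee of 30 to 60 percent of conventional seed cost, and the CRCA assessments of $0.33 per cwt on seed and $0.10 per cwt at the first handler), the sketch below illustrates how they might combine into a per-acre net effect. Only the assessment rates and the fee range come from the text; every other number (herbicide saving, seed cost, seeding rate, yield, pass-through share) is a hypothetical placeholder, not a figure from the UCCE budgets.

```python
# Illustrative per-acre budget sketch for HT rice adoption (hypothetical inputs).
# Only the CRCA assessment rates ($0.33/cwt seed, $0.10/cwt first handler) and
# the 30-60% technology-fee range come from the text; all else is assumed.

def net_effect_per_acre(
    herbicide_saving=40.0,        # assumed $/acre saved on materials + application
    conventional_seed_cost=55.0,  # assumed $/acre for conventional seed
    tech_fee_share=0.45,          # technology fee as a share of conventional seed cost
    seeding_rate_cwt=1.5,         # assumed cwt of seed planted per acre
    yield_cwt=80.0,               # assumed cwt of rice harvested per acre
    seed_assessment=0.33,         # CRCA assessment, $/cwt of seed
    handler_assessment=0.10,      # CRCA assessment, $/cwt at first handler
    assessment_passthrough=1.0,   # share of assessments borne by the grower
):
    tech_fee = tech_fee_share * conventional_seed_cost
    assessments = assessment_passthrough * (
        seed_assessment * seeding_rate_cwt + handler_assessment * yield_cwt
    )
    return herbicide_saving - tech_fee - assessments

print(f"Net change in returns: ${net_effect_per_acre():.2f} per acre")
```

With these placeholder values the herbicide saving slightly outweighs the technology fee plus assessments; changing any single assumption can flip the sign, which is the point of the sensitivity discussion that follows.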

Many other countries have varying GM-crop threshold labeling regulations, including China, Japan, the Republic of Korea, the Russian Federation, and Thailand. These regulations have the potential to ensure that there is some demand for non-GM grain. Due to segregation requirements and the higher unit cost of production of non-GM crops, this introduces the potential for a price premium for non-GM rice. As a result, nonadopters may indirectly benefit from the introduction of transgenic rice. There is good evidence that foreign regulations have affected export demand for transgenic crops, but there is mixed evidence of price premia for traditional non-GM grains. For example, after the United States started growing GM corn, EU corn imports from the United States dropped from 2.1 million metric tons in 1995 to just under 22,000 metric tons by 2002 [USDA, Foreign Agricultural Service 2003b]. Notably, however, the gap in U.S. corn sales to the EU was filled by Argentina, a transgenic producer that only grows varieties approved by the EU. On the other hand, EU imports of U.S. corn byproducts have dropped only slightly since 1995. The U.S. GM soybean export share in Europe has suffered as well, declining by more than 50 percent since 1997. Price premia exist for non-U.S. corn in Japan and the Republic of Korea, traditional soybeans in Japan, and non-transgenic corn at elevators in the U.S., typically ranging from 3 to 8 percent. However, there is little evidence for price differentials between the GM and non-GM product in the canola market. The global market for rice differs from the market for soybeans in that the majority of rice sold is for human consumption rather than for animal feed. As a result, market acceptance is likely to be a key determinant of the success of transgenic rice adoption in California. As can be seen in Table 1, the export market for California rice accounts for approximately one-third to one-half of total annual production, with Japan and Turkey as the major destinations. California Japonica rice imported by Japan is channeled through a quota system that was negotiated at the Uruguay Round in 1995. Most of California's rice exports are purchased by the Japanese government and used for food aid and for other industrial uses, including food and beverage processing. Only a small portion of this imported high-quality rice is released into the domestic Japanese market.

Turkey is reportedly attempting to severely restrict imports of transgenic crops through health regulations, despite importing corn and soybeans from the United States, while Japan requires labeling of 44 crop products that contain more than 5 percent transgenic material as one of the top three ingredients. Currently, several varieties of HT and viral-resistant rice have entered the Japanese regulatory system for testing but have not yet been approved for food or feed use. As an illustration of potential market resistance, Monsanto suffered setbacks in Japan in December 2002 when local prefecture authorities withdrew from a collaborative study to develop a transgenic rice cultivar after being presented with a petition from 580,000 Japanese citizens. In 2002, China imposed additional restrictions on transgenic crops, including safety tests and import labeling. However, this action may be nothing more than a trade barrier to reduce soybean imports from the United States. In addition, China is worried that introducing biotech food crops may jeopardize trade with the EU. Nevertheless, China is not taking a back seat in transgenic crop research, as it has a major ongoing research program on biotech rice and other crops and is predicted to be an early adopter. There is also some skepticism in the United States with regard to GM crops. Aventis was sued in 2000 over accidental contamination of taco shells by transgenic corn that was not approved for human consumption, resulting in an expensive food recall. The company subsequently decided to destroy its 2001 LibertyLink® rice crop rather than risk its potential export to hostile nations. Kellogg Company and Coors Brewing Company have publicly stated that they have no plans to use transgenic rice in their products due to fears of consumer rejection, and several consumer and environmental groups favor labeling of foods made from transgenic crops. For most food and beverage products manufactured by these companies, however, rice accounts for a small input cost share, resulting in little financial incentive to support GM crop technology. In May 2004, Monsanto announced that it was pulling out of GM wheat research in North America, partly due to consumer resistance. This has important implications for commercialization of GM rice because both grains are predominantly food crops. Many California rice farmers are concerned over the confusion regarding GM crops and do not want to jeopardize export market sales. This fear has been exacerbated by Measure D on the November 2004 ballot in a major rice-producing county, which would have prohibited farmers from growing GM crops. A 2001 survey of California growers performed by the University of California Cooperative Extension showed that, of the respondents, 24 percent planned to use transgenic varieties, 37 percent would not, and the remainder were undecided. Of those growers who answered “no,” 78 percent responded that market concerns were a reason. Nevertheless, if profitability at the farm level increases, it is likely that a subset of California producers will adopt the technology. Presumably, those with the most significant weed problems and hence the highest costs would be the first to adopt.

UCCE produces detailed cost and return studies for a wide variety of crops produced in California, including “Rice Only” and “Rice in Rotation.” The studies are specific to the Sacramento Valley region, where virtually all California rice is produced.
Figures on herbicide applications are based on actual use data as reported by DPR and UC Integrated Pest Management Guidelines. The most recent study completed for rice is by Williams et al. and is used as the basis for this study. As the potential adoption of transgenic rice is unlikely to significantly change farm overhead expenses on average, we focus on returns and operating costs per acre as reported in the sample-costs document. However, given weed-resistance evolution, changing regulations from DPR, and changes in the 2002 Farm Bill, the baseline cost scenario is adjusted here to account for changes in herbicide-use patterns, prices of herbicides and rice, and projected government payments. Using information from the 1999 pesticide use report compiled by DPR, the 2001 sample costs assume applications of bensulfuron and triclopyr, both broadleaf herbicides, on 25 and 30 percent of the acreage, respectively, and applications of the grass herbicides molinate and methyl parathion on 75 and 45 percent of the acreage, respectively. These figures are updated using data from Rice Pesticide Use and Surface Water Monitoring, a 2002 report by DPR, as interpreted by the authors.

The neural activity can be measured using invasive or noninvasive techniques

In the second experiment, a factorial design was employed with one factor being the dosage of JA and the second being the dosage of ACC. If JA and ACC did not interact to affect a variable, then only the main effects of JA and ACC were considered. If JA and ACC interacted to affect a variable, then the nature of the interaction was determined, and the data were summarized in two-way tables.

The Internet of Things is increasingly used by ordinary people, and the number of Internet of Things devices is projected to reach 50 billion by 2030. More and more people have started to adopt and use Internet of Things devices in everyday life. This thesis aims to explore and study the possibility of implementing and using electroencephalography (EEG) as a controller in the Internet of Things environment. This thesis also intends to study how human emotion can be integrated with the Internet of Things framework. This chapter introduces what a Brain Computer Interface is and discusses its components. In addition, this chapter explores some of the techniques used to measure brain activity. Finally, this chapter discusses the research questions of this thesis.

A Brain Computer Interface (BCI) is a communication method that depends on the neural activity generated by the brain, independent of peripheral nerves and muscles. BCI aims to provide a new output channel for the brain that is controlled and adapted by the user. Many Brain Computer Interface applications can be implemented, such as applications for disabled people to interact with computational devices, applications for gamers to play games with their thoughts, social applications to capture feelings and emotions, and applications for monitoring human brain activity.

Magnetic resonance imaging (MRI) is a medical imaging technique used to capture high-quality pictures of the anatomy and physiological processes of the body. It uses a powerful magnet, radio waves, and field gradients to generate images of the body. It is non-invasive, painless, and does not use radiation. It can provide very detailed, high-resolution images of body parts. In particular, it can capture very detailed, high-resolution images of the brain compared to other imaging techniques such as CT and X-ray because of its ability to differentiate between soft tissues of the body. However, because of the strong magnet, metallic items are not allowed during the scan, which limits its applications.

Functional MRI (fMRI) is a special MRI technology that measures brain activity by detecting changes associated with blood flow. This technique relies on the coupling between cerebral blood flow and neuronal activation: blood flow increases in a region when that region is in use. The idea of this technique lies in how the amounts of oxygenated and deoxygenated hemoglobin in the blood change during neural activity. The most common variant is Blood Oxygenation Level Dependent (BOLD) fMRI, which measures the ratio of Oxy-Hb to Deoxy-Hb in order to estimate the oxygen consumption of active neurons. It is non-invasive, has excellent spatial resolution compared to EEG, and records signals from all brain regions. However, this technique has the same limitations as MRI.

Magnetoencephalography (MEG) is a technique used to measure the magnetic field over the head generated by electric currents in the brain. The most commonly used MEG technology currently is SQUIDs, which allow the MEG of the head to be captured efficiently and rapidly. This technique is non-invasive and can be used as a complement to other techniques such as EEG and fMRI. Because MEG relies on magnetic fields, the signals suffer less distortion from the head than electric fields do. However, the same restrictions that apply to fMRI and MRI also apply to MEG due to its sensitivity to ferromagnetic materials.

An electroencephalogram (EEG) is a method for monitoring the electrical activity of the brain using small, flat metal discs placed on the scalp. EEG measures voltage fluctuations resulting from communication between brain cells via electrical impulses; in particular, when neurons are activated, ions such as Na+, K+, and Cl– are pushed through the neuron membrane. The EEG signal is weak and needs to be amplified in order to be displayed or stored on a computer. There are two approaches to recording the EEG signal: invasive and non-invasive. In the invasive approach, electrodes are implanted inside the human brain, which requires surgery. In the non-invasive approach, electrodes are placed on the surface of the skull, which has many benefits: it is risk-free, easy to set up, and allows repeated measurements. In addition, it is more suitable for developing and designing applications for the general population. The focus of this thesis will be on this non-invasive EEG technique.

The first question of this thesis is how to integrate a low-cost EEG headset, which has only one electrode located on the forehead, with an Internet of Things framework. In order to do this, we first have to design and build the EEG Server, which is able to translate EEG signals into commands. Then, we must build an algorithm that constructs different patterns from these commands, and these patterns will be used to control different Internet of Things devices; a minimal sketch of this pattern-to-command mapping is given below. The expected outcome is the ability to use different EEG patterns to control Internet of Things devices, for example turning a light on or off or playing music.

The second question of this thesis is how to build the EEG Edge, which is able to classify between eyes-closed and eyes-open states. In order to answer this question, we will use an extension of the Internet of Things framework that supports an intelligent edge, which is presented in [38].
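As a rough illustration of the first research question (turning single-electrode EEG readings into device commands), the sketch below maps eSense-style attention and meditation values (0 to 100) onto simple on/off actions. The thresholds, device names, and the send_command function are hypothetical placeholders for illustration, not part of the thesis code or the WuKong framework.

```python
# Hypothetical sketch: turn eSense-style attention/meditation values (0-100)
# into simple IoT commands. Thresholds and device names are placeholders.

def classify_pattern(attention, meditation, threshold=70):
    """Map one pair of eSense readings to a symbolic pattern."""
    if attention >= threshold and meditation < threshold:
        return "FOCUS"        # e.g., turn the light on
    if meditation >= threshold and attention < threshold:
        return "RELAX"        # e.g., start playing music
    return "IDLE"

def send_command(device, command):
    # Placeholder for the real IoT dispatch (e.g., a WuKong actuator call).
    print(f"{device} <- {command}")

PATTERN_TO_ACTION = {
    "FOCUS": ("light", "ON"),
    "RELAX": ("speaker", "PLAY"),
}

def handle_sample(attention, meditation):
    pattern = classify_pattern(attention, meditation)
    if pattern in PATTERN_TO_ACTION:
        send_command(*PATTERN_TO_ACTION[pattern])

# Example: a short stream of (attention, meditation) samples.
for att, med in [(85, 20), (30, 90), (50, 50)]:
    handle_sample(att, med)
```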

So, in order to build the EEG Edge, we need to extract EEG features for different subjects and build a model that is able to classify between open- and closed-eye states. There are different types of features that could be extracted from the raw EEG signal; however, for this application we only need to extract power spectral density features. Lastly, we need to define the feature-extraction extension, which will contain the EEG features, and the execution extension, which will contain the classifier model. The expected outcome after integration will be the ability to classify eye states on the edge (a minimal feature-extraction and classification sketch is given after the hardware description below).

The third question of this thesis is how to build a model that is able to detect and classify positive and negative emotions. In order to classify the emotions, different factors must be considered, including participants, stimuli, the temporal window, and EEG features. Different EEG features will be extracted from the raw EEG signal, including time-domain features, frequency-domain features, and nonlinear features. Different video clips will be used as stimuli in order to trigger different emotions. The expected outcome will be the ability to classify two types of emotions, positive and negative.

EEG is the measurement of electrical activity in the brain. The first EEG was recorded by Hans Berger in 1924 using a galvanometer. Depending on internal brain behavior or external stimuli, the EEG varies in wave amplitude and frequency. The system contains an EEG headset; this thesis uses the "NeuroSky Mindwave Mobile", which transfers the EEG signal over a Bluetooth connection. The EEG receiver, written in Python, receives and records the EEG signal coming from the EEG headset. I used the WuKong framework to deploy a WuClass for the EEG and a WuClass for a controller on an Intel Edison and a Raspberry Pi. Figure shows the system architecture that will be used in this thesis.

There are many commercial EEG headsets, ranging from the simplest to the more sophisticated. Table compares different EEG headsets. These headsets are able to capture different mental states and different facial expressions. Both Emotiv headsets, the EPOC+ and the INSIGHT, capture excitement, frustration, engagement, meditation, and affinity. Emotiv headsets also capture the EEG bands Delta, Theta, Alpha, Beta, and Gamma, as well as some facial expressions such as blinking, smiling, clenching teeth, laughing, and smirking. The NeuroSky Mindwave Mobile, on the other hand, is limited to capturing only two mental states, meditation and relaxation. Finally, the Muse headset can capture positive and negative emotions, captures the EEG bands Delta, Theta, Alpha, Beta, and Gamma, and also captures some facial expressions such as jaw clenching and eye blinking. Emotiv EPOC+ sensors use saline-soaked felt pad technology, and Emotiv INSIGHT sensors use long-life semi-dry polymer technology. NeuroSky Mindwave Mobile and Muse sensors use long-life dry technology.

The NeuroSky Mindwave Mobile consists of eight parts: an ear clip, ear arm, battery area, power switch, adjustable headband, sensor tip, sensor arm, and ThinkGear chip. The operation of this device is based on two sensors that detect and filter EEG signals. The sensor tip on the forehead detects the electrical signal from the frontal lobe of the brain. The second sensor is an ear clip that is used as a ground to filter out electrical noise. Figure shows the NeuroSky Mindwave Mobile, and Figure shows its electrode position.
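As promised above, here is a minimal sketch of the eye-state pipeline: power spectral density (PSD) features are computed with Welch's method and fed to a simple classifier. The sampling rate, epoch length, band definitions, and synthetic training data are all illustrative assumptions, not the thesis configuration.

```python
# Illustrative sketch: PSD features from a single-channel EEG signal and a
# simple open/closed-eye classifier. Sampling rate, epoch length, and the data
# themselves are assumptions, not the thesis configuration.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 512          # assumed sampling rate (Hz)
WINDOW_SEC = 2    # assumed epoch length (s)

def psd_features(epoch, fs=FS):
    """Return average band power in the classic EEG bands for one epoch."""
    freqs, pxx = welch(epoch, fs=fs, nperseg=fs)
    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}
    return [pxx[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands.values()]

# Synthetic example: alpha power tends to rise with eyes closed, so fake that.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):  # 0 = eyes open, 1 = eyes closed
    for _ in range(50):
        t = np.arange(FS * WINDOW_SEC) / FS
        alpha = (2.0 if label else 0.5) * np.sin(2 * np.pi * 10 * t)
        epoch = alpha + rng.normal(scale=1.0, size=t.size)
        X.append(psd_features(epoch))
        y.append(label)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In the edge setting described above, the psd_features step would live in the feature-extraction extension and the trained classifier in the execution extension.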

This thesis uses the NeuroSky Mindwave Mobile for several reasons. First, this project aims to offer a low-cost system that can be used by everyone. Second, the NeuroSky Mindwave Mobile is highly resistant to noise, and its signal is digitized before it is transmitted through Bluetooth. Third, the NeuroSky Mindwave Mobile offers an unencrypted EEG signal, whereas the Emotiv and Muse signals are encrypted.

eSense is NeuroSky's algorithm for characterizing mental states. The algorithm is applied to the signal that remains after noise and muscle-movement artifacts are removed from the raw brain-wave signal. Two eSense signals are produced as a result: attention and meditation. These signals reflect the concentration and relaxation of the subject. Their values range from 0 to 100, where zero indicates low concentration or low relaxation and 100 indicates high concentration or high relaxation.

One major limitation is the accuracy of the EEG signal captured by the NeuroSky Mindwave Mobile, because the device has only one electrode, located at FP1. The problem with FP1 is its susceptibility to noise from eye movement and muscle movement. Another possible issue is comfort: one subject reported that the headset is uncomfortable to wear, most likely because of the rigid headband design as well as the need for the ear clip as the reference sensor.

In order to detect eye blinking, the OpenCV library is used, an open-source library of programming functions aimed at real-time computer vision. This library is used to detect eye blinking in order to trigger the system, hold a certain state, and change between different states. The algorithm used to detect blinking is the Haar cascade classifier, a machine learning approach in which a cascade function is trained with many positive and negative images and then used to detect objects in other images. In order to obtain the EEG signal from the NeuroSky Mindwave Mobile, I used an open-source API written in Python suggested by NeuroSky. Two major libraries are used: bluetooth headset.py and parser.py. The bluetooth headset library contains methods to connect the Mindwave Mobile to the computer via Bluetooth, either by specifying a MAC address or by automatically searching for a device named "Mindwave Mobile". If it does not find a device automatically or with the specified MAC address, it raises an error. The other library, parser.py, is specific to the NeuroSky Mindwave Mobile device. There are two major classes in this library: ThinkGearParser and TimeSeriesRecorder. One must first create a new TimeSeriesRecorder object and then pass it to a ThinkGearParser object, which packages the EEG information and makes it possible to display the data on the computer. The other important library, necessary for WuKong integration, is the socket library, which creates a new socket that points to the IP address of the peripheral device. In order to control a device, a WuClass has to be designed. For this project, the EEG Server is a WuClass that is able to receive the EEG signal and transform it into different actions. Triggering an action takes around 10 seconds, as shown in Figure. Figure shows the flow of data and control of the system as a flow-based program.
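The blink-detection step can be sketched with standard OpenCV calls. The snippet below uses the Haar cascades bundled with OpenCV for face and eye detection and treats a frame in which previously visible eyes disappear as a blink; the camera index and this simple heuristic are assumptions for illustration, not the thesis implementation.

```python
# Illustrative sketch of Haar-cascade blink detection with OpenCV.
# The "eyes disappear for a frame" heuristic is an assumption for illustration.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)   # assumed default camera
eyes_were_visible = False
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        eyes_visible = False
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            roi = gray[y:y + h, x:x + w]
            if len(eye_cascade.detectMultiScale(roi)) > 0:
                eyes_visible = True
        if eyes_were_visible and not eyes_visible:
            print("blink detected -> trigger/hold/switch state")
        eyes_were_visible = eyes_visible
finally:
    cap.release()
```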

Trees on dwarfing rootstocks are smaller than those on standard or seedling rootstocks

It is noteworthy that at harvest only two transcription factors were differentially expressed, both showing higher expression in T fruits and, in the case of the ortholog of PAP2/IAA27, also at 1 week of cold storage. SlIAA27 silencing results in greater auxin sensitivity in tomato. Moreover, a gain-of-function mutation in IAA16 confers poorer responses to auxins and ABA in Arabidopsis. Thus, it is likely that high levels of these genes at harvest contribute to delaying the ripening program or protecting LS fruits during cold storage, at least at the beginning of cold storage. The analysis of the expression profiles during cold storage of the genes differentially expressed in M fruits revealed important and unexpected expression characteristics. In LS fruits, these genes behaved like ripening genes and were able to continue the ripening program in the cold, while the expression of other ripening genes was halted as normally occurs, which is not the case in highly sensitive fruits. The ability of cold to stop fruit ripening has been reported previously, even though no details of how this happens at the molecular level have yet been provided. Although we have no hypothesis about why these genes continued the ripening program in the cold, we believe that this may be because these genes are part of the adaptation mechanism, or it may simply reflect that LS fruits perform better in the cold than S fruits. In apples, the ability to continue ripening during cold storage seems to be an adaptive mechanism to shorten ripening time in colder autumns. On the other hand, this unexpected behavior of some of the genes differentially expressed at harvest indicates that they not only may form part of a mechanism for the interaction between endogenous and exogenous signals but could also contribute to mealiness in response to cold stress.

In light of this, it is interesting to remember that environmental, ripening-stage, and cultural preharvest practices have a strong effect on CI sensitivity during postharvest storage, which, together with the genetic background, may be responsible for the differences noted at the M stage that condition the cold response.

Fruit trees differ from landscape trees in that they are best kept relatively small to facilitate routine pruning, fruit thinning, managing pests, and harvesting fruit from the ground or a ladder. Fruit trees that are allowed to grow above a manageable height produce excessive fruit, leading to branch breakage, smaller fruit, and, in some cases, pest problems. Most fruit trees are trained to the open center system and are topped annually to reduce limb breakage. However, some fruit trees lend themselves to central leader training, so these considerations are less important. The major problem with those that do grow very tall, however, is that the fruit are borne higher in the tree and the lower branches become shaded. This results in the decline of these branches and ultimately renders them fruitless. Unlike fruits, nuts are knocked, shaken, or allowed to fall; they are not usually picked by hand, so tree height is less important. Some size control is still necessary to prevent branch failures and maximize nut production, because large trees are more difficult to knock. Pruning of nut trees generally consists of thinning or cutting back selected branches to suitable lateral branches. Walnut and pistachio trees should be trained to a modified central leader to maximize fruit production and maintain a branch structure that can support the nut crop.

The best strategy for keeping trees relatively small is to use a dwarfing rootstock when available. However, semidwarf rootstocks differ in their degree of dwarfing, and many semidwarf trees sold in retail nurseries are only slightly dwarfing. For example, apple rootstocks can range from producing trees about 80 percent of the size of a standard tree, to about 60 percent, down to about 30 percent. Therefore, some semidwarfs are still, practically speaking, full-size trees. Other fruit species do not have this range of dwarfing rootstocks available, and most are only slightly dwarfing. For the stone fruits, such as peaches and nectarines, the dwarfing rootstock most commonly available is Citation, which produces a tree that is somewhat smaller.

Citrus can be dwarfed to approximately 50 percent by growing trees on the ‘Flying Dragon’ trifoliate orange rootstock. However, availability of trees grafted to ‘Flying Dragon’ is limited due to the very slow growth of grafted trees. Genetic dwarf trees are very easy to manage and are aesthetically pleasing in the landscape. They naturally produce short internodes and are usually planted on standard rootstocks. A limited number of genetic dwarf varieties are available for almond, apple, apricot, nectarine, and peach.

Deciduous fruit and nut trees are ideally planted bare root, but containerized trees can also be used. All bare-root trees intended solely for fruit or almond production should be headed 18 to 24 inches above the ground at planting to force low branching; walnuts and pecans should be headed higher. If this were not done, the first laterals would typically form around 5 to 6 feet above the ground, growth would be weak, and much of the fruit would be out of reach from the ground. It is important to develop a new leader in headed trees if the central leader method is to be maintained. Select one of the shoots that grow near the heading cut, and tie it to a stake in an upright position if it is not growing upright naturally. Additional pruning may be needed to eliminate branch crowding or prevent codominant trunks from forming. Higher branching may be desirable for fruit trees in some urban settings to allow for maintenance of vegetation under and around the tree. In areas where deer are a problem, lower branching may not be practical without proper protection. Containerized fruit trees are often planted in spring or summer, so they cannot be headed without removing all the foliage. Either leave the tree as it was headed in the nursery, or make the lower heading cut in the next dormant season. In hot regions where afternoon sun hits the trunk, apply a white interior latex paint diluted 50-50 with water to the trunk to prevent sunburn injury.

Each fruit and nut species has a preferred training method based on the species' growth habits and fruiting characteristics. Ideally, most of the fruit should be produced low in the tree to facilitate fruit thinning, pest management, and harvest. Because fruit is produced on spurs or 1-year-old branches that require sunlight for flower development, direct sun must penetrate into the lower portions of the canopy for fruit production low in the tree. Nearly all fruits and nuts are borne mainly on spurs, but peach and nectarine fruit are borne only on 1-year-old shoots that grew the previous summer. Most stone fruits and almonds are best trained to the “open center” or “open vase” method, where the center of the tree is routinely kept free of vigorous shoots. In this manner, lower fruiting branches receive sufficient light through the tree's center. Apples and pears can also be trained to the open center system but are better adapted to central leader training, where lateral branches are trained outward from a vertical leader, allowing sunlight to penetrate from the sides.

Persimmons are also well adapted to central leader training. For apples and pears, it may be prudent to develop two or three leaders in case fire blight kills one of them; lateral branches are directed to the outside of the tree. Apples and pears can also be espalier trained. This method involves pruning the tree to form a narrow, flat plane on a trellis or against a wall or fence. Permanent, horizontal branches that resemble cordons on grapevines are selected to produce the fruiting spurs. Walnuts, pistachios, and persimmons can also be initially trained with a central leader, but the trees are then allowed to develop a natural rounded crown; this method is referred to as “modified central leader” training. Fig trees can be trained using this method or using open center training, and they can be kept fairly short, allowed to grow tall, or even espalier trained. Modified central leader training can be used for pomegranates, but their rangy growth and constant root suckering make them better adapted to a system that allows them to grow into a large multitrunked bush. Pruning them typically involves heading back and thinning vigorous upright branches and removing old trunks or scaffold branches to rejuvenate trees. Citrus trees can be allowed to grow with little training except to eliminate scaffold branches with narrow crotch angles. Manage water sprouts by heading, shortening to a lower lateral, or, in some cases, completely removing them. Save water sprouts that bend over, as they ultimately contribute to the typical mounding citrus canopy. Remove all rootstock suckers; they often grow up the center and are difficult to see. Painting the trunk white not only helps prevent sunburn but also makes rootstock suckers easier to spot. Over time, the shaded inner fruiting branches of citrus trees die, and fruit production moves to the top and sides. This characteristic is considered acceptable for citrus. Citrus trees can also be hedged, and they are very adaptable to espalier training. Nearly all species can be trained as fruit bushes, using a method in which trees are trained in the first and second year by heading shoots when they reach about 2 feet in length. The resulting new shoots are headed again, and this is followed each time by some thinning of shoots as well. Once the desired tree height is achieved, pruning consists of removing shoots above the desired height and thinning remaining shoots and branches about twice a year. Nearly all pruning on fruit bushes should be done in the growing season to reduce their vigor, but touch-up pruning is useful in winter, when branch structure is more visible. Apricots and cherries should be pruned only in late summer, when dry weather is predicted for an extended period. These species are susceptible to branch canker diseases, caused mainly by Eutypa and Botryosphaeria fungi, which infect branch injuries made before or during wet weather or periods of very high humidity. However, most other fruit and nut trees can be pruned any time enough leaves have fallen to make the tree structure visible.

After planting and heading bare-root stone fruit trees, two options are possible for the resulting shoots. Either they are allowed to grow through the summer, or training can begin during the first growing season by selecting three or four well-placed shoots when they are 1 to 3 feet long and heading back all other shoots to 4 to 6 inches.
By winter, the primary scaffold branches are selected and headed, and all other upright branches are removed. Continue to develop the tree to a vase shape over the next 2 years. Ideally, each primary scaffold should branch into two secondary branches, which, in turn, branch into two tertiary branches. Prune out vigorous upright shoots in the tree's center in winter, and maintain the open center by removing vigorous upright shoots once during summer. In winter, thin fruiting branches to reduce fruit load and minimize the need for fruit thinning. Head the trees back to about the same height every year, preferably to a height that can be reached using only a short ladder. With almond trees, scaffold branches are selected as with stone fruit trees; after that, however, the opening in the center of the tree can be somewhat narrower than in stone fruit trees. Annual pruning involves thinning branches to avoid overcrowding.

Apples, pears, persimmons, and pecan trees are best trained to develop a central leader similar to that of many shade and ornamental trees. Lateral branches grow outward from the leader, either in tiers of approximately four branches each or spaced fairly uniformly up and around the trunk. Rather than simply allowing the trunk to continue growing naturally after planting, the trunk is headed at about 18 to 24 inches above the ground, and the most vigorous and upright shoot that develops is selected to become the new leader. This practice forces the first tier of four lateral branches to form below the heading cut. When the new leader has grown about 2.5 feet, it is headed back about 6 inches.

Fleshy fruit are a relatively recent evolutionary innovation

This family has been delimited into four “Classes”, and 4 of the 17 members of the Class I AtHBs have been shown to be involved in ABA responses across diverse tissues. In addition, the expression of three of them, AtHB6, 7, and 12, has been shown to be up-regulated by ABA. An examination of the grape genome identified 10 orthologs that cluster with the Class I HBs. The PP2C protein phosphatases represent another large gene family, made up of 80 genes in Arabidopsis. Within this family is a group containing many genes that have been characterized as functioning in the ABA-signaling pathway, most notably those defined by the ABA-insensitive mutants abi1 and abi2. In addition, AtPP2C-A, AtHAB1, AtHAB2, and AtAHG1, also members of Group A, function in ABA signaling across diverse tissues. In Arabidopsis, all the members of this group are induced by ABA treatment. In grape, nine VvPP2Cs clustered in Group A. The WRKY transcription factors are a large gene family, consisting of approximately 70 members making up three groups in Arabidopsis. The barley HvSUSIBA2, AtWRKY2, and AtWRKY34 all fall within the same group, which consists of 14 members in Arabidopsis. HvSUSIBA2 modulates the expression of a barley isoamylase gene during seed development via binding of SURE elements. In addition, Sun et al. demonstrated that expression of HvSUSIBA2 is induced by exogenous sugar and that its native expression profile during seed development correlates strongly with endogenous sucrose levels.

Hammargren et al. found that the sugar responsiveness of a nucleoside diphosphate kinase is altered in Atwrky2 and Atwrky34 mutant backgrounds. Grape contained 13 putative orthologs that fall within this group.

Expression profiling was carried out in berry skins of field-grown Cabernet Sauvignon in order to identify those orthologs expressed during ripening. In addition, expression profiles under both control- and deficit-irrigated conditions were compared in order to identify orthologs whose expression pattern reflected the advancement of ripening under ED. Water deficits were applied continuously from fruit set until the onset of ripening, resulting in an average difference in midday leaf water potential of 0.36 MPa before the onset of ripening and no difference during ripening. Of the 67 orthologous genes identified, 38 were expressed in grape berries. A summary of the expression profiles of all the genes examined in this study can be found in Figs. 4 and 5, while more detailed expression data are contained in Suppl. File 1. The majority of these genes were differentially regulated during berry development, with 26 exhibiting statistically significant changes with time in control and/or ED. There were few statistically significant differences in the magnitude of expression between control and ED. Six genes exhibited statistically significant differences between control and ED. In four of these instances, VvHB8, VvSnRK5, VvPP2C-3, and VvPP2C-7 all exhibited elevated levels of expression in ED at or just prior to the onset of ripening. Eight of the VvWRKYs selected for analysis were expressed in ripening grape. VvWRKY3, 5, and 6 were all differentially expressed during ripening and exhibited similar patterns of expression; they were up-regulated from 4- to 16-fold at the onset of ripening. There were no significant differences in the expression of VvWRKY1, 2, 16, 18, and 19 across development or during water deficit. Of the 10 Class I VvHB orthologs, only four are expressed in fruit during ripening. Both VvHB4 and 8 were strongly up-regulated at the onset of ripening, exhibiting increases of more than 16-fold. VvHB8 is up-regulated much earlier under water deficit, and high levels persist until late in ripening. VvHB4 is down-regulated early in development under ED. VvHB2 expression in controls generally decreased during development with a small up-regulation at the onset of ripening, although these changes were not statistically significant.

Under ED, however, this pattern of expression is more pronounced, with a sharp eightfold decrease in expression at 81 DAA. VvHB3 was constitutively expressed during ripening with no significant changes over time or under ED. Six of the VvPP2Cs were expressed in grape berries. VvPP2C-3, 6, 7, and 9 were all differentially expressed during ripening, while VvPP2C-1 and 5 expression did not change significantly. VvPP2C-3, 6, 7, and 9 were all up-regulated strongly at the onset of ripening, increasing as much as 16-fold. VvPP2C-3 and VvPP2C-7 expression were clearly induced earlier and to higher levels in ED. Both exhibited statistically significant two- to fourfold greater levels of expression in green berries at the onset of ripening.

In order to more directly test the effects of sugar and ABA on the onset of ripening, immature Cabernet Sauvignon berries were harvested from the field at 61 DAA and cultured in the presence of various combinations of sucrose and ABA until 84 DAA, a period of 23 days. The onset of ripening in the clusters from which these berries were collected in the field occurred at approximately 73 DAA; therefore, the cultured berries were collected approximately 12 days prior to the onset of ripening. Ripening phenomena were induced in berry culture when treated with sucrose and ABA, as evidenced by changes in color, softening, and gene expression. Berries treated with 10% sucrose and various ABA concentrations changed color, while those treated with 2 or 10% sucrose alone remained green. The 200 µM ABA and 2% sucrose + 200 µM ABA treatments were included in our analyses but yielded no results because the berries reproducibly exploded. On average, cultured berries gained weight over the culturing period. Sucrose treatments of 2 and 10% showed the greatest weight gains, corresponding to gains of 21 and 8%, respectively. Berries cultured in the presence of 10% sucrose with the addition of various ABA concentrations showed average weight gains of approximately 4%. Previous studies have found that a precipitous drop in grape berry elasticity occurs just prior to the onset of ripening in grape. In our current culture experiments, berry elasticity remained equal to that at T0 in the 2 and 10% sucrose treatments, while sharply decreasing with ABA treatment. Finally, the grape Myb transcription factor VvMybA1 was utilized as a molecular marker for the onset of ripening. VvMybA1 is responsible for activating anthocyanin biosynthesis and has a distinct pattern of expression, being absent prior to the onset of ripening, at which time it is strongly up-regulated. In berry skins, VvMybA1 expression was completely absent from the 2 and 10% sucrose treatments and was strongly up-regulated in the 10% sucrose + ABA treatments, as expected. We hypothesized that orthologs of gene families regulated by sugar and ABA, whose expression was strongly up-regulated at the onset of ripening and advanced under ED, would be regulated similarly by sugar and/or ABA in cultured berries. To test this, changes in the expression of VvHB4, VvHB8, VvPP2C-3, and VvPP2C-6 were investigated in skins of cultured berries. In the field, all of these genes were strongly up-regulated at the onset of ripening and advanced under ED, and in berry culture their expression was strongly induced in the presence of 10% sucrose + ABA when compared with treatments of 2 and 10% sucrose alone. Among the genes analyzed, the magnitude of induction in the field versus in berry culture was variable.
For example, when the data from Fig. 7 were expressed as fold change, both VvHB4 and VvPP2C-3 were induced 10-fold from 57 to 74 DAA in the field, compared to 6- and 40-fold in culture, respectively.

The transcriptional data in this study demonstrate that numerous sugar- and ABA-signaling orthologs are expressed during ripening in grape and identify novel candidates in the control of non-climacteric fruit ripening. Several genes exhibited patterns of expression correlating with sugar and ABA accumulation at the onset of ripening in field-grown fruit. Changes in color, softening, and gene expression analogous to the onset of ripening in the field were induced in berry culture when treated with sucrose and ABA, demonstrating their role in controlling the onset of ripening. This study shows that many orthologous sugar- and ABA-signaling components are regulated in fleshy fruit similarly to their regulation originally characterized in model systems across diverse processes. These genes are easily delimited by nesting the currently available grape sequences comprising a gene family within their corresponding family in Arabidopsis. However, the current grape genome assembly, and its annotated proteome, certainly does not identify all the genes present in the grape genome, so our analyses most likely failed to identify some orthologs. In the current study, we chose to use QPCR in our expression analyses instead of the current grape microarray for several reasons. First, the majority of the genes analyzed here are not present on the current Affymetrix Vitis vinifera gene chip, since the chip was derived from ESTs and the present study utilizes the complete genome. Second, microarrays suffer from several limitations, one of which is that they are particularly insensitive in quantifying low-abundance transcripts.
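As a numerical aside on how QPCR measurements are commonly converted into fold changes like those quoted above, here is a minimal sketch of the widely used 2^(-ΔΔCt) calculation. The cycle-threshold values are invented for illustration, and the text does not state which normalization scheme was actually used in this study.

```python
# Hedged sketch of the common 2^-ΔΔCt conversion from QPCR cycle thresholds to
# relative fold change (whether this exact method was used here is not stated).
def ddct_fold_change(ct_gene_t, ct_ref_t, ct_gene_0, ct_ref_0):
    """Fold change of a target gene at time t vs. time 0, normalized to a
    reference gene measured in the same samples."""
    delta_ct_t = ct_gene_t - ct_ref_t      # normalize at time t
    delta_ct_0 = ct_gene_0 - ct_ref_0      # normalize at time 0
    return 2 ** -(delta_ct_t - delta_ct_0)

# Hypothetical cycle-threshold values: the target amplifies ~3.3 cycles earlier
# at the onset of ripening, i.e. roughly a 10-fold induction.
print(ddct_fold_change(ct_gene_t=22.0, ct_ref_t=18.0,
                       ct_gene_0=25.3, ct_ref_0=18.0))
```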

Transcription factors are largely low-abundance transcripts, and a study in Arabidopsis comparing QPCR and Affymetrix microarray approaches found that the microarray could detect fewer than 55% of the 1,400 transcription factors tested, compared with more than 85% via QPCR. Furthermore, cross-hybridization is common on microarrays, which is especially problematic when considering conserved gene families like those examined here. However, those genes represented on the chip were identified, and their expression profiles were compared with those determined via microarray analyses in two recent studies. Notably, Koyama et al. demonstrated via microarray that a VvHB transcription factor, identical to VvHB8 in this study, is also up-regulated at the onset of ripening and in response to exogenous sugar and ABA. For several other genes, expression profiles in the current study are nearly identical to those found by Deluc et al. Many grape orthologs of genes shown previously to be modulated by sugar and/or ABA in model systems exhibited expression patterns during ripening, and in response to water deficit, consistent with their modulation by sugar and/or ABA in grape. More specifically, these genes are induced at the onset of ripening and induced earlier, and to higher levels, under water deficit. This characteristic pattern of expression is shared with many flavonoid pathway genes, and these correlations in expression throughout ripening suggest common regulatory mechanisms. Experiments in berry culture demonstrated that several sugar- and ABA-signaling orthologs, and VvMybA1, a transcriptional activator of anthocyanin biosynthesis, are up-regulated by exogenous ABA in the presence of high sucrose. These data suggest that sugar and ABA play a predominant role in regulating the expression of a suite of genes at the onset of ripening, including those responsible for anthocyanin biosynthesis and components of their own signaling pathways. These results have interesting evolutionary implications, demonstrating that some orthologs are consistently regulated by sugar and ABA across diverse developmental processes. Land plants, in general, have undergone abundant gene duplication through their evolutionary history, although there is debate on the exact nature and timing of these events across angiosperms and in grape specifically. Gene duplication is considered a cornerstone in providing the raw material for evolution. A duplicate gene, no longer essential, can undergo changes in its structure and/or regulation, allowing it to take up a novel role. The results of this study show that some of the Group A PP2Cs and Class I HBs have maintained their ABA responsiveness during fruit ripening in grape. At least with regard to ABA responsiveness, the nature of regulation has been conserved but co-opted into a completely different developmental context. This may allow the discovery of novel cis-regulatory elements through promoter sequence comparisons across species. In grape, advances in elucidating molecular mechanisms suffer from a lack of transgenic and related technologies on which most reverse genetic studies are based. This study demonstrates that model systems can provide fundamental knowledge and insight into gene function in other agronomically important plant species, even in extremely divergent developmental processes.
Equally, this suggests that Arabidopsis, with its wealth of tools for facilitating reverse genetic studies, may provide a valuable system in which to characterize genes of interest from grape or other crop species. There are already several examples of the successful characterization of grape genes in Arabidopsis and tobacco. This could prove especially useful considering the limitations of functional genetic analyses in perennial fruit crops, where long propagation times are prohibitive. Future studies should include attempts to complement ABA- and sugar-signaling mutants in Arabidopsis and other model species with their orthologs implicated in ripening.

Connect the coarse positioner control cable to the cryostat

At high tuning fork amplitudes, interactions between the nanoSQUID tip and the surface can produce local variations in oscillation amplitude and appear as parasitic signals at the tuning fork frequency. Of course, the nanoSQUID is highly sensitive to local temperature, so systems with thermal gradients will generally have backgrounds associated with that. But by far the most important parasitic contrast mechanism in the nanoSQUID campaigns discussed here is electric field contrast through parasitic Coulomb blockade.

Below I have included a set of instructions for execution of a nanoSQUID magnetic imaging campaign using the instruments in Andrea Young's lab. It may be useful if you are operating or building a nanoSQUID microscope in a different lab, but I would like to emphasize that the instructions below are merely sufficient for getting the nanoSQUID sensor to a sample; they are almost certainly not optimized for expediency. I'm sure that as the technology matures many steps will be rendered superfluous. Of course, if you're using the nanoSQUID to study a bulk material and not a microscopic heterostructure, navigation is not necessary and you will be able to skip most of these instructions. A nanoSQUID imaging campaign can begin when the microscope is cooled down and all of the necessary systems are operational. One must check that: The tuning fork has a good resonance with a Q of roughly 1000 or more, the phase-locked loop inside the Zurich lock-in amplifier locks, and you can find an AC tuning fork excitation at which clicking "Set PLL Threshold" produces a standard deviation below 0.25 Hz.

If you are in Andrea Young's lab and not some other institution running a nanoSQUID microscope, remember that this custom tuning fork amplifier needs 5 V, not 15 V, unlike most of our custom electronics. The tip has been characterized and is a SQUID. Sensitivity is good enough that the magnetic field noise is 25 nT/√Hz. The SQUID interference pattern looks reasonably healthy and corresponds to a diameter that is close to the SEM diameter. It is important to remember that it is possible for the Josephson junctions producing nanoSQUIDs to end up higher on the sensor. These might produce healthy SQUIDs but will not be useful for scanning, and discovery of this failure mode comes dangerously late in the campaign, so SQUIDs high up on the pipette are a very destructive failure mode. This failure mode is uncommon but worth remembering. If you have access to a vector magnet, such SQUIDs also usually have large cross sections to in-plane magnetic flux, and this can be useful for identifying them and filtering them out. The capacitances of the Attocube fine positioners are = µF. These scanners have a range of µm. They creep significantly more than the piezoelectric scanners used in most commercial STM systems, but their large range is quite useful. Damage to the scanners or the associated wiring will appear as deviations from these capacitances. Small variations around these values are fine. After you are done testing these capacitances, reconnect them. Make sure you're testing the scanner/cryostat side of the wiring, not the outputs of the box; this is a common silly mistake that can lead to unwarranted panic. If you're working in Andrea Young's lab, make sure the Z piezo is ungrounded. If for whatever reason current can flow through the circuit while you're probing the capacitance, you will see the capacitance rise and then saturate above the range of the multimeter.

Because the nanoSQUID is a sharp piece of metal that will be in close contact with other pieces of metal, it sometimes makes sense to ground the nanoSQUID circuit to the top gate of a device, or to metallic contacts on a crystal, to prevent electrostatic discharge while scanning or upon touchdown. If you have decided to set up such a circuit, make sure that the sample, the gates, and the nanoSQUID circuit are all simultaneously grounded. If you forget to float one of these circuits and bias the SQUID or gate the device, you can accidentally pump destructive amounts of current through the nanoSQUID or device. However, you must make sure that the z piezoelectric scanner is not grounded. You can now begin your approach to the surface. You should ground the nanoSQUID and the device. If you are in Andrea's lab, verify that the three high-current DB-9 cables going from the coarse positioner controller box to the box-to-cable adapter are plugged in in the correct positions. The cables for each channel all have the same connectors, so it is possible to mix up the x, y, and z axes of the coarse positioners. This is a very destructive mistake, because you will not be advancing toward the surface and will likely crash the nanoSQUID into a wirebond, or some other feature away from the device. The remaining instructions assume you are using the nanoSQUID control software developed in Andrea's lab, primarily by Marec Serlin and Trevor Arp. The software is a complete and self-contained scanning probe microscopy control system and user interface based on Python 3 and PyQt. Open the coarse positioner control module. Click the small capacitor symbol. You should hear a little click and see 200 nF next to the symbol. The system has sent a pulse of AC voltage to the coarse positioners; the click comes from the piezoelectric crystal moving in response. Check that you see a number around 1000 µm in the resistive encoder window for axis 3. Note whether you see a number around 2000-3000 µm in the windows for axis 1 and axis 2. If you are in Andrea's lab, it is possible that you will not for axis 2.

Axis 2 has had problems with its resistive encoder calibration curve at low temperature. The issue seems to be an inaccurate LUT file in the firmware; new firmware can be uploaded using Attocube’s Daisy software. It is not a significant issue if you cannot use the axis 1 and 2 resistive encoders; however, it is critical that there be an accurate number for axis 3. Set the output voltage frequency to be somewhere in the range 5-25 Hz . Set the output voltage to 50 V to start . Make sure that the check box next to Output is checked. Move 10 µm toward the sample . If Axis 3 doesn’t move, don’t panic! It’s usually the case that the coarse positioners are sticky after cooling down the probe before they’ve been used. Try moving backwards and forwards, then increase the voltage to 55 V, then 60 V. Once they’re moving, decrease the voltage back to 50 V. Note the PLL behavior- if there’s a software issue and pulses aren’t being sent, you won’t see activity in the PLL associated with the coarse positioners. Under normal circumstances you should see considerable crosstalk between the PLL and the coarse positioners while the coarse positioners are firing. There are significant transients in the resistive encoder readings after firing the coarse positioners; this is likely a result of heating, but could also have a contribution from mechanical settling and creep. We have observed that the decay times of transients are significantly longer in the 300 mK system than in the 1.5 K or 4 K systems, likely indicating that these transients are largely limited by heat dissipation, pot with drainage holes at least at very low temperatures. Go into the General Approach Settings of the Approach Control window. There’s a setting in there for coarse positioner step size- set that to 4 µm or so. This is the amount the coarse positioners will attempt to move between fine scanner extensions. They always overshoot this number . Overshooting is of course dangerous because it can produce crashes if it is too egregious. In the Approach Control window, click Set PLL Threshold, verify that standard deviation of frequency is 0.25 Hz. Enter 5 µm into the height window. Verify that Z is ungrounded . Click Constant Height. Check that the PID is producing an approach speed of 100 nm/s. It is important that you sit and watch the first few rounds of coarse positioner approach. This is boring, but it is important the first few coarse positioning steps often cause the tuning fork to settle and change, which can cause the approach to accelerate or fail. Also by observing this part of the process you can often find simple, obvious issues that you’ve overlooked while setting up the approach. Getting to the surface will take several hours. Typically you’ll want to leave during this time. When you return, the tip should be at constant height. I’d recommend clicking constant height again and approaching to contact again to verify that you’re at the surface. You should be between 10 µm and 20 µm from the surface. It may be necessary to withdraw, approach with the coarse positioners a few µm, and then approach again to ensure you have enough scanner range in the z direction. Click withdraw until you’re fully withdrawn. Click Frustrate Feedback to enable scanning with tip withdrawn. I will present instructions as if you are attempting to navigate to a device through which you can flow current. 
Flowing current will generate temperature gradients from dissipation and magnetic fields through the Biot-Savart law, both of which the nanoSQUID sensor can detect. I strongly recommend that you navigate with thermal gradients if at all possible. The magnetic field is a signed quantity, so you need a pretty strong model and a clear picture of your starting location to successfully use it for navigation.
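Before moving on to navigation, here is the toy Python sketch of the approach loop promised above. The SurfaceSim class is a fake stand-in for the hardware, and none of these names exist in the real control software, which implements this loop internally; the sketch only illustrates the logic of extending the fine scanner while watching the PLL, withdrawing, and then taking a coarse step.

    import random

    # Illustrative sketch only (not the lab's actual code).

    PLL_THRESHOLD_HZ = 0.25    # contact declared when the PLL frequency std exceeds this
    FINE_STEP_UM = 0.1         # 100 nm fine-scanner increments
    FINE_RANGE_UM = 5.0        # fine-scanner extension attempted per round
    COARSE_STEP_UM = 4.0       # requested coarse step (real positioners overshoot)

    class SurfaceSim:
        """Toy stand-in for the hardware: tip starts some distance above a surface."""
        def __init__(self, gap_um=37.0):
            self.gap_um = gap_um
        def extend_fine(self, dz_um):
            self.gap_um -= dz_um
        def withdraw_fine(self, dz_um):
            self.gap_um += dz_um
        def coarse_step(self, dz_um):
            # coarse positioners overshoot the requested step by an unpredictable amount
            self.gap_um -= dz_um * (1.0 + 0.2 * random.random())
        def pll_freq_std(self):
            # PLL frequency noise jumps when the tuning fork touches the surface
            return 1.0 if self.gap_um <= 0.0 else 0.05

    def approach(sim, max_rounds=100):
        for rnd in range(max_rounds):
            steps = int(FINE_RANGE_UM / FINE_STEP_UM)
            for _ in range(steps):
                sim.extend_fine(FINE_STEP_UM)
                if sim.pll_freq_std() > PLL_THRESHOLD_HZ:
                    print(f"Contact after {rnd} coarse rounds")
                    return True
            sim.withdraw_fine(steps * FINE_STEP_UM)   # retract fully before stepping
            sim.coarse_step(COARSE_STEP_UM)
        return False

    approach(SurfaceSim())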

Thermal gradients can be handled with simple gradient ascent; this will almost always lead you to the region of your circuit with the greatest resistance, which is typically an exfoliated heterostructure if that is what you’re studying. You will likely need a helium atmosphere inside the microscope to pursue thermal navigation. A pressure of a few mbar is plenty, but be advised that this may require operating at elevated temperatures. Helium-4 has plenty of vapor pressure at 1.5 K, but this is not really an option at 300 mK, and many 300 mK systems struggle with stable operation at any temperature between 300 mK and 4 K.

You should run an AC current through your device at finite frequency. Higher frequencies will generally improve the sensitivity of the nanoSQUID, but if the heterostructure has finite resistance the impedance of the device might prevent operation at very high frequency. It’s worth mentioning that the ‘circuit’ you have made has some extremely nonstandard ‘circuit elements’ in it, because it relies on heat conduction and convection from the device through the helium atmosphere to the nanoSQUID. If you don’t know how to compute the frequency-dependent impedance of heat flow through gaseous helium at 1.5 K, that’s fine, because I don’t either! I only mention it because just because your electrical circuit isn’t encountering large phase shifts and high impedance doesn’t mean the thermal signal is getting to your nanoSQUID without significant impedance. For these reasons I recommend operating at a relatively low frequency, as long as the noise floor is tolerable; in practice this generally means a few kHz. I’d also like to point out that if you are applying a current to your device at a frequency ω, the dominant component of the thermal signal detected by the nanoSQUID will generally be at 2ω, because dissipation is symmetric in current direction (see the short identity spelled out below).

Next you will perform your first thermal scan, 10-20 µm above the surface near your first touchdown point. If you have performed a thermal characterization, pick a region with high thermal sensitivity, but generally this is unnecessary; I usually simply attempt to thermally navigate with a point that has good magnetic sensitivity. Bias the SQUID to a region with good sensitivity and check the transfer function. Set the second oscillator on the Zurich to a frequency that is low noise. Connect the second output of the Zurich to the trigger input of one of the transport lock-ins and trigger the transport lock-in off of it.
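The factor of two is just Joule heating. For a sinusoidal drive through a resistance R,

    I(t) = I_0 \sin(\omega t), \qquad
    P(t) = I(t)^2 R = I_0^2 R \sin^2(\omega t) = \frac{I_0^2 R}{2}\left[1 - \cos(2\omega t)\right],

so the oscillating part of the dissipation, and hence of the local temperature, sits at 2ω; that is the frequency at which the thermal channel should be demodulated.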

Scarlet Royal is one of the major red table grape varieties in California.

For RNA-seq analysis, a total of 8 RNA-seq libraries were generated, comprising four biological replicates from each of the two vineyards. The libraries were constructed as previously described using the NEBNext Ultra II RNA Library Prep Kit for Illumina. Subsequently, these libraries were pooled in equal amounts and subjected to paired-end 150-base sequencing on two lanes of the NovaSeq 6000 platform at Novogene Co., Ltd. Illumina sequencing of the multiplexed RNA-seq libraries resulted in 8 FASTQ files, and the data processing followed the methods described in our previous work. In summary, read quality was assessed with FastQC before and after trimming with Trimmomatic v0.39. Subsequently, the trimmed reads were quantified using Salmon in non-alignment-based mode to estimate transcript abundance. The transcripts were mapped to the Vitis transcriptome file “Vvinifera_457_v2.1.transcript_primaryTranscriptOnly.fa” extracted from the Phytozome database, resulting in a mapping rate higher than 61.9%. To identify differentially expressed genes between V7 and V9 at the sampling point, we utilized the DESeq2 and EdgeR packages with default parameters. Only the DEGs identified by both the DESeq2 and EdgeR pipelines, with a threshold of FDR-adjusted P < 0.05 and log2 fold change > 1.5 or < –1.5, were retained as differentially expressed. For the analysis of Gene Ontology terms, we employed the g:Profiler website with the g:SCS multiple testing correction method, using a significance threshold of 0.05.
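As an illustration of the consensus DEG filter described above, the following is a minimal Python/pandas sketch only; the file names and column labels (gene_id, log2FC, padj/FDR) are assumptions about hypothetical result exports, not the analysis code used in this study.

    import pandas as pd

    # Keep genes that pass FDR < 0.05 and |log2 fold change| > 1.5 in BOTH
    # the DESeq2 and EdgeR result tables.

    FDR_CUTOFF = 0.05
    LFC_CUTOFF = 1.5

    def significant(df, fdr_col, lfc_col):
        keep = (df[fdr_col] < FDR_CUTOFF) & (df[lfc_col].abs() > LFC_CUTOFF)
        return set(df.loc[keep, "gene_id"])

    deseq2 = pd.read_csv("deseq2_results.csv")   # hypothetical export: gene_id, log2FC, padj
    edger  = pd.read_csv("edger_results.csv")    # hypothetical export: gene_id, log2FC, FDR

    consensus = significant(deseq2, "padj", "log2FC") & significant(edger, "FDR", "log2FC")
    print(f"{len(consensus)} consensus DEGs")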

Finally, to visualize the consensus result, the Web-based tool Venny was used. Co-expression network modules were constructed using the variance-stabilizing transformation values and the R package WGCNA. Before analyzing the data, lowly expressed genes among all sample types were removed by DESeq2, and the remaining non-lowly expressed genes of the 8 samples were used in module construction. The co-expression modules were obtained using the default settings, except that the soft threshold power was set to 9, TOMType was set to signed, minModuleSize was set to 30, mergeCutHeight was set to 0.25, and the scale-free topology fit index was set to 0.8. A module eigengene value, which summarizes the expression profile of a given module as its first principal component, was calculated and used to evaluate the association of modules with the berry biochemical characteristics of V7-berries and V9-berries at the fifth sampling time. The resultant final WGCNA matrix had 42 modules comprising 17,553 genes. The module membership and gene significance values were calculated, and subsequently the intramodular hub genes were identified.

Despite the premium fruit quality of the variety, in some cases an undesirable taste was observed under certain unknown circumstances. To gain comprehensive insights into the development of the occasional berry astringency of Scarlet Royal and to understand the underlying mechanism of this phenomenon, berries were investigated at two contrasting vineyards, both following the same commercial cultural practices. However, leaf petiole analysis of grapes from both vineyards showed considerable differences in nutrient levels, especially in the primary macronutrients.

During both seasons, the amount of nitrogen in the form of nitrate in LP-V9 was roughly 2 to 3 times higher than normal levels, in contrast to its counterpart in LP-V7, which deviated only slightly from normal N levels. Similarly, LP-V9 contained higher percentages of phosphorus and potassium compared to LP-V7. Conversely, the amounts of the secondary macronutrients calcium and magnesium in LP-V7 were within the normal range but greater than in LP-V9, which showed Mg deficiency in the first year only. Regarding the micronutrients, their levels were mainly within or around the normal range at both vineyards and during both seasons, with some differences. For example, zinc was slightly higher in LP-V9, especially in the first year. On the contrary, manganese and chlorine were roughly 2 times higher in V7. Similarly, soil analysis showed higher levels of nitrogen, potassium, and magnesium; however, no significant differences were observed in the other soil macro- and micronutrients.

During the two seasons of the study, we determined the total marketable yield and the number of clusters in both vineyards. Our data revealed a higher yield in V7 compared to V9 in both 2016 and 2017. The lower yield in V9 can likely be attributed to the smaller number of clusters in V9 compared to V7 during 2016 and 2017. To monitor the changes in the biochemical composition of Scarlet Royal berries, V7 and V9 berries were periodically sampled at six time points from veraison until the end of the season. The obtained data showed that berry polyphenols exhibited discernible patterns in both vineyards, most notably during the ripening stage. Of special interest were the tannin compounds, which widely affect organoleptic properties such as astringency and bitterness. Our data showed that berries from both the V7 and V9 vineyards maintained low levels of tannin from veraison up to the middle of August. Subsequently, a significant gradual increase in tannin took place. However, only V9-berries showed consistent accumulation of tannin over the two studied seasons; in V7-berries, the significant induction occurred only during the first season.

It is worth noting that the levels of tannin were lower in both vineyards during the second year compared to the first season. Nevertheless, they were more pronounced in V9-berries compared to V7-berries, with roughly 2- to 4.5-fold increases by the end of the harvesting time during the two seasons, respectively. The patterns of catechin and quercetin glycosides were inconsistent during both seasons, particularly within V7-berries. During the first year, for instance, the levels of catechin were similar in both vineyards, showing a dramatic increase only by the end of the season. In contrast, during the second year, such induction of catechin was exclusively restricted to V9-berries, starting from time S3. For quercetin glycosides, V7-berries exhibited significantly higher amounts at early stages during both seasons relative to V9-berries. However, subsequent amounts were comparable in both vineyards during the first season only, but not in the second one, where V7-berries showed a significant drop at the last sample, S6. Interestingly, despite this inconsistency, the levels of quercetin glycosides were roughly equal in the last V9-berry samples of both seasons. For total anthocyanins (TAC), the levels in early samples were comparable in both vineyards and seasons. Afterwards, their pattern started to vary between V7 and V9 within the same season, as well as from the first season to the second, as the nutrient amounts fluctuated as well. Nevertheless, TAC accumulation was positively correlated with the progress of ripening in V7-berries, but not in V9-berries. To further confirm our data, we measured these phenolic compounds for a third time in mid-September of the following year. Overall, the results showed that the patterns of tannins and TAC were reciprocally inverted between V7-berries and V9-berries as ripening advanced. In addition, both catechin and quercetin glycosides most likely followed the pattern of tannins despite their seasonal fluctuations.

To further distinguish V7-berries and V9-berries and assess their astringency development, a panel test was performed using samples at three commercial harvest times. A group of 12 nontechnical panelists scored berry astringency on a scale from 1 to 7, where 1 is extremely low and 7 is extremely high. The panelists were trained using samples from contrasting standard varieties, including Flame Seedless and Crimson as non-astringent and Vintage Red, known for its astringent taste. The results showed that V7-berries exhibited a lower intensity of astringency compared to V9-berries. As ripening proceeded, astringency levels increased in V9-berries but decreased in V7-berries. Moreover, we collected samples from clusters with varying astringent taste and measured their tannin content. We were able to determine that the threshold level of tannins that causes the Scarlet Royal astringency taste is around 400 mg/L. Taking the levels of polyphenol compounds and the taste panel data together, it is evident that astringency development is positively associated with tannin accumulation throughout the ripening process of V9-berries. Nevertheless, organoleptic analysis revealed a significant difference in the berries of the two vineyards, particularly in terms of total soluble solids and titratable acidity. Notably, V9 berries exhibited higher titratable acidity and lower total soluble solids, especially in the later stages.
It is also worth noting that the weight of V9 berries was higher than that of V7.

To better understand the molecular events associated with the induction of tannins and astringency upon ripening, the berry transcriptome profile was analyzed in both V7-berries and V9-berries at the late commercial harvest date. Following the quality and quantity check, extracted RNA from quadruplicate samples was deeply sequenced. Of the 19.7 to 24.4 million high-quality clean reads per replicate, 61.9% to 66.1% were mapped against the V. vinifera transcriptome. Hierarchical clustering of the RNA-seq data showed explicit changes in the berry transcriptome profile between V7-berries and V9-berries. Principal Component Analysis showed high consistency among biological replicates. Samples were mainly separated along the first component, which was responsible for 97% of the variance and was clearly associated with the site of cultivation (V7 versus V9). In contrast, the second component was trivial, accounting for only 1% of the variance, and was probably attributable to experimental error. Such results were expected, as the berry samples came from the same cultivar, Scarlet Royal, and the only difference between them was the vineyard location.
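The sample-level ordination described above can be illustrated with a minimal Python sketch of PCA on a samples-by-genes matrix of variance-stabilized values; the toy matrix below stands in for the real 8-sample VST table, and scikit-learn is used here purely for illustration rather than being part of the original analysis.

    import numpy as np
    from sklearn.decomposition import PCA

    # Toy data: two groups of four replicates with a systematic offset,
    # mimicking the vineyard separation along PC1 reported above.
    rng = np.random.default_rng(1)
    vst = np.vstack([
        rng.normal(loc=0.0, scale=1.0, size=(4, 500)),   # four V7 replicates (toy)
        rng.normal(loc=3.0, scale=1.0, size=(4, 500)),   # four V9 replicates (toy)
    ])

    pca = PCA(n_components=2)
    scores = pca.fit_transform(vst)                      # 8 samples x 2 components
    pc1, pc2 = pca.explained_variance_ratio_ * 100
    print(f"PC1 explains {pc1:.0f}% of variance, PC2 {pc2:.0f}%")
    # Samples separate along PC1 by vineyard, mirroring the reported result.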

To identify the differentially expressed genes in V7-berries and V9-berries at this specific time within the ripening window, the RNA-seq data were analyzed using two different Bioconductor packages, DESeq2 and EdgeR. Subsequently, only the DEGs with FDR < 0.05 and log2 fold change > 1.5 or < –1.5 generated by both pipelines were retained. The pairwise comparison between the berry transcriptomes resulted in 2134 DEGs, with 1514 up-regulated and 620 down-regulated. The data demonstrated the impact of the cultivation site on the transcriptional reprogramming of a large number of genes that ultimately affect berry quality. This was most apparent at the V9 vineyard, where roughly 2.5-fold more berry transcripts were up-regulated compared to V7.

Subsequently, the enrichment of Gene Ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways was analyzed among the up- and down-regulated DEGs using the Vitis vinifera Ensembl GeneID. Among the significantly enriched GO terms, the up-regulated transcripts in V9-berries exhibited high enrichment in the molecular function (MF) GO terms for quercetin 3-O-glucosyltransferase activity and quercetin 7-O-glucosyltransferase activity. Additionally, the DEGs induced in V9-berries were highly enriched in the biological process (BP) GO terms for the jasmonic acid signaling pathway and cellular response, L-phenylalanine metabolic process, L-phenylalanine biosynthetic process, and nitrogen compound metabolic process. Similarly, these DEGs were highly enriched in the KEGG pathways for the biosynthesis of secondary metabolites and phenylpropanoid biosynthesis. On the other hand, the down-regulated transcripts in V9-berries showed substantial enrichment in the MF GO terms for hormone binding, abscisic acid binding, and potassium ion transmembrane transporter activity. Correspondingly, the BP GO terms for hormone-mediated signaling pathway and response; auxin-activated signaling, cellular response, and homeostasis; abscisic acid-activated signaling, response, and cellular response; response to strigolactone; potassium ion transmembrane transport; and potassium ion transport, as well as the KEGG pathways for plant hormone signal transduction, brassinosteroid biosynthesis, and carotenoid biosynthesis, were highly enriched in the down-regulated genes of V9-berries. Overall, the transcriptome analysis pointed to substantial changes in transcript abundance that coordinate with, and reflect, the observed induction of tannins/astringency during the maturation and ripening of V9-berries compared to V7-berries.

To elucidate which fundamental processes were altered during tannin/astringency induction within berries, Weighted Gene Co-Expression Network Analysis (WGCNA) was applied to construct co-expression networks. Forty-two modules were identified based on pairwise correlations among the 17,553 non-lowly expressed genes. Subsequently, the biochemical data from both V7-berries and V9-berries were correlated to the WGCNA modules, and only 2 modules, M21 and M30, displayed substantial correlations with berry polyphenols, containing 5349 and 4559 genes, respectively. The M21 module was positively linked with TAC but negatively associated with tannins, catechin, and quercetin glycosides. On the contrary, the M30 module exhibited a positive correlation with tannins, catechin, and quercetin glycosides, but was negatively linked with TAC.
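As a conceptual illustration of the module-trait correlations reported above (this is not the R/WGCNA code used in the study), a module eigengene is the first principal component of that module's samples-by-genes expression submatrix, and its Pearson correlation with a berry trait such as tannin content gives the module-trait association. The sketch below uses toy data throughout.

    import numpy as np

    # Toy module of 120 genes measured in 8 samples, plus a toy trait vector.
    rng = np.random.default_rng(0)
    expr = rng.normal(size=(8, 120))          # 8 samples x 120 genes in one module
    tannin = rng.normal(size=8)               # e.g., tannin content per sample

    # Module eigengene = first principal component of the centered submatrix.
    centered = expr - expr.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigengene = u[:, 0] * s[0]                # PC1 scores, one value per sample

    # Module-trait association = Pearson correlation of eigengene with the trait.
    r = np.corrcoef(eigengene, tannin)[0, 1]
    print(f"module eigengene vs. trait: r = {r:.2f}")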