In the non-farm labor market the three employment concepts yield similar results

California has the largest and most complex agricultural labor market in the United States, reflecting seasonal employment demands, the predominance of immigrant workers and the significant role of labor contractors in matching workers and jobs. Whether measured in sales, production or acres, California agriculture expanded in the 1990s. Farm sales reached $27 billion in 2000, with about 77 million tons of crops produced on 8.8 million acres. More than half of these sales were in fruits and nuts, vegetables and melons, and horticultural specialties, such as flowers and nursery products. Rising yields meant that more tons of vegetables were produced from the same acreage, while acreage of fruits and nuts rose from 2 million acres in 1990 to 2.4 million acres in 2000, a 19% increase over the 1990s. Many of these fruit, vegetable and horticultural (FVH) commodities are labor intensive, with labor accounting for 15% to 35% of production costs. Most of the workers employed on FVH farms are immigrants from Mexico, and a significant percentage are believed to be unauthorized. In recent years, several proposals have aimed to reduce unauthorized worker employment in agriculture. In September 2001, Mexican President Vicente Fox called for a U.S.-Mexico labor migration agreement so that “there are no Mexicans who have not entered this country [U.S.] legally, and that those Mexicans who come into the country do so with proper documents. Regularization does not mean rewarding those who break the law. Regularization means that we give legal rights to people who are already contributing to this great nation.” President George Bush agreed: “When we find willing employer and willing employee, we ought to match the two. We ought to make it easier for people who want to employ somebody, who are looking for workers, to be able to hire people who want to work”.

The United States and Mexico appeared close to agreement on a program to legalize farm and other workers before September 11, 2001. However, after the war on terror was declared, the momentum for a new guest-worker program and the legalization of immigrants already in the country slowed. In summer 2003, there were several new proposals for a migration agreement with Mexico to legalize the status of currently unauthorized workers and allow some to earn immigrant status by working and paying taxes in the United States. There is little agreement, however, on what impacts such a program would have on California’s farm labor market. We used a unique database to examine farm employment trends in California agriculture. The data suggest that: about three individuals are employed for each year-round equivalent job, helping to explain low farm worker earnings; there was a shift in the 1990s from crop farmers hiring workers directly to farmers hiring via farm labor contractors (FLCs); and there is considerable potential to improve farm labor market efficiency by using a smaller total workforce, with each worker employed more hours and achieving higher earnings.

California employers who pay $100 or more in quarterly wages are required to obtain an unemployment insurance (UI) reporting number from the California Employment Development Department (EDD). The EDD then assigns each employer or reporting unit a four-digit Standard Industrial Classification (SIC) or, since 2001, a six-digit North American Industry Classification System (NAICS) code that reflects the employer’s major activity. Major activities are grouped in increasing levels of detail; for example, agriculture, forestry and fisheries are classified as a major industrial sector and, within this sector, SIC 01 is assigned to crops, 017 to fruits and nuts and 0172 to grapes. We defined “farm workers” as unique Social Security numbers (SSNs) reported by farm employers to the EDD, and then summed their California jobs and earnings. This enabled us to answer questions such as how many farm and non-farm jobs were associated with a particular SSN or individual in one year, and in which commodity or county a person had maximum earnings.

We adjusted the raw data before doing the analysis. Farm employers have reported their employees and earnings each quarter since 1978, when near-universal UI coverage was extended to agriculture. Although it is sometimes alleged that farm employers, especially FLCs, do not report all their workers or earnings, there is no evidence that underreporting of employees or earnings is more common in agriculture than in other industries that hire large numbers of seasonal workers, such as construction. We excluded from the analysis SSNs reported by 50 or more employers in one year. We also excluded wage records or jobs that had less than $1 in earnings, or that reported earnings of more than $75,000 in one quarter. These adjustments eliminated from the analysis 2,750 SSNs, 62,571 wage records or jobs and $803 million in earnings. These exclusions were about 0.25%, 2.7% and 6.1% of the totals, respectively, and are documented more fully in Khan et al. There is no single explanation for the outlier data we excluded. In some cases, several workers may share one SSN, while in others a SSN with “too many” jobs may represent data-entry errors. During the 1990s, the Social Security Administration cleaned up SSNs, including threatening to fine and reject tax payments from employers with too many mismatches between SSNs and the names associated with those SSNs, which should have reduced the number of SSNs reported by employers. We think the rising number of SSNs reflects more individuals employed in agriculture, not more noise in the data.

Agricultural employment can be measured in three major ways: at a point in time, as an average over time or by counting the total number of individuals employed over some period of time. If 100 workers are employed during each month and there is no worker turnover from month to month, then point-in-time, average and total employment are all 100. However, agricultural employment during the six summer months may be 150, versus 50 during the six winter months, meaning that point, average and total employment counts differ.
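The exclusion rules above are mechanical enough to sketch in code. Below is a minimal illustration in Python, assuming a hypothetical wage-record table with ssn, employer and quarterly earnings columns; it is not the actual EDD pipeline, and the toy values exist only to exercise the three rules.

```python
import pandas as pd

# Toy stand-in for the quarterly UI wage records (one row = one job report).
records = pd.DataFrame({
    "ssn":      ["A", "A", "B", "B", "C", "C"],
    "employer": ["e1", "e2", "e3", "e4", "e5", "e6"],
    "earnings": [5000.0, 0.5, 12000.0, 90000.0, 7000.0, 3000.0],
})

MAX_EMPLOYERS_PER_YEAR = 50       # SSNs reported by 50+ employers are dropped
MIN_QUARTERLY_EARNINGS = 1.0      # wage records under $1 are dropped
MAX_QUARTERLY_EARNINGS = 75000.0  # records over $75,000/quarter are dropped

# Rule 1: drop SSNs reported by too many employers in one year.
employers_per_ssn = records.groupby("ssn")["employer"].nunique()
valid_ssns = employers_per_ssn[employers_per_ssn < MAX_EMPLOYERS_PER_YEAR].index
clean = records[records["ssn"].isin(valid_ssns)]

# Rules 2 and 3: drop implausibly small or large quarterly earnings.
clean = clean[(clean["earnings"] >= MIN_QUARTERLY_EARNINGS)
              & (clean["earnings"] <= MAX_QUARTERLY_EARNINGS)]
print(clean)
```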

We began with all SSNs reported by agricultural employers, summed the jobs and earnings of these SSNs within each SIC code, and assigned each SSN to the four-digit SIC code in which the worker had the highest earnings. This means that a SSN reported by a grape employer as well as by an FLC would be considered a grape worker if his highest-earning job was in grapes. Farm workers had a total of 1.5 million farm jobs in 1991, 1.7 million in 1996 and 1.8 million in 2001. One-quarter also had at least one non-farm job: about 407,000 workers were both farm and non-farm workers in 1991, 453,000 in 1996 and 697,000 in 2001. The total California earnings of persons employed in agriculture were $11.1 billion in 1991, $12.0 billion in 1996 and $15.8 billion in 2001. The share of total earnings for farm workers from agricultural employers was 77% in 1991, 77% in 1996 and 71% in 2001, indicating that in the late 1990s, farm workers tended to increase their supplemental earnings via non-agricultural jobs. Average earnings per job were highest in livestock, $13,800 per job in 2001. There was little difference between average earnings per job in agricultural services and crops. Average earnings per job were higher for the non-farm jobs of agriculture workers than for agriculture jobs. In 2001, California’s farm workers held 2.5 million jobs, including 1.8 million jobs with agricultural employers. These agricultural jobs included 630,000 in crops, 69,000 in livestock and 1.1 million in agricultural services. The agricultural services sector includes both farm and non-farm activities, such as veterinary and lawn and garden services; FLCs accounted for 70% of the employees reported by farm agricultural services. Fruits and nuts accounted for 53% of the crop jobs, dairy for 39% of the livestock jobs and FLCs for 58% of the agricultural services jobs. The major change between 1991 and 2001 was the drop of 54,000 jobs in crop production and the increase of 313,000 jobs in agricultural services. We placed SSNs in the detailed commodity or SIC code that reflected the maximum reported earnings for the worker, and considered workers to be primarily employed in the SIC with maximum earnings. In 2001, there were 877,000 primary farm workers, including 322,000 reported by crop employers, 50,000 reported by livestock employers and 504,000 reported by agricultural service employers. Fruit and nut employers accounted for 47% of the crop-reported workers, dairy for 40% of the livestock-reported workers and FLCs for 44% of the agricultural services–reported workers. The major change between 1991 and 2001 was the increase in the number of SSNs with their primary job in agriculture, from 758,000 to 877,000. There was a slight drop in the number of workers reported by crop employers, a slight increase in livestock workers and a sharp 135,000 increase in agricultural services workers, anchored by a 59,000 increase in workers reported by FLCs in 2001. Most farm workers had only one job. In 2001, 53% of the SSNs were reported by only one employer to the EDD, 26% were reported twice, 12% three times, 5% four times and 4% five or more times.
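The assignment of each SSN to its highest-earning SIC code can likewise be sketched. The snippet below assumes a hypothetical jobs table; the SIC codes shown (0172 grapes, 0161 vegetables and melons, 0761 FLCs) follow the grouping described above, and the earnings are invented for illustration.

```python
import pandas as pd

# Toy job records: one row per (ssn, sic) job, with annual earnings.
jobs = pd.DataFrame({
    "ssn":      ["A", "A", "B", "B", "B"],
    "sic":      ["0172", "0761", "0172", "0161", "0761"],
    "earnings": [9000.0, 4000.0, 2000.0, 6500.0, 1200.0],
})

# Sum earnings per worker within each SIC, then keep the SIC with the maximum.
by_sic = jobs.groupby(["ssn", "sic"], as_index=False)["earnings"].sum()
primary = by_sic.loc[by_sic.groupby("ssn")["earnings"].idxmax()]
print(primary)  # worker A is a grape worker, worker B a vegetable worker
```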

During the 1990s, about 65% of farm workers were reported by one agricultural employer only, 17% to 21% by two agricultural employers, 5% by at least two agricultural employers and one non-farm employer, and 9% to 12% by one farm and one non-farm employer. In the three-digit SIC codes representing more detailed commodity sectors, 60% to 83% of the employees had only one job. For example, in 2001, 79% of the employees reported by dairy farms had one dairy farm job, while 7% also had a second agricultural job; 3% had a dairy job, a second farm job and a non-farm job; and 11% had a non-farm job in addition to the dairy job. About two-thirds of the employees of FLCs and farm management companies had only jobs with one such employer; 22% had another farm job; 6% had an FLC job, another farm job and a non-farm job; and 6% had a non-farm job in addition to the FLC job. Even more detailed four-digit SIC codes showed the same pattern: the commodities or SICs most likely to offer year-round jobs, such as dairies and mushrooms, had 70% to 80% of employees working only in that commodity, while commodities or SICs offering more seasonal jobs, such as deciduous tree fruits and FLCs, had 53% to 63% of employees working only in that commodity. At the four-digit SIC-code level, the five largest SICs accounted for about 45% of the agricultural wages reported.

Agricultural employers paid a total of $11 billion in wages in 2001, an average of $10,200 per worker. Earnings were highest for the 64,000 workers primarily employed in livestock; they averaged $14,800, followed by those primarily employed by crop employers and those employed by agricultural farm services, custom harvesters and FLCs. There was considerable variation in earnings among workers in agricultural farm services: workers in soil preparation services averaged $21,100 in 2001, versus $12,700 for crop preparation services for market and $4,400 for FLC employees. The average earnings of primarily farm workers varied significantly, even within detailed four-digit SIC codes; in most cases, the standard deviation exceeded the mean wage. Median earnings were generally less than mean earnings, reflecting that higher-wage supervisors and farm managers pulled up the mean. If the workers in detailed commodities are ranked from lowest- to highest-paid, the lowest 25% of earners in an SIC category generally earned less than $4,000 a year. For example, among workers primarily employed in vegetables and melons in 2001, the first quartile or 25th percentile of annual earnings was $3,000. This reflects relatively few hours of work: if these workers earned the state’s minimum wage of $6.25 an hour in 2001, they worked 480 hours. The 25th percentile earnings cutoff was lowest for those employed primarily by FLCs, only $634, suggesting that FLC employees receiving the minimum wage worked 101 hours. The highest 25th percentile mark was in mushrooms, $9,491, which reflects 1,519 hours at minimum wage. The 75th percentile marks the highest earnings that a non-supervisory worker could normally expect to achieve; 75% of workers reported earning less than this amount and 25% earned more.
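The implied-hours arithmetic used above (annual earnings divided by the $6.25 minimum wage) is simple enough to verify directly; the short snippet below reproduces the 480-, 101- and 1,519-hour figures.

```python
MIN_WAGE_2001 = 6.25  # California minimum wage, $/hour, 2001

def implied_hours(annual_earnings: float, wage: float = MIN_WAGE_2001) -> float:
    """Hours of work implied if all earnings were at the given hourly wage."""
    return annual_earnings / wage

for label, cutoff in [("vegetables/melons 25th pct", 3000),
                      ("FLC 25th pct", 634),
                      ("mushrooms 25th pct", 9491)]:
    print(f"{label}: {implied_hours(cutoff):.0f} hours")
# -> 480, 101 and 1519 hours, matching the figures in the text
```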

Drainage water was directed through flexible piping into a large bin installed below ground level

Five hotspots were found in the chloroplast genome of Veroniceae, and two universal markers, trnH-psbA and matK, were identified. Highly variable regions were then selected as potential molecular markers for Fritillaria, including ycf1, which was also selected in this study. Sequences of the variable regions found in this study could be regarded as potential molecular markers for species identification and evolutionary studies, and such regions have been shown to be valuable for studies in other groups. Oligonucleotide repeats play an important role in generating indels, inversions and substitutions. Repeat sequences in the chloroplast genome can provide valuable information for understanding not only the sequence divergence but also the evolutionary history of the plants. We detected five types of large repeats in the seven Pulsatilla cp genomes. Among them, the most common repeat types are forward and palindromic repeats, followed by reverse repeats; only a few complement repeats were found in the Pulsatilla cp genomes. Most of the repeats were short, ranging from 30–49 bp. We also identified multiple microsatellite repeats, also known as simple sequence repeats (SSRs) or short tandem repeats. Due to their codominant inheritance and high variability, SSRs are robust and effective markers for species identification and population genetic analyses. Most of the mononucleotide repeats were composed of A/T. The other microsatellite types were also dominated by AT/TA, with very little G/C. In this study, plentiful microsatellite loci were found through the comparative analysis of Pulsatilla cp genome sequences. In total, we detected six types of microsatellite based on the comparison of the seven Pulsatilla cp genomes. Each Pulsatilla cp genome had 69–87 microsatellites. The lengths of the repeat motifs of these microsatellites ranged from 10 to 21 bp.
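As an aside, the kind of microsatellite scan described here can be approximated with a short script. The sketch below is illustrative only, assuming a 10 bp mononucleotide threshold and a 10 bp dinucleotide threshold in line with the 10–21 bp motif lengths reported; dedicated tools such as MISA are what studies of this kind typically use.

```python
import re

def find_ssrs(seq: str):
    """Locate simple sequence repeats in a DNA string (illustrative thresholds)."""
    patterns = {
        "mono": r"(A{10,}|C{10,}|G{10,}|T{10,})",                   # poly-A/T runs >= 10 bp
        "di":   r"((?:AT){5,}|(?:TA){5,}|(?:AG){5,}|(?:GA){5,})",   # >= 10 bp total
    }
    hits = []
    for kind, pat in patterns.items():
        for m in re.finditer(pat, seq):
            hits.append((kind, m.start(), m.group()))
    return hits

# Toy sequence containing one poly-A run and one AT dinucleotide repeat.
print(find_ssrs("GGC" + "A" * 12 + "CCGT" + "AT" * 6 + "GG"))
```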

Among the four structural regions in the cp genomes, most of the repeats and microsatellites were distributed in the LSC, followed by the SSC, and fewest in the IRa/IRb, a distribution also reported in other studies of angiosperms. These SSRs and repeat sequences are uncorrelated with genome size and the phylogenetic position of the species, but they provide important information for further studies of phylogenetic reconstruction and infra- and inter-specific genetic diversity.

Chloroplast genomes have been widely used and have made significant contributions to phylogeny reconstruction at different taxonomic levels in plants. To better clarify the evolutionary relationships within Pulsatilla, we used each data set to construct phylogenetic trees using maximum-likelihood (ML) analytical methods. We also constructed phylogenetic trees with the eight highly variable regions using the ML and maximum-parsimony (MP) analytical methods. All tree topologies were identical. Therefore, we present the phylogenetic results using the ML tree, with the support values from the MP analyses recorded at the corresponding nodes. The phylogenetic trees based on all data sets from the complete plastid genome sequences yielded the same topology. The phylogenetic tree based on the chloroplast genome differed from that of the DNA barcode combination rbcL+matK+trnH-psbA, but with higher support values. The phylogenetic trees based on data from complete plastid genome sequences showed that the species of Pulsatilla formed a monophyletic group, which in turn includes two strongly supported clades. One clade comprised P. alpina and P. occidentalis, members of subg. Preonanthus. The other comprised two subclades: members of P. hirsutissima, P. ludoviciana, P. multifida, P. patens and P. vernalis, and species of P. chinensis, P. dahurica, P. grandis and P. pratensis. All the species of the two subclades are members of subg. Pulsatilla. These results were congruent with our former results based on universal markers.

In phylogenetic analyses, compared to the combination of barcodes, the full chloroplast genome sequence data formed distinct clades with high bootstrap support, improving the inadequate resolution of the barcode combination. The LSC regions and coding regions yielded the same topologies with robust support. However, sequencing of genomic DNA is still expensive, so it is necessary to utilize variation within chloroplast regions for rapid species-specific assays. Here we found that phylogenetic inference based on highly variable regions yielded a tree with the same topology as the one recovered from complete chloroplast genome sequences, demonstrating the high utility of hotspots of variability for species identification and phylogenetic analysis. More samples and laboratory work are needed in the future to increase the number of these variable regions available for study.

High-frequency irrigation systems involve meticulous planning and complex designs, so that timely and accurate additions of water and fertilizer can result in sustainable irrigation. At the same time, these production systems are becoming more intensive, in an effort to optimise the return on expensive and scarce resources such as water and nutrients. Advanced fertigation systems combine drip irrigation and fertilizer application to deliver water and nutrients directly to the roots of crops, with the aim of synchronising the applications with crop demands, and maintaining the desired concentration and distribution of ions and water in the soil. Hence a clear understanding of water dynamics in the soil is important for the design, operation, and management of irrigation and fertigation under drip irrigation. However, there is a need to evaluate the performance of these systems, because considerable localised leaching can occur near the driplines, even under deficit irrigation conditions.

The loss of nutrients, particularly nitrogen, from irrigation systems can be expensive and pose a serious threat to receiving water bodies. Citrus is one of the important horticultural crops grown under advanced fertigation systems in Australia. Fertigation delivers nutrients in a soluble form with irrigation water directly into the root zone, thus providing ideal conditions for rapid uptake of water and nutrients. Scholberg et al. demonstrated that more frequent applications of a dilute N solution to citrus seedlings doubled nitrogen uptake efficiency compared with less frequent applications of a more concentrated nutrient solution. Delivery of N through fertigation reduces N losses in the soil-plant system by ammonia volatilisation and nitrate leaching. However, poor irrigation management, i.e., application of water in excess of crop requirements plus the storage capacity of the soil within the rooting depth, can contribute to leaching of water and nutrients below the root zone. Therefore, optimal irrigation scheduling is important to maximise the uptake efficiencies of water and nutrients. Most of the citrus production along the Murray River corridor is on sandy soils, which are highly vulnerable to rapid leaching of water and nutrients. Nitrogen is the key limiting nutrient and is therefore a main component of fertigation. The increasing use of nitrogenous fertilizers, and their subsequent leaching as nitrate from the root zone of cropping systems, is recognised as a potential source of groundwater contamination, because the harvested crop seldom takes up more than 25–70% of the total applied fertilizer. Several researchers have reported substantial leaching of applied N under citrus cultivation in field conditions. Similarly, in lysimeter experiments, Boaretto et al. showed 36% recovery of applied nitrogen by orange trees, while Jiang and Xia reported N leaching of 70% of the initial N value, and found denitrification and leaching to be the main processes for the loss of N. These studies suggest that knowledge of the nitrogen balance in cropping systems is essential for designing and managing drip irrigation systems and achieving high efficiency of N fertilizer use, thereby limiting the export of this nutrient as a pollutant to downstream water systems. Quantifying water and nitrogen losses below the root zone is highly challenging due to uncertainties associated with estimating drainage fluxes and solute concentrations in the leachate, even under well-controlled experimental conditions. Moreover, direct field measurement of the simultaneous migration of water and nitrogen under drip irrigation is laborious, time-consuming and expensive. Hence simulation models have become valuable research tools for studying the complex and interactive processes of water and solute transport through the soil profile, as well as the effects of management practices on crop yields and on the environment. In fact, models have proved to be particularly useful for describing and predicting transport processes, simulating conditions which are economically or technically impossible to study in field experiments. Several models have been developed to simulate flow and transport processes, nutrient uptake and biological transformations of nutrients in the soil.

HYDRUS 2D/3D has been used extensively for evaluating the effects of soil hydraulic properties, soil layering, dripper discharge rates, irrigation frequency and quality, and timing of nutrient applications on wetting patterns and solute distribution, because it has the capability to analyse water flow and nutrient transport in multiple spatial dimensions. In the absence of experimental data, multidimensional models solving water flow and nutrient transport equations can be used to evaluate the multi-dimensional aspects of nitrate movement under fertigation. However, earlier simulation studies have reported contradictory results on nitrate distribution in soils. For example, Cote et al. reported that nitrate application at the beginning of an irrigation cycle reduced the risk of leaching compared to fertigation at the end of the irrigation cycle. On the other hand, Hanson et al. reported that fertigation at the end of an irrigation cycle resulted in a higher nitrogen use efficiency compared to fertigation at the beginning or middle of an irrigation cycle. These studies clearly outlined the importance of numerical modelling in the design and management of irrigation and fertigation systems, especially when there is a lack of experimental data on nutrient transport in soils. However, there is still a need to verify the fate of nitrate in soils with horticultural crops and modern irrigation systems. Therefore, a lysimeter was established to observe water movement and drainage under drip-irrigated navel orange, and to calibrate the HYDRUS 2D/3D model against the collected experimental data. The model was then used, in the absence of experimental data on nitrate, to develop various modelling scenarios to assess the fate of nitrate for different irrigation and fertigation schemes.

The study was conducted on a weighing lysimeter assembled and installed at the Loxton Research Centre of the South Australian Research and Development Institute. The lysimeter consisted of a PVC tank located on 1.2 m × 1.2 m pallet scales fitted with 4 × 1 tonne load-cells, connected to a computerised logging system which logged readings hourly. A specially designed drainage system placed at the bottom of the lysimeter consisted of radially running drainage pipes connected to a pair of parallel pipes, which facilitated a rapid exit of drainage water from the lysimeter. These pipes were covered in a drainage sock and buried in a 25-cm layer of coarse washed river sand at the base of the lysimeter, which ensured easy flushing of water through the drainage pipe. A layer of geo-textile material was placed over the top of the sand layer to prevent roots growing down into it, as this layer was intended to be only a drainage layer. A healthy young citrus tree was excavated from an orchard at the Loxton Research Centre and transplanted into the lysimeter. A soil profile approximately 85 cm deep was transferred to the tank with the tree and saturated to remove air pockets and to facilitate settling. The final soil surface was around 10 cm below the rim of the tank. Soil samples were collected from 0 to 20, 20 to 40, 40 to 60, 60 to 85, and 85 to 110 cm depths to measure bulk density and to carry out particle size analysis. Two months after transplanting, the lysimeter was installed amongst existing trees in the orchard. Measurements were initiated after about six months, in order to allow the plant to adjust to the lysimeter conditions.
The lysimeter was equipped with Sentek® EnviroSCAN® logging capacitance soil water sensors installed adjacent to the drip line at depths of 10, 20, 40, 60, and 80 cm to measure changes in the volumetric soil water content. The experimental site was approximately 240 m from an established weather station, which measured air temperature, relative humidity, wind speed, rainfall, and net radiation. Irrigation was applied using three pressure-compensated emitters with a discharge rate of 4 L h−1. Emitters were located on a circle 25 cm away from the tree trunk, at equal distances from each other. The irrigation schedule was based on the average reference evapotranspiration (ETo) during the last 10 years at the site, multiplied by the crop coefficient (Kc) taken from Sluggett.
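The scheduling rule (10-year average ETo multiplied by a crop coefficient) translates directly into an emitter run time. The sketch below illustrates the arithmetic; the ETo value, Kc and irrigated area per tree are invented placeholders, while the three 4 L h−1 emitters come from the setup described above.

```python
# Minimal sketch of the irrigation-scheduling arithmetic. ETO_MM_DAY, KC and
# AREA_M2 are assumed values for illustration; the study took Kc from Sluggett
# and ETo from the 10-year site average.

ETO_MM_DAY  = 5.0   # assumed long-term average reference ET for the day, mm
KC          = 0.7   # assumed crop coefficient for young citrus
AREA_M2     = 2.0   # assumed irrigated (wetted/canopy) area per tree, m^2
N_EMITTERS  = 3     # emitters per tree (from the lysimeter setup)
EMITTER_LPH = 4.0   # discharge rate per emitter, L/h

etc_mm = ETO_MM_DAY * KC            # crop evapotranspiration, mm/day
volume_l = etc_mm * AREA_M2         # 1 mm over 1 m^2 equals 1 L
runtime_h = volume_l / (N_EMITTERS * EMITTER_LPH)
print(f"ETc = {etc_mm:.1f} mm -> {volume_l:.1f} L -> run {runtime_h:.2f} h")
```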

Decreased sleep efficiency with increased age is a commonly reported association

Delayed sleep onsets may be due to a phase-delay of the circadian pacemaker controlling sleep onset, an acute effect of evening light on alertness, or both. Additional studies designed to measure circadian phase physiologically are required to address this. In either case, the results support the hypothesis that exposure to artificial light after sunset can delay sleep onset and reduce sleep duration. Such effects can be expected to increase as access to electricity on Tanna Island expands. Historical writings have been taken to suggest that in pre-industrial western European populations, nocturnal sleep habitually occurred in two bouts, so-called ‘first and second sleep’, separated by an hour or more of midnight waking. Segmented patterns suggestive of ‘first and second sleep’ have been observed in a small-scale agricultural society in Madagascar. In the present study, 14% of the 519 recorded nights exhibited a bout of nocturnal waking that was sufficient for the state detection algorithm to score two separate bouts of sleep. However, this pattern does not appear to fit the concept of ‘first and second sleep’ as a reliable sleep phenotype. About 50% of the subjects showed evidence of split sleep, and among those that did, the pattern was sporadic, with split sleep typically evident on only one of up to eight nights of recording. Splits also occurred at variable times of night. Nocturnal sleep in the indigenous residents of Tanna Island is therefore best described as monophasic, with occasional opportunistic daytime naps. An increase in the number of sleepers sharing the same sleep surface or room can reduce sleep efficiency or duration. The number of co-sleepers was slightly greater in the non-electric communities, but sleep duration and efficiency were lower in the electric communities. Also, there was no association between the number of co-sleepers and sleep duration or efficiency in either community.

In addition, residents of Tanna commonly sleep on hard surfaces, which would not transfer movement between adjacent co-sleepers. For these reasons, it is unlikely that any differences in the number of co-sleepers would account for differences in sleep duration and efficiency between communities. Although the range of ages was limited in the present sample, a negative relationship between age and sleep efficiency did emerge, but only in the non-electric communities. In communities with an electric grid, the lack of a relationship indicated that young adults also exhibited lower sleep efficiency, something that was not apparent in the young adults of the non-electric communities. A non-electric Haitian population also showed lower sleep efficiency in younger adults, suggesting lifestyle factors that mask physiological age differences, such as increased childcare responsibilities, nocturnal household duties, or engagement in social activities, all of which may be more easily facilitated in Tanna’s electric communities. On Tanna Island, males are often up late drinking kava, which is an important custom in Vanuatu, as the nakamal, where kava is consumed, is an important gathering place for older men to pass along knowledge and advice to young males in the village. Kava is mildly sedating, but males and non-breastfeeding females, pooled separately across communities, did not exhibit differences in sleep duration or efficiency, suggesting a minimal effect of kava consumption on sleep. We do not have sufficient information to separate and compare sleep on nights with and without kava consumption, and any effects might not be detectable by actigraphy. A limitation of this study is that data were collected only in April and May, during the transition from summer to winter, when daylength averages ~11.5 h. Seasonal variation in sleep timing has been reported in traditional hunter-gatherer societies, with daily wake-up time in one study population tracking seasonal changes in the time of the daily minimum of environmental temperature more closely than changes in the time of sunrise. In the present study, temperature recorded in representative sleeping huts showed a daily minimum that occurred on average 26 min after sunrise, in both the coastal and the inland villages. Wake onsets on average were closer to sunrise than to the ambient temperature minimum in both groups. Ambient temperature may be less significant for sleep timing on Tanna Island because nights are milder, the daily temperature range is modest, and the transition from decreasing to increasing temperature is gradual.

Also, temperature is mild throughout the year and seasonal variation is modest. In this respect, the natural sleep environment on Tanna Island may be more similar to modern built sleep environments, in which temperature changes from day to night are minimized. If ambient temperature plays a role in sleep timing, then we would not expect to see a large effect of seasonal temperature changes on sleep parameters in residents of Tanna Island. The results of this study indicate that sleep measured by actigraphy in the small-scale traditional society on Tanna Island can be differentiated from sleep in western industrialized samples by relatively long duration and low efficiency. Availability of on-demand electric light appears to have a detectable effect on nocturnal sleep onset and duration, but this effect is likely mitigated by exposure to natural light throughout the day. Actigraphy studies of indigenous Ni-Vanuatu living in industrialized population centers elsewhere in Vanuatu may provide further insight into how lifestyle and industrialization shape sleep.

Decades of both theoretical and empirical research based on the polygyny threshold model have suggested that polygyny should be more common and more pronounced in populations in which males differ substantially in resource control. In humans, this will be in socio-cultural contexts where wealth is held predominantly by men, and where there is high inequality in its distribution. Historical and cross-cultural records, however, suggest that polygyny became less common as relatively egalitarian horticultural production systems transitioned into agricultural production systems, in spite of the fact that agriculture is characterized by both a greater importance of material wealth in the production process and greater levels of material wealth inequality than horticulture. This is the polygyny paradox. Existing hypotheses for the rise of monogamy with historic agricultural populations invoke the increasing importance of rival material wealth among agriculturalists, inheritance rules in conjunction with paternity certainty, male power relations, declines in female contributions to production, pathogen risk and punishment, and cultural group selection via the imposition of norms.

Since human behavioural variation is often determined by many underlying factors, there are likely to be complementary effects among the potential causes identified in these hypotheses. Specifically, there should be coevolutionary interactions between the individual-level, economic- and fitness-based explanations for the rise of monogamy advanced here, and the cultural evolutionary explanations provided by Henrich et al. and Bauch & McElreath. Our results show how individual fitness maximization can explain the de novo origins of predominant monogamy within highly unequal populations. Should monogamy have group-level fitness benefits, as suggested by Henrich et al., its emergence in specific groups via the mechanism we propose would provide the source populations for cultural group selection dynamics to propagate monogamy to other populations. Explanations for the rise of monogamy in agricultural societies in the spirit of Alexander and Henrich et al. develop the idea that powerful leaders might have imposed monogamy on the masses because such a marriage norm leads to greater in-group male-male cooperation, improving the success of the group in inter-group contests. The economically grounded explanation for the rise of monogamy that we present here is not necessarily in competition with such theories. Our model, however, establishes that basic changes in the structuring of wealth inequality coinciding with the rise of class-based societies would have made monogamy adaptive at the individual level in a large fraction of the population, greatly increasing the scope for hypotheses advancing hierarchical imposition or even frequency-dependent social transmission of norms for monogamy. The present analysis builds on work recognizing the importance of inherited wealth in structuring family relationships. To this existing literature, we introduce a new individual-level, cross-cultural dataset of wealth, marriage and reproductive outcomes, numbering 11,813 records from 29 human populations, including hunter-gatherers, horticulturalists, agropastoralists and agriculturalists. Our dataset is unusual in both its scope and in the availability of individual-level information, rather than qualitative societal summaries. While not without its limitations (discussed in more detail throughout), it captures the core features of the polygyny paradox. Following Oh et al., we develop a model of the equilibrium fraction of women married polygynously in a population where the extent of polygyny is determined by the fitness-maximizing choices of both men and women. In contrast to the standard polygyny threshold model, which is a one-sided mate choice model that allows only for female choice, we develop a mutual or two-sided model. In this model, male choice refers not to selecting particular females on the basis of their quality, but rather to the male’s choice of the number of wives that will maximize his fitness. A male’s demand for wives depends on his level of wealth and the costs of mating investment, and can be more than, less than or equal to the total number of women who would choose to marry him. Mutual mate choice is rare in nature, but the conditions for it are met in species in which biparental care is important for the survival of offspring, as is typically true of humans.

From our theoretical model, we identify two conditions that jointly can lead to a decrease in the population-level frequency of polygyny in highly unequal agricultural populations: in these highly stratified economies, the fraction of men with sufficient wealth to make polygynous marriage an attractive option for them and their potential partners is low relative to other subsistence systems, and decreasing marginal fitness returns to an increasing number of wives, above and beyond the fitness costs of sharing a husband’s wealth, sharply limit the number of wives acquired by exceptionally wealthy individuals. We use our empirical data to demonstrate that the transition to agriculture is associated with both of these factors identified as drivers of monogamy.

The Standard Cross-Cultural Sample illustrates that the frequency of polygyny is relatively high in horticultural and pastoral populations, and low in agricultural populations. These findings are robust to the use of quantitative or qualitative descriptors of polygyny. The third panel presents our estimates of the extent of material wealth inequality among males in the four production systems. Theoretical models of mating systems predict that polygyny should be positively associated with inequality in male resources, and more specifically with what Murdock terms ‘movable property or wealth which can be accumulated in quantity by men’. These forms of rival material wealth are, as we have just seen, more unequally held in horticultural economies than among foragers, which is consistent with the greater extent of polygyny in the former. Oh et al. show that inequality in reproductively important, non-rival forms of wealth (network ties, genes conferring adaptive phenotypes or acquired knowledge, for example) can also be a strong driver of polygyny, contributing to the explanation of substantial levels of polygyny in some societies with little rival wealth inequality. Indeed, there is empirical evidence that non-rival forms of wealth are associated with polygynous marriage in some foraging and horticultural populations. While the polygyny threshold model has been effective in predicting the distribution of polygynous males within populations, the reduced level of polygyny in agricultural populations typically characterized by greater inequality poses a serious challenge to existing models of mating and marriage.

To address this challenge, we build a comparative database of individual-level wealth, marriage and reproductive success measures in 29 diverse populations distributed over a wide geographical range. Table 1 provides population-specific background data. In order to use all cohorts of the adult male populations, the relevant measures (wives and wealth proxies) are age-adjusted in a Bayesian framework to represent their predicted values at age 60. This method of age adjustment assumes that the additional acquisition of wives and wealth from the time of censor to the age of 60 are unmeasured positive random variables, with mean values governed by the remaining time for acquisition and the age-specific acquisition rate trajectories inferred from the population cross sections. Our polygyny measures reflect the per cent of women who will ever be married to a man who marries more than once; in other words, in contrast to the data in figure 1a, we consider sequential marriage as a form of polygyny, since the offspring of each mother are rival claimants to a father’s property.
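For concreteness, the polygyny measure just defined can be computed from a table of men’s age-adjusted wife counts. The toy sketch below uses invented counts and is only meant to make the definition explicit.

```python
# Toy sketch of the polygyny measure described above: the per cent of women
# married to a man who marries more than once (sequential marriages counted
# as polygyny). The wife counts stand in for the age-adjusted values.

wives_per_man = [0, 1, 1, 2, 3]  # hypothetical age-adjusted counts

total_wives = sum(wives_per_man)
wives_of_polygynists = sum(w for w in wives_per_man if w > 1)
pct_polygynous = 100 * wives_of_polygynists / total_wives
print(f"{pct_polygynous:.0f}% of women married polygynously")  # 5 of 7 -> 71%
```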

A recent review of coverage approaches can be found in Galceran and Carreras

For 186 of the 2320 species examined here, the cross-validated mean absolute error (MAE) produced by the phenological model was identical to that estimated using the collection dates of the specimens alone. Although these species were retained for use in PhenoForecaster, it should be noted that no climate data may be entered for these species, and the resulting predictions of flowering time consist only of a constant value reflecting an estimate of the mean observed flowering date for that species, which is not influenced by local climate conditions. Additional species will be added and models will be updated as new data or superior modeling techniques become available. Although many studies have examined patterns of phenological variation in response to local climate, few tools exist for the prediction of phenological timing under novel climate conditions. PhenoForecaster provides a free, quick, and easy-to-use software package that allows researchers of any background to quickly predict the mean flowering date (MFD) of angiosperm species under novel annual conditions, or at locations where the phenology of that species has not previously been observed. Its intuitive user interface and compatibility with existing spatial climate estimation packages such as ClimateNA make phenological prediction easy to accomplish by researchers of any background without the need for extensive training or familiarity with phenoclimate modeling. It should be noted, however, that the accuracy of predictions by PhenoForecaster is variable and depends highly on the species selected for prediction. The expected accuracy of PhenoForecaster output, as reflected by the MAE value for that species, should be kept in mind when dealing with predicted MFD values generated by PhenoForecaster. Furthermore, these models do not account for potential heterogeneity of phenological responsiveness among populations of a given species, but instead represent mean phenological responsiveness across all available specimens for each species.
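The null-model comparison described for those 186 species can be illustrated with a leave-one-out cross-validation in scikit-learn. This sketch is not PhenoForecaster’s actual implementation: the linear model, the two synthetic climate covariates and all values are stand-ins, chosen only to show how a climate model’s cross-validated MAE is compared against a constant mean-flowering-date baseline.

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))                          # stand-in climate covariates
y = 120 + 4 * X[:, 0] + rng.normal(scale=5, size=40)  # mean flowering day-of-year

loo = LeaveOneOut()
for name, model in [("climate model", LinearRegression()),
                    ("mean-only baseline", DummyRegressor(strategy="mean"))]:
    scores = cross_val_score(model, X, y, cv=loo,
                             scoring="neg_mean_absolute_error")
    print(f"{name}: cross-validated MAE = {-scores.mean():.2f} days")
# When the two MAEs tie, the climate model adds nothing beyond the species mean.
```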

These predictions are also based on models trained using phenological observations throughout North America only, and using derived estimates of local climate conditions produced using ClimateNA; these estimates may exhibit some differences from ground-based observations of these climate parameters, or from estimates of these climate parameters derived using different methods. Thus, predictions of the phenology of these species outside of North America, or based on different sources of climate data, should be treated with caution. In addition, it should be remembered that PhenoForecaster models the timing of MFD only, and that the relationship of MFD to other phenophases, such as leaf-out, date of first flower, or date of last flower, may be highly variable among species and across climate gradients. These predictions should therefore be treated as dates on which the individuals of a given species are likely to be in flower where they have experienced a particular suite of climatic conditions, rather than as the onset or termination date of any specific phenophase. Where possible, we also recommend cross-checking predicted MFD values generated by PhenoForecaster against observed MFD values for that species, particularly when evaluating the phenology of a species under conditions that are outside of its historical range limits. Nevertheless, PhenoForecaster represents a freely available and powerful tool that allows any researcher to conduct rapid predictions of phenological timing under past, projected, or otherwise novel climate conditions.

Many agricultural robots have been developed to perform precision farming operations and replace or augment humans in certain tasks. These robots come in two main types: I) self-propelled mobile robots, and II) robotic “smart” implements that are carried by a vehicle. Type-I robots span wide ranges of sizes and designs. Conventional agricultural self-propelled machines such as tractors, sprayers, and combine harvesters have been “robotized” over the last decade through the introduction of GPS/GNSS auto-guidance systems. These machines are commercially available today and constitute the large majority of “agricultural robots”.

They can drive autonomously in parallel rows inside fields while a human operator supervises and performs cultivation-related tasks; turn autonomously at field headlands to enter the next row; and coordinate their operations. Autonomous cabinless general-purpose ‘tractor robots’ were recently introduced by several companies and are compatible with standard cultivation implements. These larger robots are designed primarily for arable farming operations that require higher power and throughput, such as ploughing, multi-row seeding, fertilizing, spraying, harvesting and transporting. A large number of smaller type-I special-purpose mobile robots have also been introduced for lower-power applications such as scouting and weeding of a smaller number of rows at a time. Most of these robots are research prototypes introduced by various research groups. A few commercial or near-commercial mobile robots have emerged in applications such as container handling in nurseries and seeding. Small robots like Xaver are envisioned to operate in teams and are an example of a proposed paradigm shift in the agricultural machinery industry, which is to utilize teams of small lightweight robots to replace large and heavy machines, primarily to reduce soil compaction. Recent review articles have discussed some of the opportunities and challenges for agricultural robots and analyzed their functional sub-systems; summarized reported research grouped by application type and suggested performance measures for evaluation; and presented a large number of examples of applications of robotics in the agricultural and forestry domains while highlighting existing challenges. The goals of this article are to: 1) highlight the distinctive issues, requirements and challenges that operating in agricultural production environments imposes on the navigation, sensing and actuation functions of agricultural robots; 2) present existing approaches for implementing these functions on agricultural robots and their relationships with methods from other areas such as field or service robotics; and 3) identify limitations of these approaches and discuss possible future directions for overcoming them. The rest of the article is organized as follows.

The next section discusses autonomous navigation, as it is the cornerstone capability for many agricultural robotics tasks. Afterwards, sensing related to the crop and growing environment is discussed, where the focus is on assessing information about the crop and its environment in order to act upon it. Finally, interaction with the crop and its environment is discussed, followed by summary and conclusions. The coverage-planning operation computes a complete spatial coverage of the field with geometric primitives that are compatible with and sufficient for the task, and optimal in some sense. Headland space for maneuvering must also be generated. Agricultural fields can have complex, non-convex shapes, with non-cultivated pieces of land inside them. Fields of complex geometry should not be traversed with a single orientation; the efficiency would be too low because of excessive turning. Also, fields are not necessarily polygonal; they may have curved boundaries and may not be flat. Additionally, most agricultural machines are nonholonomic and may carry a trailer/implement, which makes computing the turning cost between swaths non-trivial. Finally, agricultural fields are not always flat, and field traversal must take into account slope and vehicle stability, and constraints such as soil erosion and compaction. Computing a complete spatial coverage of a field with geometric primitives is in principle equivalent to solving an exact cellular decomposition problem. Choset and Pignon developed the boustrophedon cellular decomposition. This approach splits the area into polygonal cells that can be covered exactly by linear back-and-forth motions. Since crops are planted in rows, this approach has been adopted by most researchers. A common approach is to split complex fields into simpler convex sub-fields via a line-sweeping method, and compute the optimal driving direction and headland arrangement for each sub-field using an appropriate cost function that encodes vehicle maneuvering in obstacle-free headland space. This approach has been extended to 3D terrain. Existing approaches assume that headland space is free of obstacles and that block rows are traversed consecutively, i.e., there is no row-skipping. These are simplifying assumptions, as it has been shown that proper row sequences reduce total turning time substantially. However, dropping this assumption would require solving a routing optimization problem inside the loop that iterates through driving orientations, and many maneuvering/turning motion-planning problems inside each route optimization; this would be very expensive computationally. Furthermore, all algorithms use a swath of fixed width, implicitly assuming that the field will be covered by one machine, or by many machines with the same operating width. Relaxing this assumption has not been pursued, but the problem would become much more complicated. Planning could also be extended to non-straight driving patterns using nonlinear boustrophedon decompositions based on Morse functions, with appropriate agronomic, cultivation and machine constraints. Finally, as pointed out by Blackmore, row cultivation was historically established because it is easier to achieve with animals and simple machines. Crops do better when each plant has equal access to light, water and nutrients. Small robots could grow crops in grid patterns with equal space all around by following arbitrary driving patterns that may be optimal for the cropping system and the terrain.
Hence the boustrophedon assumption could be relaxed and approximate cellular decomposition could be used to compute optimal driving patterns, where the field shape is approximated by a fine grid of square or hexagonal cells. This approach has received very little attention, as field spatial planning has targeted existing large machines. An example of early work in this direction combined route planning and motion planning, with appropriate agronomic, cultivation and machine constraints.
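To make the driving-direction optimization concrete, the sketch below scores candidate headings for a convex field by the number of swaths required, a crude proxy for headland turning cost. It is a toy model under strong assumptions (convex polygon, fixed swath width, obstacle-free headlands); practical planners add the decomposition, kinematic and terrain constraints discussed above.

```python
import numpy as np

def swath_count(vertices: np.ndarray, heading_deg: float, swath_w: float) -> int:
    """Number of parallel swaths needed to cover a convex polygon when
    driving along `heading_deg`; fewer swaths means fewer headland turns."""
    theta = np.radians(heading_deg)
    normal = np.array([-np.sin(theta), np.cos(theta)])  # perpendicular to travel
    proj = vertices @ normal                            # field extent across rows
    return int(np.ceil((proj.max() - proj.min()) / swath_w))

# Toy convex field (vertex coordinates in meters) and a 6 m swath width.
field = np.array([[0, 0], [100, 0], [120, 60], [10, 80]], dtype=float)
best = min(range(0, 180, 5), key=lambda a: swath_count(field, a, swath_w=6.0))
print(f"best heading: {best} deg, {swath_count(field, best, 6.0)} swaths")
```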

The basic version of route planning computes an optimal traversal sequence for the field rows that cover the field, for a single auto-guided machine with no capacity constraints. This is applicable to operations in arable land, orchards and greenhouses that do not involve material transfer or, when they do, the quantities involved are smaller than the machine’s tank or storage space; hence the machine’s limited storage capacity does not affect the solution. For operations where the machine must apply or gather material in the field, the maximum number of rows it can cover is restricted by its capacity; the same applies to fuel. Hence, route planning with capacity constraints is a more complicated version of the problem. When many machines operate in the same field, there are two classes of operations with different characteristics. The first class is when machines are independent of each other, i.e., they do not share any resources. In such cases, coordinated route planning is straightforward because the machines can simply work on different swaths or sub-fields; possible crossings of their paths at the headlands and potential collisions can be resolved during task execution. The second class is cooperative field operations, also known as in-field logistics, which are executed by one or more primary units performing the main task and one or more service units supporting them. For example, in a harvesting operation a self-propelled harvester may be supported by transport wagons used for out-of-the-field removal of harvested grain. Similarly, in fertilizing or spraying operations the auto-guided spreader/sprayer may be supported by transport robots carrying the fertilizer/spray for refilling of the application unit. Agricultural tasks are dynamic and stochastic in nature. The major issues with off-line route planning are that it breaks down in case of unexpected events during operations, and that it can only be performed if the “demand” of each row is known exactly in advance. For example, if a sprayer’s flow rate is constant or the crop yield is known in advance, the quantity of chemical or harvest yield of each field row can be pre-computed and optimal routing can be determined. However, yield maps are either not available before harvest or their predicted estimates based on sampling or historic data contain uncertainty. Also, robotic precision spraying and fertilizing operations are often performed “on the go” using sensors, rather than relying on a pre-existing application map. Hence, information is often revealed dynamically during the execution of the task. Vehicle routing for agricultural vehicles is based on approaches from operations research and transportation science. Optimal row traversal for a single or multiple independent auto-guided vehicles has been modeled and solved as a Vehicle Routing Problem (VRP). This methodology was conceptually extended to include multiple identical collaborating capacity-limited machines with time-window constraints, and then nonidentical vehicles. Similar problems have been reviewed in the transportation science literature. The problem of visiting a set of known, pre-defined field locations to take measurements or samples is not an area-coverage problem; it was recently modeled as an orienteering problem for non-collaborating robots, and as a VRP with time windows for capacitated cooperating vehicles.
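A toy version of row-sequence optimization illustrates why row-skipping helps. The turn-cost model below is an invented placeholder (a cheap U-turn when the gap between rows exceeds twice the turning radius, a penalized omega turn otherwise), and brute force only works for a handful of rows; real planners use VRP solvers and realistic maneuver costs.

```python
from itertools import permutations
from math import pi

R = 5.0  # assumed minimum turning radius, m
D = 2.0  # assumed row spacing, m

def turn_cost(i: int, j: int) -> float:
    """Toy headland turn cost between rows i and j: wide gaps allow a simple
    U-turn; tight gaps force a longer omega turn (illustrative model only)."""
    gap = abs(i - j) * D
    if gap >= 2 * R:
        return pi * R + (gap - 2 * R)   # U-turn plus straight headland travel
    return pi * R + 2 * (2 * R - gap)   # omega-turn penalty for tight spacing

def route_cost(order) -> float:
    return sum(turn_cost(a, b) for a, b in zip(order, order[1:]))

rows = range(8)  # brute force is feasible only for small row counts
best = min(permutations(rows), key=route_cost)
print(best, round(route_cost(best), 1))  # skipping rows beats 0,1,2,... here
```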

Young chestnut branches have a particularly channeled structure

Almond root stocks have been shown to alter root, shoot, trunk, and fruit development, probably by affecting the allocation of carbon assimilates between these tissues. Khadivi-Khub and Anjam evaluated the Iranian cultivar ‘Rabiee’ grown on P. scoparia and ‘Estahban’ root stocks under normal and rainfed conditions. They reported significant differences in tree height, trunk diameter, annual growth, and internode length, observing reduced scion growth when grafted on P. scoparia root stock, suggesting potential as a dwarfing root stock. Parvaneh et al. evaluated three Iranian cultivars on bitter almond, sweet almond, and peach root stocks and found that cultivars grafted on peach had greater vegetative growth, while scions grown on both bitter and sweet almonds had reduced growth, resulting in smaller trees. The magnitude of the effect varied with cultivar. In a regional root stock trial at California State University, Fresno, significant differences among root stocks were found in canopy growth, tree height, and tree circumference. Almonds grafted on peach root stock had larger scion diameters than on almond root stocks. Preliminary results from a vigor study showed that trunk diameter of the scion cultivar depends on the scion-root stock interaction. The root stock effect differed depending on the cultivar grafted and scion vigor itself. Lordan et al. studied the performance of two Spanish almond cultivars, ‘Marinada’ and ‘Vairo’, grafted onto different root stock genotypes, reporting strong root stock effects on vigor, bloom and ripening dates, yield, and kernel weight. The effect of root stock on tree architecture is less clear. Root stock effects on shoot length and shoot diameter have been reported, but the magnitude of the effect varied as a function of specific scion-root stock combinations.

Similarly, the scion can influence root structure, primarily by altering auxin and cytokinin responses. This suggests the regulatory feedback between the root stock and scion ultimately modulates final tree architecture. The underlying molecular mechanisms of these interactions remain unknown. Studies of the effect of root stock on pecan scion vigor have demonstrated that common pecan root stocks vary by geographic region and have diverse effects on scion growth. Before clonal root stocks were introduced, the open-pollinated seed stocks widely used for the vegetative propagation of commercial pecan cultivars produced different growth responses. Grauke and Pratt evaluated bud growth of three pecan cultivars on seven open-pollinated seed stocks including ‘Curtis’, ‘Burkett’, ‘Elliott’, ‘Moore’, ‘Riverside’, ‘Apache’, and ‘Sioux’. They reported that scion growth was significantly influenced by root stock, with bud growth of ‘Candy’ on ‘Elliott’ and ‘Curtis’ root stocks exceeding that on ‘Sioux’, ‘Riverside’, ‘Apache’, and ‘Burkett’ root stocks. Liu et al. studied grafting-responsive microRNAs involved in the growth regulation of grafted pecan and identified several miRNAs that act by regulating inorganic phosphate acquisition, auxin transport, and cell activity. The root stock effect on the vigor of other nut trees has been less studied. In hazelnut, new root stocks have produced superior vigor compared to own-rooted varieties. This is an important improvement when trees are trained to a trunk, rather than grown as bushes with many stems. Graft success depends on root stock-scion physiological compatibility and the proper alignment of tissues in the graft union. Graft incompatibility is a complex physiological process shaped by the metabolic adjustment of the cultivar-root stock combination, growth conditions, the presence or absence of viruses, environmental conditions, the nutritional status of trees, and other stresses. Graft incompatibility can be detected by a variety of symptoms including poor graft success, yellow-colored leaves, slow vegetative growth, drying of the scion, a generally diseased appearance, symptoms of water stress, overgrowth in the graft area, thicker bark tissues of the scion, and excessive sprouting on the root stock.

In pistachio, P. terebinthus, P. atlantica, P. integerrima, P. vera and their interspecific hybrids are commonly used root stocks. P. terebinthus is more difficult to bud than P. atlantica or P. integerrima due to scion-root stock incompatibility problems. Although root stock-scion incompatibility is not a serious problem in pistachio production, some evidence of incompatibility between P. vera as a scion and UCB1 as a root stock was observed in the late 1980s in the USA. This incompatibility appeared to be related to a single paternal P. integerrima tree used to produce the first UCB1 seedlings at the University of California, Berkeley. There have been fewer reports of root stock-scion incompatibility since removal of this paternal tree. When facing root stock-scion incompatibility problems in pistachio, it is worth testing different individuals within a single species to find a compatible genotype. The success of walnut grafting depends mainly on several factors, such as root stock, scion, grafting method, and environmental conditions. Specific graft incompatibility between different Juglans species has not been reported. Nevertheless, some literature refers to black line disease as a delayed graft incompatibility in walnuts. California black walnut and its hybrids are considered promising root stocks for Persian walnut, especially in California, due to high vigor, resistance to soil-borne pests, and tolerance of saline and saturated soils. However, if Persian walnut is grafted on California black walnut or its hybrids and the tree becomes infected with cherry leafroll virus (CLRV), symptoms of black line disease appear, resembling a graft incompatibility. Therefore, in regions where CLRV infection is possible, Persian walnut is the more suitable root stock for avoiding black line disease. Andrews and Marquez reported that black line disease is a long-delayed incompatibility in which CLRV migrates to the graft union. In almond, graft incompatibility appears to be genetically dependent. For example, ‘Nonpareil’ shows distinct graft incompatibility on plum root stocks while the closely related ‘Carmel’ cultivar does not. Graft incompatibilities can produce both slow general tree deterioration over time and distinct localized deterioration, such as the stem-pitting decline seen in almond-Myrobalan plum combinations.

These more localized types of graft incompatibility can often be observed as a weakness, and occasional breakage, at the graft union. Because this often occurs at a critical time, when the tree is coming into bearing, several studies have pursued earlier physiological and molecular predictors of graft compatibility as an aid to both breeding and orchard management. These studies generally involve anatomical, physiological, or molecular aspects of compatible graft union formation, such as the similarities and differences in scion versus root stock vascular size and configuration. Related studies have identified several molecular candidates that may contribute to compatible graft formation; however, the specific cause-and-effect relationships remain vague. Studies have identified several metabolic pathways, including the phenylpropanoid pathway, cell wall biosynthesis, oxidative stress, and auxin signaling, that appear to be associated with graft incompatibility, supporting the complex genetic control commonly encountered when breeding for this trait. Japanese and Chinese chestnuts are used in chestnut root stock breeding programs due to their root-rot resistance. The potential use of hybrid chestnut cultivars has also been evaluated, although incompatibility has been observed in the hybrids. Tokar and Kovalovsky grafted Chinese, European, and Chinese × Japanese hybrid chestnut cultivars onto European chestnut root stocks; the least successful combinations were the Chinese × Japanese hybrids on European root stocks. Viéitez and Viéitez used Chinese and European chestnuts as root stocks for European, Chinese, and European × Chinese hybrid scions; the least successful combinations were Chinese root stocks with European chestnut cultivars. Soylu suggested that scions and root stocks of the same species should have better graft compatibility, but genetic intraspecies graft incompatibility has been reported in Chinese and European chestnuts. Although graft compatibility in chestnut may be mostly controlled by genetic factors, graft success can also be affected by environmental factors, stress, and their interactions with genotype. Oraguzie et al. suggested that growing the root stock and the scion plant under the same environmental conditions would produce better graft compatibility. Oraguzie et al. divided graft incompatibility into two groups, early and late: early graft incompatibility can be seen in the first two years, late incompatibility in 5 to 7 years. Chestnut mosaic virus can also induce graft incompatibility. The first hypothesis concerning the mechanism of chestnut graft incompatibility was suggested by Santamour et al., who identified four different cambial peroxidase isozyme patterns in ten chestnut genotypes. They found that C. dentata, C. alnifolia, C. ashei, C. ozarkensis, and C. pumila show the A cambial peroxidase pattern; C. crenata and C. seguinii the B pattern; C. sativa the A, B, and AB patterns; C. henryii the A and B patterns; and C. mollissima the A, AB, B, and BC patterns. Grafting plants with different isozyme bands could lead to graft incompatibility. Santamour tested this hypothesis with 200 Chinese chestnut seedlings: if the scion and the root stock belonged to the same cambial peroxidase isozyme group, the cambium in the graft area united and cambial continuity occurred; if their isozyme groups differed, cambial continuity was interrupted.
Thus, he suggested that cambial peroxidase isozyme groups could be used to predict graft incompatibility in Chinese chestnut. However, this hypothesis was not confirmed in a subsequent study. The other hypothesis attributes graft incompatibility in Chinese chestnut to a mismatch of phloem fiber bundles. An important feature of this anatomical structure is the presence of fiber bundles at four or more positions in the branch; the bundles are more easily distinguished in 2-3-year-old seedlings. This should be considered when grafting, as the cambium of the root stock and scion may not unite uniformly. Given the importance of early detection of graft incompatibility, it is important to find specific markers for prediction in different root stock-scion combinations. Many studies have addressed strategies for compatibility detection, such as phenolic marker identification and peroxidase isozyme studies.
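Santamour's matching rule lends itself to a simple computational statement. The following sketch, in Python, encodes the species-level isozyme patterns listed above and predicts cambial continuity only when the scion and root stock individuals carry the same pattern; the data layout, function names, and the enumeration of possible pairings are illustrative assumptions, not part of the original studies.

```python
# A minimal sketch of Santamour's cambial peroxidase rule: a graft is
# predicted compatible only when the scion and root stock individuals
# carry the same isozyme pattern. Species-level pattern sets follow the
# text above; everything else is illustrative.

ISOZYME_PATTERNS = {
    "C. dentata": {"A"}, "C. alnifolia": {"A"}, "C. ashei": {"A"},
    "C. ozarkensis": {"A"}, "C. pumila": {"A"},
    "C. crenata": {"B"}, "C. seguinii": {"B"},
    "C. sativa": {"A", "B", "AB"},
    "C. henryii": {"A", "B"},
    "C. mollissima": {"A", "AB", "B", "BC"},
}

def predicted_compatible(scion_pattern: str, stock_pattern: str) -> bool:
    """Same cambial peroxidase group -> cambial continuity expected."""
    return scion_pattern == stock_pattern

def possible_pairings(scion_species: str, stock_species: str):
    """Enumerate the pattern pairings two species could present."""
    for s in sorted(ISOZYME_PATTERNS[scion_species]):
        for r in sorted(ISOZYME_PATTERNS[stock_species]):
            yield s, r, predicted_compatible(s, r)

# C. sativa on C. mollissima can match (A-A, AB-AB, B-B) or mismatch,
# consistent with variable graft success between the two species.
for pairing in possible_pairings("C. sativa", "C. mollissima"):
    print(pairing)
```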

Phenolic compounds, whose biosynthesis is triggered by wounding and infections, are produced and accumulated during the callusing phase. This suggests that quantitative and qualitative differences in phenolic patterns between scion and root stock may predict graft union dysfunctions and could be potential markers of graft incompatibility. Research at the University of Torino Chestnut R&D Center demonstrated a set of chemical markers: six phenolic acids, five flavonols, two catechins, and two tannins. Chromatographic methods were used to identify and quantify the main bioactive compounds (benzoic acids, cinnamic acids, catechins, flavonols, and tannins) and to obtain specific phytochemical profiles. Benzoic acids, catechins, and tannins were used to establish profiles distinguishing compatible from incompatible chestnut scion-root stock combinations (a minimal sketch of this profile comparison appears after this passage). Another promising technique is the analysis of peroxidase isozyme profiles of root stocks and scions. Peroxidases appear to play an important role in grafting, as these enzymes are involved in lignin formation and lignin-carbohydrate bonding. Differences in peroxidase isozymes between root stock and scion have been related to graft performance in Chinese chestnut and peach-plum combinations. Other strategies for evaluating root stock-scion compatibility include transcriptomic profiling of phenylalanine ammonia-lyase expression and phenotypic evaluation.

Another important trait in root stock selection is suckering. Suckers not only divert water and nutrients from the main trunk, but also increase orchard management costs incurred in removing them. Suckering is an important issue in hazelnut cultivation, requiring four to five herbicide sprays per year in commercial orchards and occasional hand-removal in winter. This situation could be improved by the use of non-suckering root stocks. Currently, three types of hazelnut root stocks are in use: C. colurna seedlings, C. avellana seedlings, and two clonal selections from open-pollinated C. colurna, ‘Dundee’ and ‘Newberg’. A hazelnut root stock trial at IRTA-Mas Bové, Spain, begun in 1989, led to selection of a clonal C. avellana root stock, a seedling of ‘Tonda Bianca’. One of the first European hazelnut root stock trials was conducted in Nebrodi, Sicily, in 1970 to compare self-rooted trees with trees grafted on C. avellana root stock. After 12 years of evaluation, self-rooted trees showed better vegetative and productive behavior than grafted ones. Experience with C. colurna in the U.S.A. has demonstrated that members of this species are more drought tolerant and cold hardy than C. avellana cultivars. C. colurna is non-suckering, deeply rooted, and graft-compatible with all C. avellana cultivars and Corylus species, suggesting its potential use as a root stock. Due to differences in bark color and texture, the union between the Turkish and European hazelnut is readily evident. However, the Turkish hazelnut is difficult to propagate, and its seedlings often require two additional years before reaching sufficient size for grafting.
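Returning to the chemical-marker work described above: the Torino studies report that benzoic acids, catechins, and tannins yield profiles that separate compatible from incompatible chestnut combinations, but they do not give a numerical decision rule. The sketch below therefore assumes a simple profile-distance threshold; the marker values, units, and cutoff are hypothetical placeholders.

```python
# Illustrative only: compares scion and root stock phenolic profiles and
# flags combinations whose dissimilarity exceeds a threshold. The marker
# list follows the text; concentrations (e.g., mg/g) and the threshold
# are invented for demonstration.
import math

MARKERS = ("benzoic_acids", "catechins", "tannins")

def profile_distance(scion: dict, stock: dict) -> float:
    """Euclidean distance between the two marker profiles."""
    return math.sqrt(sum((scion[m] - stock[m]) ** 2 for m in MARKERS))

def flag_risk(scion: dict, stock: dict, threshold: float = 5.0) -> bool:
    return profile_distance(scion, stock) > threshold

scion = {"benzoic_acids": 2.1, "catechins": 8.4, "tannins": 3.3}
stock = {"benzoic_acids": 2.4, "catechins": 1.2, "tannins": 9.8}
print(flag_risk(scion, stock))  # True -> candidate incompatible combination
```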

Several aspects of the genetics of resistance to B. cinerea are unclear in strawberry

To date, B. cinerea biocontrol products are mostly Bacillus subtilis-based, but their use in commercial strawberry production is limited because of their insufficient applicability in the field or supply chain. Nevertheless, there is social and scientific interest in using biocontrol against B. cinerea as an alternative to chemical pesticides. Isolates of Colletotrichum gloeosporioides, Epicoccum purpurascens, Gliocladium roseum, Penicillium sp., and Trichoderma sp. have displayed high efficiency in controlling B. cinerea and were reported to reduce grey mould incidence on strawberry stamens by 79%–93% and on fruit by 48%–76%. Interestingly, in some experiments the efficiency of biocontrol by these organisms exceeded that of the fungicide captan. Similar results were obtained for other microbes, such as the yeasts A. pullulans and Candida intermedia, the filamentous ascomycete Ulocladium atrum, and the bacterium Bacillus amyloliquefaciens. Biocontrol via microbes can work through different modes of action, including competition for nutrients, secretion of antibiotic compounds, and induction of host defence mechanisms such as the up-regulation of chitinase and peroxidase activity. Because biocontrol of B. cinerea relies on a variety of mechanisms, the most significant effects are observed when different organisms are applied in combination. As an alternative to applying living microbes, the use of extracts or volatiles derived from biocontrol microbes has been suggested. Use of non-synthetic antifungal substances, like phenol-rich olive oil mill wastewater, has also been reported to control B. cinerea growth in vitro and on strawberries. However, these approaches are not implemented on a commercial scale due to high costs compared to conventional B. cinerea control.

It is common practice to handpick strawberries and place them into clamshells in the field, in order to reduce wounding and bruising of the fruit. Rapid and constant cooling of strawberries at temperatures below 2.5 °C is another critical strategy to reduce or inhibit reactivation of quiescent B. cinerea infections. Often, strawberries are also stored in modified atmospheres, generally low in oxygen and high in carbon dioxide, to slow down metabolic processes, senescence, and fungal decay. Relative humidity during storage is usually kept around 85%–90% to prevent dehydration of the fruit while limiting fungal growth. Novel post harvest treatments of strawberries have been suggested to prevent B. cinerea infections during storage. Examples are edible fruit coatings of chitosan, silk fibroin, or methylcellulose that prevent water loss and can include antifungal compounds. MeJA treatment to induce fruit defence mechanisms, ultraviolet and visible light treatment, enrichment of the storage atmosphere with chlorine or ozone, and soft mechanical stimulation have also been tested as alternative treatments. Most of these approaches are still in an experimental stage or not yet adaptable to commercial settings.

Significant phenotypic variation in the incidence or severity of grey mould has been reported; however, F. × ananassa genotypes appear to be universally susceptible, and complete resistance has not been observed. Substantial genotypic variation has not been documented, and the heritability of resistance to B. cinerea is unknown. The mild phenotypic differences in fruit resistance levels reported in various post harvest studies indicate that genetic variation for resistance may be limited and that its heritability is low. A contributing factor is the intrinsic characteristics of the pathogen: its broad host range, diverse modes of infection, and necrotrophic lifestyle explain the absence of gene-for-gene resistance of strawberry to B. cinerea.

Therefore, breeding for escape and tolerance, which includes physiological and biochemical traits, is a more practical option. While limited in scale and scope, earlier studies strongly suggest that the incidence and progression of B. cinerea infections differ between cultivars with soft fruit and those with firm fruit. Hence, previously reported differences amongst cultivars could be the result of the pleiotropic effects of selection for increased fruit firmness and shelf life and the associated developmental and ripening changes, as opposed to direct genetic gains in innate resistance to the pathogen. As discussed, fruit firmness is an important trait associated with resistance to B. cinerea. The strawberry germplasm displays natural variation for fruit firmness, and developing cultivars with firmer fruit is an important aim in breeding programmes. Changes in flower morphology could also enhance tolerance to B. cinerea. In strawberry, most B. cinerea infections of fruit appear to originate from primary infections of flowers or secondary infections caused by direct contact with infected flower parts. Removal of stamens and petals has been reported to result in lower grey mould incidence. Faster abscission of flower parts, especially petals, has the potential to aid the escape of strawberries from B. cinerea infections. Similarly, plants with pistillate flowers have a lower grey mould incidence in fruit. B. cinerea growth inhibition in stamens is reported to vary within the strawberry germplasm, potentially due to differences in their biochemical composition. Similarly, antifungal compounds in fruit can prevent or limit B. cinerea infections. Several reports indicate that anthocyanin accumulation contributes to the tolerance of strawberries to B. cinerea. Anthocyanins do not just improve tolerance to grey mould but also provide health benefits. Breeding for higher anthocyanin content in strawberries is possible and facilitated by existing variation in the germplasm. Inducing anthocyanin accumulation in flowers could also help to limit flower infections. As breeding for higher B. cinerea tolerance will be tedious and likely will not result in complete resistance, complementary approaches should be considered.

Currently, no genetically modified strawberry cultivars are commercially grown; however, several reports show great potential to improve tolerance to grey mould via trans- or cis-genesis. For example, the expression of chitinases or PGIPs from other organisms in strawberries can prevent or slow down fungal infections. Another potential transgenic approach is to increase fruit firmness by altering the expression or activity of pectin-degrading enzymes, such as PL or PG. The existing natural variation in PL expression levels and activity in the cultivated strawberry germplasm could be used for cisgenic approaches. Increasing phenolic levels in strawberries by genetic modification can also be explored, as the transcription factor MYB10 was identified as a regulator of anthocyanin levels in strawberries (Medina-Puche et al., 2014). Transgenic plants with ectopic overexpression of MYB10 show elevated anthocyanin levels throughout the entire plant; however, the resistance of these plants against B. cinerea has not been tested. In summary, these novel breeding approaches should be supported by integrative management strategies, including horticultural and agronomic practices and potentially biocontrol, to achieve maximum control of the disease.

Plant gene editing may be the greatest innovation in plant breeding since the Green Revolution. It has already been used to make discoveries in plant biology and has a profound potential to create new crops with desirable characteristics. There are already exciting developments which show that gene editing may be able to live up to expectations and can be used to produce novel plant phenotypes that would improve agricultural production. Most authorities estimate that food production will have to double in the next 50 years to keep pace with population growth. The focus on global food security, however, is usually on starch-rich cereals and ignores or underestimates the vital importance of horticultural crops. These perishable commodities are often nutrient-dense with bioactive phytochemicals, the consumption of which is needed for a healthy and thriving population. However, an uncomfortable fact is that in addition to losses that may result from disease, drought, extremes of temperature, and other environmental stresses experienced in the field, a further 25–40% (an average of 33%) of all fruit and vegetables produced globally are never eaten after harvest. This estimate still does not capture the extreme losses that can occur in some developing countries, which may be as high as 75%. Current worldwide horticultural crop production is insufficient to meet human nutritional requirements, making post harvest loss and waste all the more unsustainable. Only recently has the need to reduce the loss of horticultural crops after harvest been given the attention it deserves. Although the causes of post harvest loss and waste are complicated, we suggest that technology-assisted breeding for new and improved fruit, vegetables, and ornamentals, compatible with supply chain constraints but delivered at peak quality to the consumer, could be an important part of the solution over the long term. In this review, we examine the potential for gene editing to make a measurable and robust impact on post harvest waste and loss.
Rather than a technical or critical assessment of methodologies or research areas, we focus on connecting the bio-physiology of post harvest produce, the needs of the produce industry, and the wealth of existing molecular research, to suggest a holistic yet straightforward approach to crop improvement. The main focus of the review is the discussion of genes that could influence the quality and shelf-life of produce. First, we examine the steps that are taken to extend shelf-life in the produce supply chain, and the impact of supply chain management on consumer-desired quality traits.

Then we briefly review the CRISPR–Cas9 method to emphasize the flexibility, ease, and power with which traits can be modified. Finally, we take a critical look at remaining barriers which must be overcome to make gene editing for post harvest traits technically and economically viable. This review serves both as an introduction to post harvest biology and gene editing and as a resource for researchers attempting to utilize the latter for the former.

Post harvest waste and post harvest loss are sometimes used interchangeably, but this is incorrect. Post harvest loss is unintentional. It describes the incidental losses that result from events occurring from farm to table, such as physical damage, internal bruising, premature spoiling, and insect damage, among others. Produce loss is also described as quantitative because it is measurable. This does not imply that data is easily available, only that it can be assessed. Post harvest waste, in contrast, is intentional. It describes produce that is discarded because it does not meet buyer expectations, even though it is edible. Produce may be rejected by growers, distributors, processing companies, retailers, and consumers for failing to meet desired or established preferences. Produce waste is described as qualitative because it is difficult to measure and assess. Still, in the US, it is estimated that 7% of post harvest losses of fruit and vegetables occur on the farm, while more than twice that, 17% and 18%, is wasted in consumer-facing businesses and in homes, respectively. Produce post harvest loss and waste (PLW) threatens environmental sustainability, and is especially catastrophic when viewed in the light of the twin challenges of global climate change and increasing population growth. PLW means inefficient use of financial investments in horticulture and, more critically, of non-renewable natural resources. Technological measures to curb PLW, such as maintaining a cold-chain and the use of plastic packaging, additionally have energy and carbon costs. Improving the shelf-life and quality attributes of post harvest crops by genetic modification or smart breeding could be among many solutions to lessen the severity of these problems.

Produce must be kept alive from farm to table; however, the biological nature of horticultural produce is often incongruent with modern commercial supply chain operations. Produce and ornamentals are high in water content and often metabolically active, which makes them highly perishable. This becomes a challenge given the number of food miles fruit, vegetables, and ornamentals can travel in the global supply chain. Modern post harvest supply chains may be separated spatially by thousands of miles and temporally by several months. Produce trucked and shipped from the field is often treated: cooled, washed, sorted, dipped, sprayed, or held at desirable temperatures and modified atmospheres to preserve “health”. The majority of produce from mid- to large-scale operations may move through a byzantine system of processors, distributors, and trucking and shipping entities. Maintaining an unbroken cold-chain, adequate packing, and shipping are essential to preserving quality and shelf-life. Produce, even after harvest, respires, transpires water, and, for the “climacteric fruits”, can emit high levels of ethylene, all of which can be accelerated at high temperatures.
Optimizing storage and handling conditions requires managing these biological processes, which may differ for each produce type or variety, and understanding how the preharvest environment influences biology at harvest and thereafter.
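The temperature sensitivity described above is commonly approximated in post harvest physiology with a Q10 model, in which metabolic rates roughly double or triple for each 10 °C rise. The sketch below uses this general approximation, not figures from this review, to illustrate why an unbroken cold-chain matters; the baseline rate, the Q10 of 2.5, and the inverse shelf-life scaling are assumptions.

```python
# Q10 approximation: rate(T) = rate_ref * q10 ** ((T - t_ref) / 10).
# The reference rate, q10 = 2.5, and the assumption that shelf-life
# scales inversely with respiration are illustrative; real values vary
# by commodity and cultivar.

def respiration_rate(temp_c: float, rate_ref: float = 1.0,
                     t_ref: float = 0.0, q10: float = 2.5) -> float:
    return rate_ref * q10 ** ((temp_c - t_ref) / 10.0)

def relative_shelf_life(temp_c: float) -> float:
    """Assume shelf-life is inversely proportional to respiration."""
    return 1.0 / respiration_rate(temp_c)

for t in (0, 5, 10, 20):
    print(f"{t:>2} C: respiration x{respiration_rate(t):.2f}, "
          f"shelf-life x{relative_shelf_life(t):.2f}")
# Under these assumptions, a cold-chain break from 0 C to 20 C raises
# metabolic activity about sixfold (2.5 ** 2 = 6.25).
```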

Coyote tobacco is capable of ripening copious amounts of seed

Nutmeat production. Even though regulated deficit irrigation consistently reduced applied water compared to the control, variation was high enough to prevent the regulated deficit irrigation from having a statistically significant effect on the gallons of irrigation water used to produce 1 pound of nutmeat. Statistical analysis for yield and irrigation water used per pound of nutmeat showed that both block and year effects were highly to very highly significant, presumably as a result of fixed block-to-block variability in the soils as well as the combined effects of year-to-year variation in weather conditions, especially during flowering, and the increasing yields over time. Nut quality. Over 5 years, we found only two statistically significant effects on nut quality under regulated deficit irrigation: a decrease in kernel weight and an increase in the percentage of severe shrivel. Average nut size was 1.18 grams in the regulated deficit irrigation treatment and 1.21 grams in the control. There was severe shrivel in 13.0% of nuts sampled from the regulated deficit irrigation treatment and 9.0% from the control. The non-significant effects measured were nut moisture; percentages of sealed sutures, doubled kernels, twin kernels, blanks, broken kernels, creases, slight shrivels, rupture calluses, gums, molds and stains; and damage by navel orangeworm, ants and peach twig borer. For most of the quality factors measured, the effect of year, but not block, was also highly to very highly significant. Hull split. Previous research showed that regulated deficit irrigation can increase the rate of hull splitting, but in this study we observed no measurable differences in the duration or extent of hull split between treatments in any year.
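The efficiency metric reported above, gallons of irrigation water used per pound of nutmeat, reduces to simple arithmetic. A minimal sketch follows; the applied-water and yield figures are hypothetical placeholders chosen only to mirror a roughly 5-inch (about 13%) reduction in applied water, not the trial's data.

```python
# Gallons of irrigation water per pound of nutmeat. One acre-inch of
# applied water is about 27,154 gallons; all other values below are
# hypothetical placeholders, not measurements from this study.
GAL_PER_ACRE_INCH = 27_154

def gal_per_lb_nutmeat(applied_inches: float, yield_lb_per_acre: float) -> float:
    return applied_inches * GAL_PER_ACRE_INCH / yield_lb_per_acre

control = gal_per_lb_nutmeat(applied_inches=38.0, yield_lb_per_acre=2200.0)
rdi = gal_per_lb_nutmeat(applied_inches=33.0, yield_lb_per_acre=2150.0)
print(f"control: {control:.0f} gal/lb, RDI: {rdi:.0f} gal/lb")
# High block-to-block and year-to-year variability can swamp a
# difference of this size, which is why the treatment effect on this
# metric was not statistically significant.
```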

Plant water deficit. The SWP values in the two treatments were approximately equivalent before and after the regulated deficit irrigation period, but SWP in the regulated deficit irrigation treatment was much lower than in the control during the hull-split period. This indicates that a well-defined and reproducible plant water deficit was achieved during hull split in the regulated deficit irrigation treatment. For much of the growing season, particularly around harvest time, SWP in the control was also lower than expected for almond trees with non-limited water. This effect may be attributable to a small deficit in water applied by the grower as a result of cutbacks in water availability.

The orchard site used in this study presented several difficulties in implementing regulated deficit irrigation as a management technique, in particular the site's relatively shallow and spatially variable soil with low water-holding capacity, and two comparatively dry years. Both of these factors might lead to an excessive and potentially damaging level of stress when irrigation is reduced, particularly just prior to harvest in almonds, when irrigation must be discontinued to allow for mechanized harvesting. However, using a simple, plant-based approach, consistent water savings of more than 5 inches, or about 13% of applied water, were achieved with no detectable effects on short- or medium-term orchard productivity. When regulated deficit irrigation was compared to the control, there was an annual water savings of 0.4 acre-foot. Although no significant reductions in overall yield or gallons of irrigation water used per pound of nutmeat were observed in our study, significant reductions in yield have been documented in previous deficit experiments with almonds. The negative effects in those studies were not extreme, and the yield reductions were generally associated more strongly with water deficits imposed post harvest than during hull split. In a 4-year study by Girona et al., a statistically significant reduction in overall yield was associated with a 40% reduction in water application and a non-significant reduction in kernel dry mass.

In our study, the overall treatment difference in kernel dry mass of 2.5% was statistically significant but relatively minor. At this site, even though the grower annually applied what many consider full ETc, the SWP values indicated that the orchard trees were experiencing mild-to-moderate stress during much of the season, particularly around harvest. According to a previous study, mild-to-moderate stress may not be unusual in commercial almond production. Therefore, it is difficult to determine how much water might be saved statewide if our recommendations for regulated deficit irrigation were widely adopted. Our plant-based strategy for regulated deficit irrigation is based on targeted stress levels at specific stages of crop development. If current grower practice already achieves this stress level, then the water savings shown in this study may not be realized. It will be important to further document current practices in terms of both ETc and SWP in order to have a more reliable estimate of the potential water savings from using regulated deficit irrigation in almond orchards. The water savings in our study might also be improved upon. Depending on winter rainfall and soil type, a plant-based strategy might allow irrigation to be reduced for longer periods of the season in many almond-producing areas of the state. The contribution from rainfall is another important consideration; during this study there were 2 years of below-average rainfall, and the average annual contribution to crop consumptive water use from soil storage was only 11%. In less droughty years, or on soils with a higher water-holding capacity, water savings from plant-based regulated deficit irrigation might have been greater.

Coyote tobacco. Each seed capsule contains more than one hundred seeds, and a thriving plant may ripen hundreds of capsules. Diminutive seed size confers certain advantages. Tiny seeds incorporate into soil and are insulated from the scorching effects of fire. Tobacco plant foliage may serve as a dispersal agent. With a swish of the summer wind, the small seeds spill from the capsules and many adhere to the plant's sticky stems and leaves.

Dry, wind-blown or hand-tossed tobacco foliage inevitably travels with a cargo of seeds. During high water the small seeds may also raft downstream attached to tobacco foliage and stems associated with flotsam. Although coyote tobacco seeds may germinate the spring after they ripen, the seeds can also wait for optimum conditions, maintaining their viability for many years, perhaps more than one hundred. Careful investigations reveal that the coyote tobacco seed bank is finely tuned to disturbance, especially by fire. Maximal seed germination is triggered both by an organic compound found in smoke and by the effects of fire, which remove potential competitors and their germination-inhibiting chemicals. Most large stands of coyote tobacco result from range and woodland fires. In the absence of fire, other triggers influence seed germination. Soil disturbance and the removal of plant debris and competitive vegetation result in a bet-hedging strategy: the germination of a much smaller portion of the seed bank. Thus, without fire, coyote tobacco sometimes appears along graded roadsides, in washes, sand dunes, stock corrals, and in other disturbance-prone environments where the seeds persist. Coyote tobacco has a vast, mostly inland range that extends from Mexico north to Canada, and from the east slope of the Rocky Mountains to the rain-shadowed lower east slopes of the Cascade Mountains of Washington and Oregon. Coyote tobacco is the only native species of tobacco found throughout much of the Great Basin. In California its range extends west across the Cascade and Sierra Nevada mountains to include much of the state. The scattered populations of coyote tobacco along the Oregon-California borderland nearly surround the vicinity of the Upper Klamath River and illustrate the wide diversity of environments to which the species is adapted. East of the Upper Klamath River in southeastern Oregon and northeastern California, coyote tobacco behaves as it does throughout much of the basin and range country, populating burned rangeland and then disappearing a few years later. In the absence of fire, it appears primarily along graded roadsides, occasionally in dense stands that dwindle within a year or two. Not far to the southeast of the Upper Klamath River, the campground and roadsides of Lava Beds National Monument host a scattered crop of tobacco nearly every year. When a fire swept through the area in 1999, the number of plants increased dramatically and then declined to the occasional scattered individual. A short distance to the northwest of the Upper Klamath River, but well above it, coyote tobacco may be found on Siskiyou Summit, the highest point on Interstate 5. On this pass the vegetation is wind-scoured in winter, but the summer season is generally mild, with occasional showers that tend to skirt the drier valleys on both sides. In the summer of 2003, coyote tobacco plants, like a line-up of hitchhikers, occupied both sides of the highway. With access to open habitat, and fertilized with trucker-supplied nitrogenous waste, the plants set tens of thousands of seeds among the roadside foam cups, wads of paper, and plastic bags. To the south and southwest of the study area, coyote tobacco is also found in the Shasta Valley and in the vicinity of Mt. Shasta. The Klamath River is one of only three rivers that cut entirely through the Cascade-Sierra uplift.

Along the southern Oregon and northern California borderland, the Cascade Range serves as a semi-permeable biogeographic boundary between two biogeographic regions: the Great Basin Floristic Province and the California Floristic Province. The trough created by the Klamath River as it slices through the Cascade Mountains functions as a corridor connecting portions of the two provinces. The study area includes the landscapes adjacent to and above the Upper Klamath River where the river transitions from the Great Basin Floristic Province to the California Floristic Province. Throughout the Cascade Mountains portion of the Upper Klamath River, the ranges of species characteristic of each region interweave, influencing the kinds and abundance of plants available for cultural use. Since the seed banks of coyote tobacco are seldom manifest as actual plants in the landscape, it is difficult to assess the geographic range of the species on the basis of a single season of observations. For instance, an intensive one-season plant survey of the Irongate Reservoir vicinity in 1996 did not note coyote tobacco, although under the right circumstances the plants are quite common. The Upper Klamath River Project allowed participants many opportunities to access vast portions of the landscape for many consecutive growing seasons, enabling us to spot widely dispersed and infrequently occurring stands of wild tobacco. Within the study area, coyote tobacco is a regular summertime resident to the east, on the edge of the dry, Great Basin-like Butte Valley along the Oregon-California border. Most August mornings, a sharp eye spots a few small, dusty plants flowering just off the local dirt access roads. The plants respond well to the soil disturbance associated with machine-piled logging slash. One year, robust plants were confined to the vicinity of these piles; the next year, except for a hidden seed bank, they were as absent as the loggers. A dusty ranch with aged outbuildings, adjacent to the Upper Klamath River, is the locale for another small, semi-permanent population. Each year a few plants germinate in the cow- and ATV-churned soil. Single plants and small patches of plants appear and disappear along the rough roads that parallel the river along the Oregon-California border. On one occasion, when a culvert pipe was replaced upslope from Irongate Reservoir, hundreds of plants densely carpeted the area disturbed by the backhoe; the next year there were only two plants. The following year there were none, and none have been observed at this location since. While recently burned areas and roadsides are recognized as common habitats for tobacco, riversides are seldom considered to be accommodating to the plants. However, where the Upper Klamath River cuts through the Cascades, coyote tobacco occasionally appears just below the winter-spring high-water line along the banks of the river. After the flood of 1997, a patch of just a few large plants grew up in a silted side channel not far above the I-5 crossing. Large leaves from these flood-awakened plants were cheerfully harvested by local Shasta tribal member Mary Carpelan. One small gravel bar along the river seems to grow a crop of coyote tobacco year after year. The plants grow best between the river's high-water mark and its summer low-water edge, holding to the places where occasional floods scour shade-producing riparian shrubs and trees from the bank.

The root stock buds well to all the common scion varieties

This chromatogram is reproduced in Figure 74 [Image could not be located]. The use, or combination, of other diagnostic aids may also prove valuable. Thus, Albach and Redman used the composition of flavones in citrus fruits as an aid in citrus classification, but these compounds might be a factor in bark identification as well. Dreyer used citrus fruit bitter principles for chemotaxonomy in the Rutaceae. Esen and Soost found that, based on the occurrence or absence of browning in young shoot extracts, Citrus taxa can be classified into two phenotypes: browning and nonbrowning. The ability to cause browning in shoot extracts has been shown to be due to a single gene, which controls the production of a substrate of polyphenol oxidase. Esen and Soost suggest the technique might be useful not only as a genetic marker, but also as a taxonomic criterion when used with other procedures. Another very helpful chemical technique to aid in citrus identification, particularly in distinguishing between nucellar and zygotic seedlings, is the use of isozymes. This procedure is based on the horizontal starch gel electrophoresis of heterozygous loci found in leaf extracts of the cultivars to be analyzed, whether known cultivars or new hybrid progenies. Perhaps the first researchers to use the method as a practical approach in Citrus were Ueno, Ueno and Nishiura, and Ueno and Nishiura. Ueno and Nishiura were very successful in identifying hybrid and nucellar seedlings in progeny obtained from a breeding program extending over a 10-year period at the Fruit Tree Research Station at Okitsu, Japan. The progeny of the crosses were categorized by analysis of the leaf enzymes using peroxidase isozyme electrophoresis.

Ueno extended this technique to confirm the identity of Citrus species and varieties in the collection at the Okitsu Station. Ueno and Nishiura used the same procedure to study the graft hybrids Kobayashi mikan, Kinkoji-unshiu, and Takagi-mikan. They were able to establish that the Kobayashi mikan and the Kinkoji-unshiu were true graft hybrids, but that the Takagi-mikan was not. Torres, Soost, and Diedenhofen were the first to report clearly the allozyme systems in Citrus. They reported 19 codominant identifiable alleles distributed among four loci controlling three enzymes. These three gene-enzyme systems were glutamate oxaloacetate transaminase, phosphoglucose isomerase, and phosphoglucose mutase. Using this technique, Soost, Williams, and Torres were able to distinguish clearly between zygotic and nucellar five-month-old citrus seedlings. The seedlings were from a cross using King mandarin as the female parent and Parson's Special mandarin as the pollen parent. Of the 128 seedlings obtained from the cross, and using the two genetically defined markers, they found 18 to be nucellar and 110 to be zygotic. Fortunately, all of these seedlings were planted in the orchard for further observations. The work of Soost and Torres extends and amplifies the work of Torres, Soost, and Diedenhofen. Three additional gene-enzyme systems, malate dehydrogenase, hexose kinase, and isocitrate dehydrogenase, were determined and used for possible identification of cultivars in the subgenus Eucitrus. Additional taxa were analyzed for glutamate oxaloacetate transaminase, phosphoglucose isomerase, and phosphoglucose mutase. Examples of the use of the three latter enzyme systems are presented in the paper, as well as considerable references to the numerous cultivars in the Citrus Variety Collection at the Citrus Research Center, Riverside.
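The inference behind these allozyme comparisons can be stated compactly: a nucellar seedling is genetically identical to the seed parent, so any assayed locus at which a seedling's genotype differs from the maternal genotype marks it as zygotic. The sketch below is a minimal illustration; the locus names follow the enzyme abbreviations discussed (GOT, PGI, PGM) and the genotypes are invented.

```python
# Classify a seedling as nucellar or zygotic from codominant allozyme
# genotypes. Nucellar seedlings are clones of the seed parent; a
# mismatch at any locus proves a seedling zygotic. A zygotic seedling
# that happens to match at every assayed locus cannot be excluded,
# which is why assaying more loci gives more certainty.

def classify_seedling(mother: dict, seedling: dict) -> str:
    for locus, maternal in mother.items():
        if tuple(sorted(seedling[locus])) != tuple(sorted(maternal)):
            return "zygotic"
    return "nucellar"  # identical at all assayed loci

# Hypothetical genotypes at three loci (GOT, PGI, PGM):
king_mandarin = {"GOT": ("a", "b"), "PGI": ("a", "a"), "PGM": ("b", "c")}
seedling_1 = {"GOT": ("a", "b"), "PGI": ("a", "a"), "PGM": ("b", "c")}
seedling_2 = {"GOT": ("b", "c"), "PGI": ("a", "a"), "PGM": ("b", "c")}

print(classify_seedling(king_mandarin, seedling_1))  # nucellar
print(classify_seedling(king_mandarin, seedling_2))  # zygotic
```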

Torres, Soost, and Mau-Lastovicka emphasize that isozymes of Citrus provide molecular tags to determine the genetic origin of citrus seedlings. A very large proportion of all possible seedlings, either from selfing or crossing, can now be distinguished with a great deal of certainty as to their genetic origin. Citrus leaf extracts were analyzed by starch gel electrophoresis for the previously mentioned enzymes plus hexokinase, isocitrate dehydrogenase, leucine aminopeptidase, malate dehydrogenase, and malic enzyme. The isozyme technique is wonderful for distinguishing between nucellar and zygotic seedlings, but for other uses it does have limitations. In some cases it may accurately identify the genetic makeup of cultivars; in others, not. However, it is often extremely helpful in that, although it cannot tell one the parentage of a cultivar, it can tell one what the cultivar is not. Sibs out of the same cross cannot be distinguished from each other. Orange cultivars cannot be accurately identified, or only with little confidence. Carrizo and Troyer citrange have not been definitely distinguished. Eureka, Lisbon, and Villafranca lemon strains cannot be distinguished, and so on. Many of the various chemical tests are rather complicated and require not only special supplies and equipment, but also trained technicians. If such tests are necessary, the grower should obtain the services of a professional laboratory with experience in these techniques. Furthermore, these tests are not practical for use by nurserymen who grow thousands or hundreds of thousands of trees. One should use root stocks which are highly nucellar, such as Troyer and Carrizo citrange, Rough lemon, sweet orange, etc. Using root stocks like Sacaton citrumelo and Citrus taiwanica, which have only about a 50% nucellar rate, can provide nothing but trouble. The nurseryman cannot possibly discard all the zygotes effectively and hence, tree variation and poor performance in the orchard are to be expected. The sweet orange, Citrus sinensis, achieved prominence as a root stock only in California.

In Southern California during the period from 1900 to 1950 it was probably the principal root stock in use, except on heavy soils and the sandy soils of the Coachella Valley. Since 1950, its use as a root stock has steadily declined, so that by about 1970 it was seldom planted except in certain situations. As a root stock, its performance in California was very satisfactory, with all major commercial scions performing comparably to, or better than, the same scion trees on sour orange. Such trees were superior in performance to trees on Rough lemon, grapefruit, Cleopatra mandarin, trifoliate orange, etc.; see Webber, Wutscher, and Castle. Sweet orange was used to a very limited extent in Florida and Australia. It was well tested in South Africa by Marloth, but never achieved acceptance. The sweet orange consists of a homogeneous group of cultivars, one of the most uniform of the citrus species. Maximum variability may exist between the Valencia and the navel, the red-pigmented "blood" oranges, the pink-pigmented varieties, the blond oranges, the acidless oranges, and the seedy varieties that have been used for root stocks. They are commonly and collectively referred to in California as 'sweets,' 'seedy sweets,' or 'Mediterranean sweets.' According to origin, they may be 'Blackman', 'Koethen', 'Hinckley', 'East Highland', 'Olivelands', etc. In Florida, seedy varieties like 'Pineapple,' 'Parson Brown,' 'Homosassa,' and 'Florida common' were some of the sources used. The sweet orange varieties grow readily from seed, but the seeds are easily injured by drying out and must be handled carefully. All sweet orange varieties are highly nucellar, and require a minimum amount of rogueing in the seed beds and nursery to remove the variants. The seedlings grow more slowly when young than those of the sour orange, and produce low-branching, bushy trunks which may require more shaping in the nursery prior to budding than seedlings of Rough lemon, Troyer citrange, Cleopatra mandarin, and others. The scion buds, after insertion, grow rapidly and produce large budlings with most scions, but not as large as on Rough lemon, Rangpur lime, Alemow, etc. The sweet orange may be grown from cuttings more easily than the sour orange, but not as easily as the Rough lemon. As a root stock, the sweet orange in California is only medium in cold resistance, more hardy than Rough lemon but perhaps not as good as sour orange. When mature trees are frozen to the ground, as in a Florida freeze, it sprouts readily from the base of the trunk. The sweet orange usually gives normal bud unions with all varieties of sweet orange, mandarin, grapefruit, lemon, and lime. That is to say, the stock and scion are usually nearly equal in size, particularly in young trees. With older trees, there may be a tendency for a slight bulge at the union, and the stock may be slightly smaller than the scion, but not to the degree attained on sour orange stock. This is generally true with grapefruit and lemon scions. In some instances, Eureka lemon trees on sweet orange may show a slight bud union overgrowth. For illustrations of these bud unions, see Figure 76 [Image could not be located]. The sweet orange grows fairly well on some heavy soils, but is best adapted to growth on rich sandy loams. Trees on sweet orange do not do well on very sandy soils, such as those of the Coachella Valley of California, nor on extremely heavy or calcareous soils, such as the Porterville or Ducor adobes.
On heavier soils that are poorly drained, the trees may develop severe iron chlorosis symptoms, and this is also true on calcareous soils. Embleton et al. and Wutscher found that trees on sweet orange root may have higher leaf levels of nitrogen, phosphorus, and copper than trees on other root stocks.

Trees on sweet orange root stock do not commonly develop a well-differentiated tap root like those on sour orange root stock; they are usually moderately shallow-rooted, rarely penetrating to the depth that sour orange roots do, although an occasional lateral may. The sweet orange does, however, develop an abundant system of lateral roots which generally penetrate deeper than those of Rough lemon in California soils. Illustrations of these root systems may be found in the section on roots. Navel orange cuttings produced only a few very large surface laterals with little penetration. On the other hand, Valencia orange cuttings produce a more abundant root system, similar to that obtained from sweet orange seedlings. There would appear to be no disadvantage in using Valencia orange cuttings for orchard trees in areas where sweet orange could satisfactorily be used as a root stock. Trees on sweet orange root stock are large and vigorous, producing standard sizes in combination with all commercial citrus varieties in California. In most areas of California, the sweet orange trees are larger than those on sour orange root stock, but smaller than trees on Sampson tangelo or Troyer citrange. Also, in California they have, on the better soils, produced larger trees than those on Rough lemon, but in the sandy soils of Florida they are generally smaller. Yields on sweet orange root stock are good, generally among the higher echelon with all scion varieties except navel oranges, where trees on sour orange, Troyer citrange, or the non-commercial Morton citrange out-yielded them. Rios Castaño, Torres, and Camacho report that orange trees on sweet orange root stock in Colombia were low in production, but offer no explanation. The sweet orange combinations are not as precocious in bearing as trees on trifoliate orange or Alemow, but under favorable situations the trees are long-lived and bear well into the advanced age of 50-60 years or longer. One orchard at the Citrus Experiment Station, Riverside, must be 80 years old and is still productive, although tree size is getting out of hand. There have been few losses from gummosis in the old orchard, but some trees have declined from psorosis. In California, fruits on sweet orange stocks mature at the normal season for the variety; they are thin-skinned, juicy, and of high quality, and hold up well in all physical and chemical characters to the extreme end of the long harvesting season. Percent juice, soluble solids, and citric acid content of the fruits are essentially identical to those obtained on sour orange, with all varieties and in all areas of California. Wutscher, however, reports that the acid content of fruit on sweet orange root stock in Texas was higher than that of fruit on sour orange. This is not true in California. Fruits on sweet orange root stock are thus intermediate in quality, being superior to those grown on Rough lemon, sweet lime, or Alemow. They are, however, of poorer quality than those grown on trifoliate orange or Troyer and Savage citranges. Granulation of the fruit is generally not a serious problem as compared to other stocks.

Low-producing orchards had fewer feeder roots in the row middles as well as under the trees

The root terminated 9.1 M from the tree trunk and was 11.3 M long. At no place was the root more than 46 cm below the ground surface, and at the free end it was only 15 cm below the surface. In the imperfectly drained east coastal soils of Florida, Ford reported that stabilizing the water table at a lower level increased the total rooting area, and the newly developed roots survived without periodic destruction. Lowering the water table from 76 to 178 cm doubled the quantity of feeder roots in four years and increased the size of the tree. Cahoon, Harding, and Miller found that the higher the tree yields, the more feeder roots were found in the irrigated row middles. In several high-producing orchards the amount of roots found between trees actually exceeded that found under the trees. Cahoon, Huberty, and Garber report on a differential furrow irrigation treatment applied from 1934 to 1957 to a Washington Navel orange orchard on sweet orange stock. The treatments were frequent versus infrequent irrigation. In 1957 root samples were taken to a depth of 122 cm. Trees irrigated on a frequent schedule produced fewer deep roots than the trees irrigated at an infrequent interval. The difference was more evident at the 61-92 cm levels. Samish in Israel reported essentially the same thing. Cahoon and Stolzy in California used a neutron moderation method to estimate root distribution as affected by irrigation and root stocks. They encountered troublesome problems with soil moisture variability, soil profiles, etc. Ford found poor root growth in the leached zone of certain acid soils of the imperfectly drained Florida flatwoods.

In laboratory tests, poor root growth was not corrected by the application of adequate water and nutrients. In laboratory tests the Rough lemon produced the best feeder and lateral roots, even better than sour orange. Damage to the roots was more severe at the low pH of 5.0. Roots of Cleopatra were severely damaged at both low and high pH (5.0-6.5). Ford also states that the relatively poor feeder root growth of trifoliate orange, together with the root damage that occurred at pH 5.0 when flooded, suggests this stock should be carefully evaluated. On the other hand, the satisfactory tolerance of Rangpur lime to flooding warrants further study. Ford says the citrus root system is capable of rapid and deep growth in sandy soils but will not grow into, or exist long in, a soil saturated with water. When the water table is within 60 cm of the surface, roots are confined to a shallow zone. Fluctuating water tables have a pronounced effect on the root system. He compared roots from trees in orchards with 1.8 M deep drain lines to those in an adjacent undrained orchard. In the undrained orchard the highest percentage of roots was at 0-25 cm, less at 25-50 cm, and almost none below 50 cm. In the drained orchard there was good rooting to 50 in. and some rooting even to 180 cm, but less as the distance from the drain line increased. Stabilizing the water table at a lower level increased the total rooting area, and newly developed roots survived. Lowering the water table from 75 cm to 180 cm doubled the quantity of feeder roots in four years and increased the size of the trees. The feeder root concentration in the deep rooting zone (75-180 cm) was greater than in the 0-25 cm level. In Israel, Cossman established a close correlation between the vigour of stocks on sandy soil and the osmotic pressure of their root cells. The slow-growing group, represented by pummelo, grapefruit, and sour orange, have remarkably low figures for their osmotic pressure.

The roots of this group are easily outclassed by the retentive forces of the soil particles whenever the wilting range is approached. In Texas, Adriance stresses the importance of the tap root system of sour orange, but points out that the major portion of the root system was between 46 and 61 cm. He emphasizes the importance of environment, natural habitat of the species, aeration, water table, salt content, and stratified soils. Adriance and Hampton examined the root systems of trees on sour orange grown on different soil types and subjected to different cultural practices. A poor, stunted tree grown on a very dense and compact soil had a lateral root spread of 1.8-2.1 M. There were few roots 1.2-2.5 cm in diameter in the upper 21 cm of soil, a minimum of fibrous roots down to 61 cm, and no roots below that zone. A medium-sized tree grown on a compacted soil underlain with caliche at 125 cm had small roots, up to 2.5 cm in size, that were well distributed although they did not penetrate deeply. A large tree grown on a good-textured soil extending to a depth of 152 cm had roots down to 152 cm and below. Another tree, in a tilled orchard which was disked to a depth of 10 in., had good roots around the tree, but there was little lateral spread beyond that distance. A tree under nontillage had roots with a lateral spread of 5.5-6.1 M. In California, Crider found that citrus roots were distributed largely according to the character and the previous cultivation treatment of the soil. In the case of a 25-year-old tree there were practically no roots below 1.2 M due to a tenacious subsoil. A 30-year-old tree on a dry sandy soil showed good root development to a depth of 2.7 M. In a well-cultivated and fertilized orchard with young trees 3-6 years old, 50 per cent of the roots were in the first 46 cm of soil. On the other hand, older, indifferently handled trees showed greater root accumulation in the 30-60 cm and 60-90 cm layers.

Young stresses the importance of soil texture, drainage, aeration, and moisture relationships to citrus root development. In Florida, Ford observed Hamlin and Valencia oranges on Cleopatra and Rough lemon at 15-21 years of age growing on a red sandy clay lying some 46 cm-4.8 M below the soil surface. Trees on Cleopatra were 46-92 cm taller than the trees on Rough lemon where the roots penetrated into the clay. The height of the trees on Rough lemon decreased as the clay came closer to the surface. A restriction in root growth imposed by the clay did not consistently increase feeder root concentration above the clay. Root growth ceased when the clay percentage was above 28 per cent. The feeder root concentration of 15-year-old Hamlins and Valencias on Cleopatra growing in deep sandy soil was greater than on Rough lemon, even though the trees were smaller than the same scions on Rough lemon. At Riverside, much of the area occupied by the citrus root stock trials initiated by Webber was underlain with impenetrable hardpan. At one location, where the hardpan was approximately 1 M from the surface, some of the deep-tap-rooted trees such as sour orange had tap roots growing down to the hardpan and then fusing together in a solid plate, like a pedestal, from which the roots diverged at a lateral angle. In the Azusa-Covina area of California, many of the soils are alluvial sandy loams, especially adjacent to washes, subject to flooding and underlain with sand and gravel substrata. In such soils, where the alluvium was deep, the roots of sour orange penetrated to a depth of 2 M or more with few laterals. However, as the trees in the orchard approached the stream bed, the sour orange roots penetrated only to a depth of 60 cm or less, with no tap roots but a well-developed system of surface laterals. Fertilizers and nutrition also play a big part in citrus root development. In Florida's deep sandy soils, Ford, Reuther, and Smith found nitrogen was the primary element influencing root development in two fertilizer experiments after six years of differential treatment. The high-nitrogen plots had 37 per cent fewer feeder roots than the low-nitrogen plots to a depth of 1.5 M. Neither potassium nor magnesium had any appreciable effect on root development. In the second plot there were 38 per cent fewer feeder roots at 13-89 cm in the high-nitrogen regime as compared to the low-nitrogen levels. They felt a direct salt concentration ["Check" appears here in typescript in the margin of the manuscript] was responsible for the effects. In California, Cahoon et al. examined the effects of various types of nitrogen fertilizer on root density and distribution as related to water infiltration in a long-term fertilizer experiment on a sandy loam soil. They found that various nitrogen treatments, particularly the long-term application of sodium nitrate and ammonium sulfate, reduced root concentrations in the first 10 cm of soil. In tropical Trinidad, Gregory found the root systems of Marsh grapefruit on sour orange were more vigorous and extensive on manured trees than on unmanured trees. The manured trees had several lateral roots which exceeded the average spread of the branches and extended 106 cm from the trunk on 3-year-old trees. Most were shorter, and feeder roots occurred 8-46 cm from the trunk. The unmanured trees had shorter roots, and the main feeding roots were only 8-31 cm from the trunk.

In Florida’s deep sandy soils, Spencer found that phosphate applications markedly reduced the concentration of feeder roots, especially in the surface 30 cm of soil. Reductions in root growth were not noted in the deeper soil zones even at the highest phosphate rates. Similar observations were made by Smith and Ford. Smith and Specht suggested that an increase in iron chlorosis in Florida was mainly caused by an accumulation of copper in the soil, with consequent root damage. Since copper accumulates primarily in the topsoil, they suggested trees became chlorotic because of root damage in that zone. Chelated iron applied to seedlings in soil solution did not overcome the stunted root system associated with high copper levels. Ford had shown that in Florida 70 per cent of the feeder root system of healthy trees growing in deep sandy soils is located below 25 cm. Ford found that feeder root damage in orange trees affected with severe iron deficiency was not confined to the topsoil. Feeder root damage like that from copper toxicity was found to a depth of 1.5 M in groves located near lakes and swamps. Soil pH was low in the 0-25 cm zone, with the subsoil at pH 3.9-4.4. All the groves had a high concentration of copper in the topsoil. The application of FeEDTA chelate to chlorotic trees which showed extreme root damage to a depth of 1.5 M resulted in pronounced new growth of roots in the subsoil. The increase in root growth was proportionately greater with increasing depth, so that often there were more new roots in the 75-150 cm zone than in the 25-75 cm zone. Where iron chelate resulted in new leaf and shoot growth, there was a corresponding increase in feeder root growth, which occurred mostly below the 25 cm depth. If feeder roots were found in the 0-25 cm zone under chlorotic trees, treatment resulted in an increase in the number of feeder roots on the laterals. If there were no lateral roots in the surface 25 cm, then no new feeder roots were present after treatment. Changes in soil pH greatly influenced the distribution of feeder roots throughout the entire root profile. Ford says that, in general, root concentration is highest when nutrient elements are low but not deficient. At high rates of fertilizer application the concentration of roots is reduced for all major elements. The correlation did not apply to the micro-elements: a deficiency of iron severely reduced the root system, and an excess of copper or manganese prevented growth of the feeder roots due to toxicity in certain soil horizons. He suggested that, from the standpoint of the root system, the lowest level of nutrients consistent with high yield and healthy trees was best. The effect of excessive accumulations of micronutrients such as Cu, Zn, and Mn on mycorrhizae and, in turn, on root development has yet to be fully evaluated.

W-SUDs was used on average twice per week during the 8-week program

Demographic items included self-reported sex, race and ethnicity, age, marital status, employment status, residential zip code, and sheltering-in-place status given the COVID-19 pandemic. The Alcohol Use Disorders Identification Test-Concise (AUDIT-C), a widely used 3-item self-report measure based on the 10-item original AUDIT, assessed hazardous or harmful alcohol consumption in the past 3 months. A score of 4+ for men and 3+ for women indicated significant problems with alcohol consumption. The AUDIT-C has been found to be a valid screening test for heavy drinking and/or active alcohol abuse or dependence. The Drug Abuse Screening Test-10 (DAST-10), a 10-item self-report measure adapted from the 28-item DAST, assessed consequences related to drug abuse, excluding alcohol and tobacco, in the past 3 months. The last item of the DAST-10, regarding medical problems resulting from drug use, was not reassessed because it was an exclusion criterion in the study screener; hence, the total possible range for the sample was 0-9, not 0-10. Total scores of 3+ indicated significant problems related to drug abuse. The DAST-10 has moderate test-retest reliability, sensitivity, and specificity. For the AUDIT-C and DAST-10 measures at post treatment, the reference period was the past 2 months, to reflect the period of intervention. Craving was assessed with a single item asking, “In the past 7 days, how much were you bothered by cravings or urges to drink alcohol or use drugs?”, with response options of not at all, a little bit, moderately, quite a bit, and extremely. The Brief Situational Confidence Questionnaire, a state-dependent measure, assessed self-confidence to resist the urge “right now” to drink heavily or use drugs in different situations, reported on visual analog scales anchored from 0% “not at all confident” to 100% “totally confident.”
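By way of illustration, the screening rules above reduce to a few lines of code. The following is a minimal sketch; the function names and the sex coding are illustrative, not taken from the study's materials:

```python
def audit_c_flag(item_scores, sex):
    """AUDIT-C: three items, each scored 0-4 (total 0-12).
    Totals of 4+ for men and 3+ for women indicate significant
    problems with alcohol consumption."""
    cutoff = 4 if sex == "male" else 3
    return sum(item_scores) >= cutoff

def dast10_flag(item_scores):
    """DAST-10 as administered here: the medical-problems item was
    dropped (it was an exclusion criterion), leaving 9 yes/no items
    scored 0/1 (total 0-9). Totals of 3+ indicate significant
    drug-related problems."""
    return sum(item_scores) >= 3

# Example: a man scoring 2, 1, 1 across the AUDIT-C items totals 4
# and therefore screens positive.
assert audit_c_flag([2, 1, 1], sex="male")
```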

The Patient Health Questionnaire-8 (PHQ-8), an 8-item scale, assessed depressive symptoms, and the Generalized Anxiety Disorder-7 (GAD-7), a 7-item scale, assessed symptoms of generalized anxiety disorder. Both the PHQ-8 and GAD-7 have good internal consistency and demonstrated convergent validity with measures of depression, stress, and anxiety. A total of 2 items assessed the history of therapy for mental health or substance use concerns. Lifetime psychiatric diagnoses were assessed using 10 items plus a write-in option for others. A single item assessed currently taking prescribed medications for a psychiatric diagnosis. The treatment feasibility and acceptability of W-SUDs were assessed post treatment using the Usage Rating Profile-Intervention (URP-I) Feasibility and Acceptability scales, the 8-item Client Satisfaction Questionnaire (CSQ-8), and the 12-item Working Alliance Inventory-Short Revised (WAI-SR). The URP-I item response options ranged from strongly disagree to strongly agree; the items were summed for a total score within each scale, with one feasibility item reverse coded. The CSQ-8 items have 4-point rating scales with response descriptors that vary. Internal consistency exceeds 0.90, and the total sum score ranges from 8 to 32, with higher total scores indicating higher satisfaction. The WAI-SR has three 4-item subscales, with 5-point rating scales, that reflect the development of an affective bond in treatment and the level of agreement with treatment goals and treatment tasks. Serious adverse events occurring in the 8 weeks after the start of the study were assessed for hospitalization related to substance use, suicide attempt, alcohol or drug overdose, and severe withdrawal. Positive endorsements were followed up with questions about the timing, diagnosis, and resolution. If additional details were needed to determine whether an event was study related, a team member reached out to the participant. Serious adverse events were reported to the study’s Data Safety Monitoring Board within 72 hours of the team learning of the event. Participants’ W-SUDs app use, including days of app use, number of check-ins, and number of messages sent, was collected via the Woebot app, as were module completion rates, lesson acceptability ratings indicated on a binary scale, and mood impact after tool utilization. In addition, on a daily basis, the W-SUDs app assessed mood, cravings or urges to use, and pain. In-the-moment emotional state was reported through emoji selection from a default menu of 19 total moods, including negative, positive, and average mood options, with an additional ability to type in free-text emotion words and/or self-selected emoji expressions. Cravings were assessed as not at all, a little bit, moderately, quite a bit, or extremely.
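To make the acceptability scoring concrete, the following sketch implements the totals as described above. The 6-point URP-I response scale and the item ordering are assumptions, not details reported here:

```python
def csq8_total(items):
    """CSQ-8: eight items on 4-point scales (1-4); the summed total
    ranges from 8 to 32, with higher scores indicating greater
    satisfaction."""
    assert len(items) == 8 and all(1 <= x <= 4 for x in items)
    return sum(items)

def urp_scale_total(items, reverse_index=None, scale_points=6):
    """URP-I scale: responses (strongly disagree .. strongly agree)
    are summed within each scale, with one feasibility item reverse
    coded. A 6-point response scale is assumed here; reverse coding
    maps a response x to (scale_points + 1) - x."""
    vals = list(items)
    if reverse_index is not None:
        vals[reverse_index] = (scale_points + 1) - vals[reverse_index]
    return sum(vals)
```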

Physical pain was rated on a scale of 0 to 10.

Descriptive statistics were used to describe the sample and examine the ratings of program feasibility and acceptability. Paired-samples t tests and McNemar nonparametric tests examined within-subject changes from pre- to post treatment on measures of substance use, confidence, cravings, mood, and pain. Change scores were calculated, and bivariate correlations were used to examine associations between changes in AUDIT-C and DAST-10 scores and changes in use occasions, confidence, and depression and anxiety scores. t tests were conducted to examine changes from pre- to post treatment in substance use, confidence, mood, and pain by whether participants were currently in therapy or taking psychiatric medications. Post treatment survey completion was 50.5%, with better retention among those with a higher CAGE-AID score at screening. Retention was lowest among those with a CAGE-AID score of 2 and higher among those scoring 3 or 4. Retention was unrelated to participant demographic characteristics, previous use of Woebot, psychiatric diagnoses, primary problematic substance, depressive symptoms, pain, cravings, confidence, substance use occasions, AUDIT-C scores, or DAST-10 scores. Missing data on individual survey items were minimal. In a single instance, a participant’s average item score was imputed for 1 missing item on the PHQ-8. Participants were prompted to report craving and pain ratings within the W-SUDs app on a daily basis. These data were aggregated so that if participants provided multiple ratings within a day, the scores were averaged. To examine changes over time, generalized estimating equation (GEE) linear models were run with week entered as a factor, setting week 1 as the reference category. A total of 1571 mood ratings were entered into the W-SUDs app by 90 of the 101 participants, with each participant entering on average 17.5 mood ratings, or 2.2 per week. A total of 1399 craving and 1403 pain ratings were entered into the W-SUDs app by 87 of the 101 participants, with each participant providing an average of 16.1 ratings for cravings and 16.1 ratings for pain. Table 2 shows the number of participants providing craving ratings for each week and summarizes the GEE model analyzing craving ratings over time. Compared with week 1, craving ratings were significantly lower at weeks 4 through 9. By weeks 8 and 9, craving ratings had been reduced by approximately half of the sample’s mean rating at week 1.
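A minimal sketch of the within-day aggregation and the GEE specification described above follows, assuming a long-format data set with hypothetical column names (participant_id, date, week, craving); it is not the study's analysis code:

```python
import pandas as pd
import statsmodels.api as sm

# One row per in-app craving rating; assumed columns:
# participant_id, date, week (1-9), craving (0-4).
ratings = pd.read_csv("wsuds_craving_ratings.csv")

# Average multiple ratings given by the same participant on the
# same day, as described in the text.
daily = (ratings
         .groupby(["participant_id", "week", "date"], as_index=False)
         ["craving"].mean())

# GEE linear model with week entered as a factor and week 1 as the
# reference category; repeated measures cluster within participant.
model = sm.GEE.from_formula(
    "craving ~ C(week, Treatment(reference=1))",
    groups="participant_id",
    data=daily,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())

# Pre/post paired comparisons for completers would use, e.g.:
# from scipy import stats
# t, p = stats.ttest_rel(pre_scores, post_scores)
```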

In contrast, pain ratings (on a scale of 0 to 10) did not differ significantly by week over the 9 weeks.

W-SUDs, an automated conversational agent, was feasible to deliver, engaging, and acceptable, and it was associated with significant pre- to post treatment improvements in self-reported measures of substance use, confidence, craving, depression, and anxiety, and in in-app measures of craving. The W-SUDs app registration rate among those who completed the baseline survey was 78.9%, comparable with other successful mobile health interventions. As expected, use of the W-SUDs app was highest early in treatment and declined over the 8 weeks. Study of engagement with digital health apps has been growing, with no consensus yet on ideal construct definitions. Simply reporting the number of messages or the minutes spent on an app over time may obscure how the type and pattern of app utilization relate to the clinical outcomes of interest. Further research in this area is warranted. The observed reductions from pre- to post treatment in measures of depression and anxiety symptoms were consistent with a previous evaluation of Woebot conducted with college students self-identified as having symptoms of anxiety and depression. Furthermore, in this study, treatment-related reductions in depression and anxiety symptoms were associated with declines in problematic substance use. Declines in depressive symptoms observed from pre- to post treatment were greater among the participants in therapy. This study also examined working alliance, proposed to mediate clinical outcomes in traditional therapeutic settings. Traditionally, working alliance has been characterized as the cooperation and collaboration in the therapeutic relationship between the patient and the therapist. The role of working alliance in relationally based systems and digital therapeutics has been previously considered; the potential of alliance to mediate outcomes in Woebot should be further validated in future studies adequately powered to examine mediators of change. Measures of physical pain did not change with the use of W-SUDs, as reported in pre- and post treatment measures or within the app; however, the sample’s baseline ratings of pain intensity and pain interference were low. Although not a direct intervention target, pain was measured due to the potential for use of substances to self-treat physical pain and the possibility that pain might worsen if substance use were reduced, which was not observed here. Within-app lesson completion and content acceptability were high for the overall sample, although there was a wide range of use patterns. Most participants used all facets of the W-SUDs app: they tracked their mood, cravings, and pain; completed on average more than 7 psychoeducational lessons; and used tools in the W-SUDs app. Only about half of the sample completed the post treatment assessment, with better retention among those screening higher on the CAGE-AID. That is, those with more severe substance use problems at the start of the study, and hence in greater need of the intervention, were more likely to complete the post treatment evaluation. None of the other measured variables distinguished those who did and did not complete the post treatment evaluation. This level of attrition is commensurate with attrition rates in other digital mental health trials.
By addressing problematic substance use, including but not limited to alcohol, the W-SUDs intervention supports and extends a growing body of literature on the use of automated conversational agents and other mobile apps to support behavioral health.

A systematic review of mobile and web-based interventions targeting the reduction of problematic substance use found that most web-based interventions produced significant short-term improvements in at least one measure of problematic substance use. Mobile apps were less common than web-based interventions, with weaker evidence of efficacy and some indication of causing harm. However, mobile interventions can be efficacious. Electronic screening and brief intervention programs, which use mobile tools to screen for excessive alcohol use and deliver personalized feedback, have been found to effectively reduce alcohol consumption and alcohol-related problems. However, rigorous evaluation trials of digital interventions targeting non-alcohol substance use are limited. Furthermore, although a systematic review concluded that conversational agents showed preliminary efficacy in reducing psychological distress among adults with mental health concerns compared with inactive control conditions, this is the first published study of a conversational agent adapted for substance use. Study strengths include enrollment at double the initial recruitment goal, reflecting interest in W-SUDs. Most participants reported lifetime psychiatric diagnoses, and approximately half endorsed current moderate-to-severe levels of depression or anxiety. From pre- to post treatment with W-SUDs, participants reported significant improvements in multiple measures of substance use and mood. The delivery modality of W-SUDs offered easy, immediate, and stigma-free access to emotional support and substance use recovery information, particularly relevant during a time of global physical distancing and sheltering in place. More time spent at home, coupled with reduced access to in-person mental health care, may have increased enrollment and engagement with the app. Although further data on recruitment and enrollment are warranted, these early findings suggest that individuals with SUDs are indeed interested in obtaining support for this condition from a fully digital conversational agent. This study had a single-group design, and the outcomes were short term and limited to post treatment, limiting the strength of the inferences that can be drawn. The sample was predominantly female, identified as non-Hispanic White, and was mostly employed full-time. Non-Hispanic White participants reported higher program acceptability on 2 of the 4 measures compared with participants from other racial or ethnic groups. Future research on W-SUDs will use a randomized design with longer follow-up and will recruit a more diverse population, using sampling quotas to ensure racial and ethnic diversity and to better inform cultural tailoring of the program.