Elevated CO2 levels have been demonstrated to increase CH4 emissions in marsh ecosystems.

Adoption of reduced tillage management systems often leads to increased carbon content and increased water-filled pore space (WFPS) in soils, which in turn may support higher rates and longer duration of denitrification. Also, different types of nitrogen fertilizers produce different amounts of N2O under the same reduced tillage management systems, so fertilizer-tillage interactions must also be considered. With respect to effects of water content, Dobbie and Smith found that N2O emissions from arable soils increased about 30-fold when the WFPS was increased from about 60% to 80%. Smith suggested that, contrary to expected decreases in N2O emissions with decreased precipitation, increased temperature might stimulate respiration and lead to oxygen depletion. This would, in turn, increase the anaerobic volume in soil pore spaces and consequently favor denitrification and N2O emissions. Based on these estimates, N2O emissions from both arable lands and natural ecosystems in California may increase with rising temperature, even though it is difficult to predict the exact magnitude of the increase. Although studies indicate a positive relationship between temperature, precipitation, and denitrification, predictions of how N2O emissions will respond to global warming are complicated and vary between models. For example, in the Denitrification-Decomposition model and Land Use Emissions submodels, the underlying assumption is that there is a positive relationship between temperature and precipitation. However, the Carnegie-Ames-Stanford model assumes a negative feedback on N2O emissions as a result of climate change. Since denitrification in soils is regulated by many factors, as described above, and given the paucity of data on N2O emissions, it is difficult to predict the effect of climate change on N2O emissions.

A small change in N2O concentration can lead to a large difference in global warming potential, since the GWP of N2O is about 300 times that of CO2. Thus, studying the effect of changes in temperature and precipitation on N2O emissions is a high priority for the state. To support these efforts, more precise methods of measuring N2O emissions in the field are needed.

Methane is produced by the process of methanogenesis, under strictly anaerobic conditions, by microorganisms that either use hydrogen gas as an energy source with CO2 as an electron acceptor or ferment acetate. The process occurs in the digestive system of ruminant herbivores as well as in soils that are saturated with water and therefore depleted in oxygen. Soils with high amounts of organic matter, found in California wetlands and the Delta area, are a particular class of soil with high potential for methane production. The current estimate of wetland acreage in California, areas where Histosols are the common soil type, is approximately 450,000 acres; some of these soils are under agricultural production. Even though these soils represent only 0.4% of the state's area, the GHG emissions from these soils could surpass those of the mineral soils, due to their emission rates and the high global warming potential of CH4 and N2O. Temperature, irrigation, fertilization, available carbon, and seasonal variations are among the factors that influence production of methane in soil. This section focuses on methane formation in soils. There has been little consideration of the effects of temperature and CO2 on CH4 emissions in flooded rice fields and wetlands in California. Studies from other parts of the world provide insights regarding how methane emissions may respond to climate change. Watanabe et al. reported that temperature is a major factor causing seasonal variation in CH4 emission rates during continuous flooding and that higher cumulative temperature leads to higher total CH4 emissions.

Allen et al. suggested that elevated CO2 and higher temperatures increase CH4 emissions in flooded rice soils due to greater root exudation or root sloughing mediated by increased seasonal total photosynthetic CO2 uptake. Other studies, however, have not observed positive correlations between temperature and methane emissions. Clearly, more extensive studies of gas fluxes from flooded ecosystems are needed to predict potential effects of elevated temperature and CO2 on CH4 emissions in California's flooded rice fields and wetlands. The potential effects of climate change on GHG emissions are quite diverse and controlled by numerous factors. However, due to the long-term legacy of today's GHG emissions, it is sensible to formulate alternatives to adapt to climate change. At present, formulating adaptation strategies for California agriculture to increasing GHG concentrations in the atmosphere is based on theory and extrapolations from global-scale assessments. Our aim should be to decrease negative impacts, promote any potential positive impacts that may result from these adaptive strategies, and reduce environmental and social pressures that increase vulnerability to climate variability. Wilkinson has identified a series of "no regret" adaptation strategies: increasing water use efficiency; limiting the footprint of development on the landscape, particularly in vulnerable habitats such as wetlands and areas subject to fires, floods, and landslides; creating nature reserves designed to accommodate future climate changes and necessary range shifts and migrations of plants and animals; reducing urban heat island impacts; and using permeable pavements so that stormwater runoff can be used to recharge groundwater systems.

Agricultural, as well as forest and rangeland, soils have the potential to mitigate climate change by serving as sinks for GHGs due to their ability to store C in the form of organic matter. This section will now focus on agricultural management practices that can enhance the utilization of agroecosystems as GHG sinks. It will also consider how interactions among different GHGs must be considered in developing management strategies. At present there is limited quantification of the mitigating effect of various agricultural practices in California. This quantification is an essential step toward planning, policy making, and implementation of GHG mitigation practices. A major source of uncertainty regarding GHGs in California is associated with our incomplete understanding of the sources and mitigation potential of CH4 and N2O, both of which are GHGs with high global warming potential. Our estimates of N2O fluxes from California agricultural soils are particularly poor and difficult to model, because so little data have been collected in California and the environmental drivers of this process are not well understood. In addition, there is a need for specific research on GHG emissions from Histosols, not only on current emission rates, but also on the potential that wetland restoration could have in reversing these negative impacts. The consequences of changes in management practices to achieve mitigation of GHG need to be better characterized in California soils; Table 3.2 summarizes some potential issues. This information is essential for evaluating the trade-offs of different mitigation strategies both with respect to GHG and, beyond that, with the system as a whole.
This information is urgently needed to develop effective and efficacious management plans and policies. Research on some of these issues is ongoing as part of the research program of the Kearney Foundation of Soil Science.

Air quality refers to the clarity, smell, and even taste of the air surrounding us. Atmospheric trace gases are involved in many interactions that determine air quality, as they are heated by solar radiation, transported by wind, and scrubbed out of the atmosphere by rain. Air quality affects climate change in two main ways: by altering the atmospheric greenhouse effect and by reducing the amount of solar radiation reaching the earth's surface. In addition, aerosols can have local effects, and ozone production can directly reduce crop productivity.

Air quality in California is impacted by emissions and their regulation both within and outside the state. Such regulations vary between states, regions, and countries, both in their stringency and in the types of emissions covered. Agriculture is impacted by climate change, by air quality, and also by the interaction between these two phenomena. For instance, O3 formation is positively correlated with temperature, can directly interfere with plant metabolism, and can indirectly compromise symbiotic relationships within plant communities by reducing the diversity of mycorrhizal communities, and potentially their functionality. Agriculture also can be an important determinant of air quality. Recognizing agriculture's impact on air quality, California has placed stringent regulations on agriculture, relative to the rest of the United States, to comply with the Clean Air Act of 1990. For instance, the California Air Resources Board recently tightened rules governing emissions from engine-powered irrigation pumps to curb smog-forming constituents and GHG emissions. The direct effects of the two most prominent greenhouse gases on GWP and radiative forcing have been addressed above in Section 3. This section reviews the effects of O3, aerosols, and oxides of nitrogen on climate and agriculture in the California context.

Ozone has potentially wide-ranging but only moderately well-understood effects on climate change due to the non-linear and multiplicative interactions involved in its formation. In the upper troposphere, O3 absorbs UV radiation, and in the lower atmosphere, in addition to being harmful to human health, O3 is a well-documented phytotoxin. Ozone enters plants through stomata and disrupts biochemical functioning, leading to decreased productivity, lowered fertility, and accelerated senescence, all of which can cause significant economic losses within California croplands. Integration of agricultural crop production models and a motor-vehicle emissions model revealed significant crop production losses in different commodities at regional and national levels. A complex series of reactions between VOCs and NOx gases, catalyzed by sunlight and heat, produces O3. At low concentrations, tropospheric NOx causes a net destruction of O3, while at higher concentrations it causes a net production. However, global O3 production efficiency is greater at lower NOx concentrations, and anthropogenic NOx is known to contribute less than natural sources to global O3 budgets. Furthermore, background GHG concentrations, such as CO2 and CH4, could accelerate O3 formation through radiative forcing, and the ratio of NOx to VOCs can be more important in determining O3 production than absolute concentrations. This myriad of reactions can confound local air quality initiatives that do not account simultaneously for NOx and VOC concentrations as O3 precursors. Such local effects may be of particular significance to agriculture, given the phytotoxicity of O3. The above interactions and their impacts upon agriculture are as yet poorly understood; however, it is apparent that an integrated approach to the regulation of these various atmospheric constituents affecting air quality is necessary.

The preponderance of current research agrees that the overall impact of aerosols on climate change is a cooling effect, countering the warming effects of GHGs.
Aerosols directly influence the climate system by absorbing and scattering solar radiation and indirectly by providing cloud condensation nuclei (CCN). An increase in cloud droplet concentrations and a reduction in cloud droplet size is often observed when clouds are fed by air with enhanced levels of CCN, creating the so-called "first indirect effect" through which aerosols affect climate. As a result, the optical properties of clouds are rendered more reflective, and the solar insolation reaching the surface is reduced. However, the magnitude of aerosol cooling is poorly quantified; a recent study found increased solar insolation over the past 15 years and attributed this result to improvements in air quality. Given the projected increases in the population of California and consequential aerosol production coincident with urbanization, the effects of increased CCN may be of particular significance to climate and California agriculture, especially in the Central Valley, where much of the urbanization is projected to take place. In addition to increased cloud reflectivity, an increase in CCN has been found to decrease the average radius of atmospheric liquid-water droplets, which hinders cloud droplet coalescence and retards precipitation, leading to the "second indirect effect" through which aerosols affect climate. Thus, aerosols can decrease the precipitation efficiency of clouds, notably decreasing the incidence and amounts of precipitation. Increased aerosol concentrations downwind of major urban centers in both California and Israel are suggested to have reduced precipitation by 15%-25% during the last several decades, based upon calculations of the orographic enhancement factor using a time course of precipitation data downwind and sidewind of urbanized areas.

Sample individuals who had left the original study area were tracked throughout Kenya.

Note that it is also possible that one might observe positive migration flows into non-agricultural employment even in the case where the true average productivity gap was negative; in such a case, movers would consist of those with particularly large and positive individual returns to non-agricultural relative to agricultural employment in that time period, or perhaps those who face sufficiently large idiosyncratic preferences for the move. By this logic, fixed effects estimates will generally be larger than the average population treatment effect. This suggests that estimated gaps based on those who were initially in the agricultural sector are likely to be upper bounds on the magnitude of the true average productivity gap in the population as a whole. Hendricks and Schoellman make a closely related point, arguing that their estimates of the returns to international migration are likely to be upper bounds. This will likely be the case with the Kenya data in this study, where the entire sample lived in rural areas at baseline. In the Indonesia data, which feature sorting in both directions, it is in theory possible to observe a non-agricultural premium every time an individual selects into non-agriculture and an agricultural premium every time an individual selects into agriculture. By a parallel logic, the selection equation in equation 6 suggests that among those initially working in the non-agricultural sector, we would only observe moves among those who benefit from working in agriculture. The resulting estimates would then serve as lower bounds on the magnitude of the true average productivity gain to non-agricultural employment. The IFLS provides an ideal test bed to understand the role of these biases in estimating the related urban-rural gap.

In the spirit of Young's observation that migration flows in both directions, the data allow us to condition on individual birth location and measure the dynamic impacts on wages after migration. The bounding argument above predicts that the estimated urban-rural productivity gap will be larger when estimated for movers from rural to urban areas than when estimated for movers from urban to rural areas. We take this prediction to the data and find suggestive evidence for it. This model of selection implies that the true sectoral productivity gap in Indonesia is bounded by these two estimates, generated by movers in each direction. This paper uses detailed panel data from Indonesia and Kenya to estimate worker productivity gaps between the non-agricultural and agricultural sectors, as well as the closely related gaps between workers in urban and rural areas. The data we use from both countries are unusually rich, and the long-term panel structure features high rates of respondent tracking over time. At 250 million, the Southeast Asian country of Indonesia is the fourth most populous in the world, and Kenya is among the most populous Sub-Saharan African countries with approximately 45 million inhabitants. These countries are fairly typical of other low-income countries with respect to their labor shares in agriculture, estimated agricultural productivity gaps using national accounts data, and the relationships between these variables and national income levels. The high tracking rates of the datasets we employ allow us to construct multiyear panels of individuals' location decisions. Moreover, both datasets include information on both formal and informal sector employment. The latter is difficult to capture in standard administrative data sources yet often employs a large share of the labor force in low-income countries. If informal employment is more common in rural areas and in agriculture, and is partially missed in national accounts data, this may generate an upward bias in measured sectoral productivity gaps.
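The direction of the selection bias behind this bounding argument can be illustrated with a small Roy-style simulation: when only workers with above-average idiosyncratic gains move out of agriculture, a gap estimated from movers exceeds the true average gap. The sketch below is illustrative only; the true gap, the dispersion of idiosyncratic returns, and the moving cost are arbitrary assumptions, not values from either dataset.

```python
import numpy as np

# Illustrative parameters (assumptions, not estimates from the IFLS or KLPS data)
rng = np.random.default_rng(0)
n = 100_000
true_gap = 0.20                        # true average log-wage gain from non-agricultural work
idiosyncratic = rng.normal(0, 0.5, n)  # worker-specific component of the gain
moving_cost = 0.40                     # utility cost of switching sectors

# Workers who start in agriculture move only if their individual gain exceeds the cost
moves = (true_gap + idiosyncratic) > moving_cost

# A gap estimated from movers (as in a fixed-effects design) conditions on this selection
gap_from_movers = (true_gap + idiosyncratic)[moves].mean()

print(f"true average gap:          {true_gap:.2f}")
print(f"gap estimated from movers: {gap_from_movers:.2f}")  # larger, i.e., an upper bound
```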

Detailed employment data were collected during each survey round. In addition to current employment, the survey included questions on previous employment, allowing us to create up to a 21-year annual employment panel at the individual level from 1988 to 2008. Employment status and sector of employment are available for each year, but in the fourth IFLS round, earnings were collected only for the current job. Therefore, the panel has annual data on employment status and sector of employment from 1988 to 2008, and earnings data annually from 1988 to 2000 and in 2007-08. The IFLS includes information on the respondent's principal as well as secondary employment. Respondents are asked to include any type of employment, including wage employment, self-employment, temporary work, and unpaid family work. In addition to wages and profits, individuals are asked to estimate the value of their compensation in terms of share of harvest, meals provided, transportation allowance, housing and medical benefits, and credit; the main earnings measure we use is thus the comprehensive sum of all wages, profits, and benefits. Individuals are asked to describe the sector of employment for each job. The single largest sector is "agriculture, forestry, fishing, and hunting": 34% of individuals report it as their primary employment sector, and 47% have secondary jobs in this sector. Agricultural employment is primarily rural: 42% versus 3% of rural and urban individuals, respectively, report working primarily in agriculture. Other common sectors are wholesale, retail, restaurants, and hotels; social services; manufacturing; and construction. These non-agricultural sectors are all more common in urban than rural areas. Men are more likely than women to work in agriculture and less likely to work in wholesale, retail, restaurants, and hotels, and in social services. Smaller male-dominated sectors include construction and transportation, storage, and communications. In the analysis that follows, we employ an indicator variable for non-agricultural employment, which equals 1 if a respondent's main employment is not in agriculture and 0 if main employment is in agriculture. The main analysis sample includes all individuals who are employed and have positive earnings and positive hours worked, to ensure that the main variable of interest, the log wage, is defined.
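A minimal sketch of this sample construction in pandas, using hypothetical file and column names (the actual IFLS variable names differ):

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per person-year with sector, earnings, and hours worked
panel = pd.read_csv("ifls_employment_panel.csv")  # columns: person_id, year, sector, earnings, hours

# Indicator for non-agricultural employment (1 = main job outside agriculture, 0 = agriculture)
panel["non_ag"] = (panel["sector"] != "agriculture").astype(int)

# Keep person-years with positive earnings and hours so the log wage is defined
sample = panel[(panel["earnings"] > 0) & (panel["hours"] > 0)].copy()
sample["log_wage"] = np.log(sample["earnings"] / sample["hours"])
```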

The sample includes 18,211 individuals and 115,897 individual-year observations. In addition to studying wage gaps, we explore consumption gaps to get a broader sense of welfare differences. IFLS consumption data were collected by directly asking households the value in Indonesian Rupiah of all food and non-food purchases and consumption in the last month, similar to consumption data collection in the World Bank's Living Standards Measurement Surveys. In contrast to the retrospective earnings data in the IFLS, the consumption data are all contemporaneous to the survey. Consumption data were collected at the household level and divided by the number of household members to obtain a per capita measure. The consumption sample includes 38,280 individual-year observations from 19,695 individuals in IFLS rounds 1–4. In the consumption analysis, we expand the sample to also include individuals without current earnings data; we also perform a robustness check on the consumption analysis using the main productivity sample. Data were collected on the respondent's location at the time of the survey, and all rounds of the IFLS also collected a full history of migration within Indonesia. All residential moves across sub-districts that lasted at least six months are included. Figure 2, Panel A presents a map of Indonesia with each dot representing an IFLS respondent's residential location. While many respondents live on Java, we observe considerable geographic coverage throughout the country. The IFLS also asked respondents for the main motivation of each move. Family-related reasons are most common at 46%, especially for women, who are more likely than men to state they migrated for marriage. The second most common reason to migrate is for work, with little difference by gender, while migrating for education is less common. We combine data across IFLS rounds to construct a 21-year panel, from 1988 to 2008, with annual information on the person's location, in line with the employment panel; refer to Kleemans and Kleemans and Magruder for more information on the construction of the IFLS employment and migration panel. We utilize a survey-based measure of urban residence: if the respondent reports living in a "village," we define the area to be rural, while respondents are considered urban if they answer "town" or "city." We present the correspondence between urban residence and employment in the non-agricultural sector in Table 1, Panel A.

In 66 percent of individual-year observations, people are employed in the non-agricultural sector, and in 21 percent of the observations, they live in urban areas. One can see that a substantial portion of rural employment is in both agriculture and non-agricultural work, while urban employment is almost exclusively non-agricultural, as expected. Given the migration focus of the analysis, it is useful to report descriptive statistics both for the main analysis sample and separately for individuals in four mutually exclusive categories: those who always reside in rural areas throughout the IFLS sample period, those who were born in a rural area but move to an urban area at some point, those who are "Always Urban," and finally, the "Urban-to-Rural Migrants." As discussed above, the fixed effects analysis is driven by individuals who move between sectors during the sample period. In the main IFLS analysis sample, 80 percent of adults had completed at least primary education, and a quarter had completed secondary education, while tertiary education remains quite limited, at less than 10 percent. Among those who were born in rural areas in columns 2 and 3, we see that migrants to urban areas are highly positively selected in terms of both educational attainment and cognitive ability, with Raven's Progressive Matrices exam scores roughly 0.2 standard deviation units higher among those who migrate to urban areas, a substantial effect. Migration rates do not differ substantially by gender. These relationships are presented in a regression framework in Table 3, Panel A, and the analogous relationships for moves into non-agricultural employment are also evident. Importantly, the relationship between higher cognitive ability and the likelihood of migrating to urban areas holds even conditional on schooling attainment and demographic characteristics, at 99% confidence. This indicates that sorting on difficult-to-observe characteristics is relevant in understanding sectoral productivity differences in this context. It is worth noting that if we ignore migrants, individuals who are born and remain in urban areas are far more skilled than those who stay in rural areas. "Always Urban" individuals score over 0.4 standard deviation units higher on Raven's matrices and have triple the rate of secondary schooling and six times the rate of tertiary education relative to "Always Rural" individuals. The urban-to-rural migrants in Indonesia are also negatively selected relative to those who remain urban residents, which corroborates Young's claim. These patterns emerge in Table 2, Panel A, where the urban-to-rural migrants score lower on all skill dimensions relative to those who remain urban; appendix Tables A1 and A2 report results analogous to Tables 3 and 4 among those individuals born in urban areas.

The Kenya Life Panel Survey (KLPS) includes information on 8,999 individuals who attended primary school in western Kenya in the late 1990s and early 2000s, following them through adolescence and into adulthood. These individuals are a representative subset of participants in two primary school-based randomized interventions: a scholarship program for upper primary school girls that took place in 2001 and 2002 and a deworming treatment program for primary school students during 1998-2002. In particular, the KLPS sample contains information on individuals enrolled in over 200 rural primary schools in Busia district at the time of these programs' launch.
According to the 1998 Kenya Demographic and Health Survey, 85% of children in Western Province aged 6–15 were enrolled in school at that time, and Lee et al. show that this area is quite representative of rural Kenya as a whole in terms of socioeconomic characteristics. To date, three rounds of the KLPS have been collected. KLPS data collection was designed with attention to minimizing bias related to survey attrition. Respondents were sought in two separate "phases" of data collection: the "regular tracking phase" proceeded until over 60 percent of respondents had been surveyed, at which point a representative subset of approximately 25 percent of the remaining sample was chosen for the "intensive tracking phase." These "intensive" individuals receive roughly four times as much weight in the analysis (approximately the inverse of the 25 percent subsampling rate), to maintain representativeness with the original sample.

Young's interpretation differs from GLW in emphasizing the role of selective migration across sectors.

However, the mechanisms underlying environmental impacts on preterm delivery are still insufficiently understood, and further experimental research is warranted. Pesticide exposures affected preterm birth in our study mostly in female children and to a lesser extent, if at all, in males, similar to a Chinese study that found high levels of non-specific metabolites of organophosphate pesticides in maternal urine adversely affected duration of gestation only in girls. It has been suggested that exposures to pesticides in early pregnancy trigger more spontaneous abortions of male fetuses, or stillbirth in late pregnancy, outcomes not captured in our study. It is well known that the male fetus is more vulnerable in utero and is at greater risk of fetal death, with the male-to-female ratio falling from around 120 males per 100 females at conception to 105 boys per 100 girls at birth. Some pesticides are endocrine disruptors, such as those in the organophosphate family that mimic sex steroidal action and resemble estrogenic more than androgenic action in fish models. In general, we observed stronger ORs among infants born to Hispanic mothers, partly because Hispanic mothers had higher exposure prevalence during pregnancy. According to a recent agricultural survey, about 90% of female farm workers in California were Mexico-born Hispanics; thus, foreign-born Hispanic mothers may live near the fields where they work, making them more likely to be exposed to ambient pesticides when at home. Unfortunately, information on specific occupations and occupational addresses of the mothers was not collected on birth certificates, and therefore we could not determine exposures at workplaces.

Fetal growth restriction, the main reason for low birthweight other than preterm birth, has been associated with transplacental oxygen and nutrient transport, hypoxia, oxidative stress, placental inflammation, and inhibition of placental growth hormone; these possible mechanisms may be influenced by toxic exposure to organophosphate and carbamate pesticides. We did not find much evidence for associations between term low birthweight and many specific pesticide exposures, in line with some previous studies. Others, however, reported associations for low birthweight or a decrease in birthweight for some pesticides, including chlorpyrifos and/or diazinon, carbaryl, and methyl bromide, as well as with organophosphate and pyrethroid metabolites measured in maternal urine. However, it also has been reported that associations with low birthweight were attenuated when adjusting for gestational age. Our results for pyrethroids are consistent with these previous observations for both preterm birth and low birthweight. Our term low birthweight results may have been underpowered but still seem to corroborate a previous report that found residential proximity to methyl bromide use reduced birthweight overall. Most previous pesticide and birth outcome studies examined exposures from home/garden or professional use of pesticides and relied on parental interviews after birth; these studies have been criticized for their potential selection or recall bias. Other studies using job exposure matrices may have been prone to non-differential exposure measurement errors and often could not distinguish between types of chemicals. Smaller studies were able to employ biomarkers such as maternal blood or urine collected in pregnancy, or umbilical cord blood samples, to measure prenatal chemical concentrations. The necessarily small size of such pregnancy cohorts limits the number of outcomes and hence study power considerably, and these studies also have to assume that chemical concentrations measured in bio-samples accurately reflect exposures during multiple gestational windows, even though many pesticides have relatively short half-lives, e.g., hours to a few days for organophosphates, and few studies have multiple bio-samples available throughout pregnancy.

Recently, several studies have examined the associations between ambient pesticide exposures and adverse birth outcomes in large populations based on proximity-to-application modeling. These GIS-PUR based approaches applied to birth records avoid the selection bias due to non-response and the recall bias that threaten studies relying on interviews after birth, in which mothers who had babies with adverse outcomes may be more likely to participate or to recall their pesticide exposures. Like ours, these Californian studies were based exclusively on California's PUR records and land use surveys. In particular, two studies focused on the agriculturally dominated San Joaquin Valley; one assessed exposures by comparing high exposure to low exposure to pesticides with acute toxicity based on EPA signal word, while the other, which reported negative associations with spontaneous preterm birth, focused on exposures to frequently used chemicals and physicochemical groupings yet mainly reported results by month in late pregnancy. Our sensitivity analysis stratified by season of conception supported our hypothesis that early but not late pregnancy is the critical period. Moreover, unlike previous studies, our method employed a four-tier mechanism to improve the match rate of PUR records and land use maps, successfully reducing potential non-differential misclassification of exposure. We also expanded the study area statewide to include agricultural regions outside the Central Valley that might also be populous, providing us with a large sample size and thus high statistical power in this record linkage-based design. Our study has some limitations. The ambiguous location, reported only at the county level, of non-agricultural pesticide applications in pesticide reporting made it difficult to properly assess exposures for mothers living in urban areas at birth, who were excluded to avoid substantial underestimation of exposure. However, our restriction to women living within 2 km of fields might have partially and indirectly 'matched on' location and generated a more homogeneous population in terms of potential geographically-specific confounders such as air pollution.
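A simplified sketch of the proximity-based exposure assignment described above, using geopandas with hypothetical file and column names; the four-tier PUR/land-use matching itself is not reproduced here.

```python
import geopandas as gpd

# Hypothetical inputs: geocoded maternal addresses at birth and PUR-matched field polygons
births = gpd.read_file("birth_addresses.shp").to_crs(epsg=3310)     # California Albers (metres)
fields = gpd.read_file("pur_matched_fields.shp").to_crs(epsg=3310)  # includes 'lbs_applied'

# Draw a 2 km buffer around each residence at birth
births["geometry"] = births.geometry.buffer(2000)

# Sum pesticide pounds applied on fields intersecting each buffer (0 if no nearby fields)
joined = gpd.sjoin(births, fields, how="left", predicate="intersects")
births["pesticide_lbs_2km"] = joined.groupby(joined.index)["lbs_applied"].sum().fillna(0)
```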

Similarly, since our study question is whether proximity to fields with agricultural pesticide applications increases risks of adverse birth outcomes, and even though other unassessed sources of pesticide exposure, including occupational, home and garden use, or dietary exposures, could potentially confound our results, our 'matching' through restriction to women living within 2 km of fields might have accounted for such factors. For example, the SUPERB study in northern California suggested that those who live near fields are more similar to one another in their use of pesticides for other purposes than residents of urban areas. Residents of the San Francisco metropolitan area had a lower rate of outdoor pesticide use than the two inland areas of northern California studied. Another assumption was that addresses at birth reflected the location of mothers over the entirety of pregnancy. A review of research on residential mobility during pregnancy showed that, on average, 24% of mothers in the US move during pregnancy; although most moving distances were short, such moves may result in exposure misclassification in our GIS-based estimates based on a 2 km buffer. In particular, Hispanic mothers are more mobile than White mothers, increasing their chances of living close to fields and receiving pesticide exposures during pregnancy. Like all other studies of live birth outcomes, ours may also be subject to live birth bias, i.e., the fact that early exposures could lead to fetal loss. While data on the potential confounders maternal smoking and pre-pregnancy BMI were only available for 4 of the 13 years of our study period, additional adjustment for these variables changed our results only minimally, suggesting that they may not be important confounders. In summary, this study found that first and second trimester exposures to most of the selected pesticides known or suspected to be reproductive toxicants were associated with preterm delivery, whereas only one pesticide, and perhaps pyrethroids as a class, were related to term low birthweight among women living near agricultural fields in California. These associations seemed stronger for female infants, suggesting possible sex specificity for some of these agents.

Epidemiologic studies of environmental exposures in early childhood often assign exposures based upon the child's or mother's residence. While large-scale record-linkage based studies can avoid the selection and recall bias that often affect smaller studies with active subject recruitment, previous record-based studies often relied solely on maternal residential address at birth, which is readily available on many birth certificates, and/or residential address at diagnosis, as done in some childhood cancer studies. The reliance on one address implicitly assumes that a child's residence remains the same throughout early childhood or, if the child moved, that exposure levels remained the same. Consequently, this may lead to exposure misclassification for those who move in early childhood, especially for exposures with high spatial heterogeneity. In a 2003-2007 California statewide representative survey, only 14% of all women moved in the 2-7 months post-partum, but with increasing child age the frequency of residential moves also increased.
For more than 50% of childhood cancer cases under age 5 diagnosed in California between 1988 and 2005, the address at birth differed from the address at cancer diagnosis, which raises concerns about using residence at birth to assess exposures in early childhood. Exposure misclassification due to moving is a ubiquitous problem encountered by nearly all record-based studies that lack a complete residential history for each child. Previous studies suggested that residential mobility may be associated with certain risk factors for childhood cancers, such as maternal age, marital status, parity, family income, and other socioeconomic status metrics, resulting in differential misclassification of exposures. While previous studies that examined residential proximity to exposure have mentioned the potential bias resulting from residential mobility during pregnancy, they rarely investigated the impact of residential mobility in early childhood on exposure measures or effect estimates.

While it is not feasible to acquire complete residential histories from interviews for subjects in large record-based studies as a gold standard to compare against the recorded birth or diagnosis address, databases containing public records of individuals collected by commercial companies have become available in recent years, allowing us to trace individuals without a self-reported residential history. For example, LexisNexis® Public Records, a commercial credit reporting company, provides all known addresses for a set of individuals upon request. Earlier validation studies have shown addresses acquired from LexisNexis to be useful for reconstructing residential histories for subjects in epidemiological studies, with an overall match rate of ~70-85% with detailed address histories obtained from interviews; however, these subjects mostly consisted of middle-aged or older individuals, whose residential mobility may differ from that of women of child-bearing age. Such information, if of high quality, could augment existing address information, help reconstruct residential histories for subjects in large record-based studies, and provide more accurate exposure estimates. The degree of exposure misclassification due to mobility depends on the distance moved, the spatial heterogeneity of the exposure, and the method of exposure assessment each study employed. For example, one study used ecological measures of agricultural activity at the county level, so moving within a county would not alter exposure estimates. Other studies have assessed agricultural land use and crop coverage within a 1-km buffer of a child's residence as proxies of pesticide applications, or exposures to pesticides within a ½-mile buffer of the child's residence, in relation to childhood cancers; such individual-level measures might be more sensitive to changes in location. Compared with these methods of estimating agricultural pesticide exposures near residences, our GIS-based system that integrates California's unique Pesticide Use Reporting database and land use maps estimates children's early-life exposures at a finer resolution, but may be subject to more misclassification due to residential mobility. For the purpose of this study, we identify individuals' exposures in early childhood using a 2-km buffer. The objectives of the present study are to assess patterns of mobility, identify maternal and child characteristics that may predict residential mobility in early childhood, and examine the impact of mobility on early childhood exposure measures for agriculturally applied pesticides and childhood cancers in California.

To examine the associations between maternal and child characteristics and the cases' likelihood of moving between birth and diagnosis, we conducted univariate and multivariate logistic regression analyses and estimated odds ratios (ORs) and 95% confidence intervals (CIs). Based on previous literature, we considered factors that potentially influence mobility in pregnancy or early childhood, including age at diagnosis, year of birth, maternal age at delivery, maternal race/ethnicity, maternal birthplace, maternal education, parity, rural/urban classification of residence at birth, and several socioeconomic variables including payment source for prenatal care (as a proxy for family income) and neighborhood-level SES.
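A minimal sketch of the logistic regression described above, using statsmodels with hypothetical variable names; the actual covariate coding follows the categories listed in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per case, with a 'moved' indicator and maternal/child covariates
cases = pd.read_csv("cancer_cases.csv")

model = smf.logit(
    "moved ~ C(age_at_diagnosis) + C(birth_year) + C(maternal_age) + C(maternal_race)"
    " + C(maternal_birthplace) + C(maternal_education) + C(parity) + C(urban_birth)"
    " + C(prenatal_payment) + C(neighborhood_ses)",
    data=cases,
).fit()

# Odds ratios with 95% confidence intervals, obtained by exponentiating the coefficients
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```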

Our setting offers us an advantage because voters are profit-maximizing agricultural producers.

As climate models become a primary tool for studying the atmospheric role of land surface processes, a question for current climate models is whether they can adequately distinguish and accurately simulate surface energy partitioning over different vegetation types. Plants contribute a large fraction of latent heat flux through evaporation of water from leaf surfaces and transpiration from deeper soil layers when stomata open during photosynthesis. Plants also affect net radiation by altering the surface albedo. A change in plant height can change the boundary layer turbulence by influencing surface roughness, and therefore the total energy exchange via latent and sensible heat fluxes [Davin and de Noblet-Ducoudre, 2010]. In most climate models, several important vegetation parameters are prescribed according to satellite observations and ground measurements. These parameters are not necessarily accurate at the site scale due to the algorithms and validation methods used in retrieving satellite data or aggregating ground data [Yang et al., 2006]. Validation of surface fluxes over different vegetation types can help identify deficiencies in key parameters and model formulations to target for improving model performance. The aim of this work is to examine energy partitioning and surface climate simulated by a recently coupled regional climate model, WRF3-CLM3.5, for four major vegetation types across the United States, and to identify the model's strengths and deficiencies to help prioritize model improvements. As the next-generation mesoscale numerical model, the standard version of WRF includes relatively simple land surface schemes, which potentially constrain model applications for studying land surface and ecosystem-atmosphere feedbacks.

The newly coupled model improved the surface process simulation in California [Subin et al., 2011], but has not been validated at the continental scale. We used the standard version of the Weather Research and Forecasting model version 3.0 [Skamarock et al., 2008], AmeriFlux site observations [Wofsy and Hollinger, 1998], and CERES data [Wielicki et al., 1996; Young et al., 1998] to evaluate energy flux partitioning. We analyzed the bias in surface climate variables by comparing to PRISM datasets [Di Luzio et al., 2008]. We focused on four dominant vegetation types with adequate representation in the AmeriFlux network. Both WRF and CLM have deficiencies that should be resolved in future versions to reduce the warm bias. The large warm bias in the standard version of WRF suggests there are problems in WRF itself. For example, the downward solar radiation bias contributes substantially to the warm bias, based on a one-year sensitivity test that artificially reduced downward solar radiation by 30%. Reducing downward solar radiation is not simple because it is associated with many factors. Previous work [Markovic et al., 2008; Wild et al., 2001] suggests the overestimate of downward solar radiation at the surface could be due either to too little cloud cover on cloudy days or to too little atmospheric absorption of downward solar radiation on clear days. Ignoring aerosols in the model may also contribute to excess downward solar radiation [Wild, 2008]. The negative precipitation bias in the Midwest suggests that an underestimate of cloud cover may contribute to excess downward solar radiation in the Midwest. Further validation with WRF3-CLM3.5 focusing on cloud cover and clear-sky absorption is strongly encouraged but is beyond the scope of this paper. Fortunately, the newer WRF3.2 includes boundary layer physics and microphysics options that could improve the overall simulation. A one-year sensitivity test using standard WRF3.2 with the MYNN boundary layer scheme [Nakanishi and Niino, 2009] and Thompson microphysics scheme [Thompson et al., 2008] showed a reduction in downward solar radiation of 30 W m-2, in T_max of 3 K, and in T_min of 2 K. With respect to CLM, the large warm bias in the Midwest could be due in part to 1) the missing irrigation scheme, and 2) the low crop leaf area index used in the model.
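A minimal sketch of the kind of bias calculation used in this evaluation, assuming model output and PRISM observations have already been regridded to a common grid (file and variable names are hypothetical):

```python
import xarray as xr

# Hypothetical inputs: daily maximum temperature from WRF3-CLM3.5 and from PRISM on the same grid
model = xr.open_dataset("wrf3_clm35_tmax.nc")["TMAX"]
prism = xr.open_dataset("prism_tmax_regridded.nc")["TMAX"]

# Mean bias (model minus observation) by season, per grid cell
bias = (model - prism).groupby("time.season").mean("time")

# Domain-average summer (JJA) warm bias
print(f"domain-average JJA T_max bias: {float(bias.sel(season='JJA').mean()):.2f} K")
```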

A large area in the Midwest is covered by irrigated agriculture according to global irrigation maps [Siebert et al., 2005]. Without an irrigation scheme, WRF3-CLM3.5 may overestimate temperature by 3-5 K in summer in the Midwest [Lobell et al., 2009; Sacks et al., 2009]. Considering the strong coupling between soil moisture and precipitation in the Midwest [Koster et al., 2004], low soil moisture could reduce cloud cover and enhance downward solar radiation, further heating the land surface, contributing to a positive feedback in this region, and producing a large warm bias. Also, the much lower maximum leaf area index used in the model could reduce LE and therefore increase H and near-surface temperature. The simulated seasonal variation in LAI is much lower than the direct measurements at the Bo1 site [Wilson and Meyers, 2007].

Mandatory marketing organizations (MMOs) have been an important agricultural policy tool in the United States for 80 years. These organizations include agricultural marketing orders, commissions, councils, and check-off programs. They can serve many purposes, including supply control, setting of quality and grading standards, market or production research, limiting of unfair trade practices, and generic commodity promotion. The intended purposes differ across MMOs and are outlined in each organization's governing documents. The creation of an MMO is a political process, and considerable discretion is given to the Secretary of Agriculture in determining the value of these MMOs. In addition, many MMOs require approval through a vote of eligible producers for creation, continuation, and termination. The outcome of this referendum then informs the Secretary's decision about the future of the organization and, in some cases, may dictate it. Producer referenda are held at regular intervals for most MMOs to ensure they continue to provide positive net benefits to producers. These referenda are the focus of our research. Specifically, we examine how market power among agricultural producers relates to voting power in a referendum to form, terminate, or continue an MMO with a generic promotion provision. These referenda are interesting for several reasons. First, the voting rules used in determining the outcome of these referenda often depend on both the number of producers and the quantity of output they produce, suggesting that market structure matters in determining the outcome. Second, they provide an opportunity for us to study grower behavior regarding mandatory collective action organizations, which can shed light on the costs and benefits to growers of these organizations as well as on attitudes to collective action more broadly.

One of the most common types of MMOs is a marketing order. Although voting rules differ somewhat across MMOs, for the purposes of this paper we consider voting rules for Federal marketing orders, as they are typical of the type of voting rule used by many MMOs. We model the supply side of the market for a homogeneous agricultural commodity as consisting of a single firm with a cost advantage and multiple firms with heterogeneous higher costs. This cost structure is intended to represent the supply environment in many industries, where there is an increasing gap between a few dominant producers and many smaller ones. We assume buyers of the commodity do not exercise market power. Finally, we focus on demand-increasing generic commodity promotion as the means by which an MMO benefits producers. Generic promotion is increasingly one of the primary roles of MMOs in the United States, in part due to the passage of the Research and Commodity Promotion Act of 1996, which created a new category of federal check-off program with a major emphasis on generic promotion. We focus on a pair of voting rules commonly used together for Federal marketing orders and examine the voting power of the dominant and fringe firms. For this analysis, we consider what Felsenthal and Machover call "I-power." This class of power measure addresses a voter's influence over the decision to be made. The power measure we use is Banzhaf Power. The Banzhaf Power Index is calculated by considering all possible "winning" coalitions of voters, that is, those coalitions that could pass a proposed action if all members of the coalition favored it, given the voting rule. A voter is "critical" if the coalition would no longer pass the proposed action if the voter left that coalition. The Banzhaf Power Index is defined as the number of times a given voter is critical out of the total number of possible vote combinations in the industry. Running simulations of these markets under various assumptions about costs and market structure and industry-calibrated market parameters, we calculate the Banzhaf Power Index value and market share of each firm. The Banzhaf Power Index implicitly assumes that voters vote for the action with a probability of 0.5. Some have challenged the usefulness of this type of measure given this naive assumption about voter behavior. However, developing a better model requires more information about voters, which can be hard to obtain. In most situations, voter preferences or correlations between preferences are difficult to measure, and the factors that underlie them may be challenging to identify. In our setting, however, voters are profit-maximizing agricultural producers; this voter characteristic allows us to incorporate the theory of the firm to better predict voter behavior given cost and market parameters.
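A minimal sketch of the Banzhaf calculation by exhaustive enumeration. The pass rule used here, approval by two-thirds of producers by number or by producers accounting for two-thirds of output, is illustrative; the actual thresholds in Federal marketing order referenda differ and vary by program.

```python
from itertools import product

def banzhaf(output_shares, number_threshold=2/3, volume_threshold=2/3):
    """Share of all vote profiles in which each producer is a swing (critical) voter.

    A proposal passes if the fraction of producers voting yes meets `number_threshold`
    OR the combined output share of yes-voters meets `volume_threshold`
    (illustrative thresholds, not the statutory ones).
    """
    n = len(output_shares)

    def passes(votes):  # votes is a sequence of 0/1 choices
        if sum(votes) / n >= number_threshold:
            return True
        return sum(s for v, s in zip(votes, output_shares) if v) >= volume_threshold

    swings = [0] * n
    for profile in product([0, 1], repeat=n):
        outcome = passes(profile)
        for i in range(n):
            flipped = list(profile)
            flipped[i] = 1 - flipped[i]
            if passes(flipped) != outcome:
                swings[i] += 1
    return [s / 2 ** n for s in swings]

# Example: one dominant producer with half of output and four identical fringe producers
print(banzhaf([0.50, 0.125, 0.125, 0.125, 0.125]))
```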

Building on the probability-theory approach to Banzhaf's index identified by Straffin, and on the behavior of profit-maximizing producers in the neoclassical theory of the firm, we develop a second measure we call "Feasible" Banzhaf Power. To calculate this measure, we incorporate information about each producer's profit-maximizing voting choice and then assume that a producer votes in accordance with his profit-maximizing choice with a randomly drawn idiosyncratic probability (a simulation sketch of this measure appears below). This probability represents the probability that a producer is optimizing some objective function other than profit maximization that leads him or her to a different voting choice. For example, the producer could have the objective of minimizing government intervention, regardless of its effect on his profits. We find that market power and cost heterogeneity do indeed matter in determining the voting power of producers in MMO referenda, whether or not producer behavior is incorporated. Furthermore, incorporating information on producer behavior substantially affects our estimate of voting power. We also find that disparate preferences of firms with heterogeneous costs, in situations where some producers wield market power, can reduce the Feasible Banzhaf Power Index value for the low-cost firm, even when the low-cost firm produces a substantial share of industry output. Finally, we find that the different voting rules faced by producers in MMO referenda yield distinct differences in voting power in markets with heterogeneous producers. Our contributions to the literature are threefold. First, we contribute to both the voting power and agricultural economics literatures, as to the best of our knowledge we are the first authors to examine marketing order referenda through the lens of voting power. Second, we contribute to the voting power literature by connecting the work on voting preferences and empirical voting power measures to the neoclassical theory of the firm in the form of our Feasible Banzhaf Power Index. This index is useful in that it incorporates information about behavior to provide a more realistic measure of voting power in settings with firms in the role of voter. And finally, through the analysis of voting power measures, we provide new insights about the potential challenges agricultural producers face in adapting their MMOs in a rapidly changing economic landscape. As agricultural market structures have changed over time, agricultural economists have moved away from the long-held assumption of perfect competition in some agricultural settings. Our work shows how the marriage of voting power methodology and agricultural economics can shed new light on how market power and cost heterogeneity interact with agricultural institutions in changing markets.

The remainder of the paper proceeds as follows. In Section II, we give a brief history of MMOs and discuss the relevance of our work to agricultural policy. In Section III, we relate our work to the relevant literature in agricultural economics and political science. In Section IV, we present our theoretical model. In Section V, we discuss our simulations and calibration methodology. Section VI presents and discusses our results, and Section VII concludes.

MMOs were first authorized at the federal level by the Agricultural Adjustment Act of 1933 and the Agricultural Marketing Agreement Act of 1937. Initially, they were a policy response to ongoing low and volatile returns to agriculture in the 1920s and 1930s.
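As a complement to the classic index sketched above, the following Monte Carlo sketch illustrates the "Feasible" Banzhaf idea: each producer casts his or her profit-maximizing vote with probability 1 - epsilon and the opposite vote otherwise, and power is the probability of being pivotal under that behavior. The preferred votes, epsilon, and pass-rule thresholds are illustrative assumptions, not the paper's calibrated values.

```python
import random

def feasible_banzhaf(preferred_votes, output_shares, epsilon=0.1, draws=200_000, seed=0,
                     number_threshold=2/3, volume_threshold=2/3):
    """Monte Carlo estimate of pivotality when producers mostly vote their profit-maximizing choice."""
    def passes(votes):
        if sum(votes) / len(votes) >= number_threshold:
            return True
        return sum(s for v, s in zip(votes, output_shares) if v) >= volume_threshold

    rng = random.Random(seed)
    n = len(preferred_votes)
    swings = [0] * n
    for _ in range(draws):
        # Each producer follows the profit-maximizing choice with probability 1 - epsilon
        profile = [v if rng.random() > epsilon else 1 - v for v in preferred_votes]
        outcome = passes(profile)
        for i in range(n):
            flipped = list(profile)
            flipped[i] = 1 - flipped[i]
            if passes(flipped) != outcome:
                swings[i] += 1
    return [s / draws for s in swings]

# Hypothetical preferences: the dominant firm opposes the promotion program, the fringe favors it
print(feasible_banzhaf([0, 1, 1, 1, 1], [0.50, 0.125, 0.125, 0.125, 0.125]))
```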

Two key challenges facing future attempts to bypass sensitive ecosystems emerge.

A 65 km stretch of the San Joaquin River upstream of Mud Slough was delisted in 2010. On the other hand, selenium loads in the 10 km stretch of Mud Slough through which selenium-rich drainage is being delivered from the San Luis Drain to the San Joaquin River have increased since the start of the GBP and usually exceed 5 µg/L. This rise in concentrations is likely to endanger local sensitive species, including juvenile Chinook salmon and Steelhead trout. For the Chinook salmon in particular, seasonally elevated selenium concentrations in the stretch of the San Joaquin River between the confluence with Mud Slough and the confluence with the Merced River may prove problematic. Selenium concentrations have exceeded 10 µg/L in 6 of 24 months during the most recently published monitoring period, typically during the rainy season, i.e., between September and February. The seasonality of these peaks coincides with the emergence of the Chinook salmon's sensitive juvenile life stage, and the concentrations are in a range where increased mortality of up to 20% can be expected for juveniles. Thus selenium input through Mud Slough in this particular stretch of the San Joaquin River may represent an obstacle for ongoing efforts to restore salmon above the Merced, where they have been extirpated due to water diversions. Additionally, whereas selenium concentrations in most of the marshes have decreased, the 2 µg/L criterion was still exceeded in parts of the wetlands as recently as 2002, due to high-flow input originating from the Delta-Mendota Canal, which was not captured by the bypass. As a result, the Grassland marshes listed in 1988 remain on California's 303(d) list of impaired waters today.

First, caution needs to be exercised to prevent ecological damage in locations to which seleniferous runoff is being diverted. Second, circumvented locations may still receive selenium inputs due to other sources in the watershed. Consequently, thorough monitoring of both circumvented and receiving water bodies is essential, especially during periods of high flow.

In 2010, USGS scientists Theresa Presser and Samuel Luoma completed an ecosystem-scale selenium modeling effort in support of site-specific fish and wildlife criteria development for the San Francisco Bay and Delta. In brief, the model consists of three key components. First, the partitioning between dissolved selenium concentrations and the "particulate/planktonic" concentrations at the base of the food web is simulated using site-specific partitioning coefficients (Kd). Second, the local food web is resolved around target predator species of concern, including separate compartments for prey species, or groups of species these predators feed on, that differ significantly with respect to their selenium accumulation potential. Finally, the model comprises trophic transfer factors (TTFs) that relate the concentration in the tissue of each species or trophic compartment to that of its diet. The model thus allows the calculation of tissue concentration estimates for all species that are part of the food web by simply multiplying dissolved concentrations by the Kd and then by the TTF for each connecting link lower in the food web. The model can also be used to convert tissue concentration limits for a target species into limits in solution. This approach is unique in that it captures the critical trophic transfer steps relevant to the toxicological effects of selenium on individual species, while being generic enough to remain applicable to a wide array of aquatic ecosystems with diverse biogeochemical conditions. The main requirement is the availability of site-specific field data on the partitioning between dissolved concentrations and those at the local base of the food chain, as well as information on the local food web and on trophic transfer factors for key species present at a location.
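In code form, the core calculation reduces to a chain of multiplications. The sketch below uses illustrative, not site-specific, values for Kd and the TTFs, with particulate and tissue concentrations expressed in µg/kg and dissolved concentrations in µg/L.

```python
def tissue_concentration(dissolved_ug_per_l, kd, ttfs):
    """Predicted selenium concentration in the tissue of a predator at the top of a food chain.

    kd   : site-specific partitioning coefficient, (particulate µg/kg) / (dissolved µg/L)
    ttfs : trophic transfer factors for each successive link (e.g., particulates -> invertebrates -> fish)
    """
    concentration = dissolved_ug_per_l * kd  # particulate/planktonic base of the food web
    for ttf in ttfs:
        concentration *= ttf                 # one multiplication per trophic link
    return concentration

def dissolved_limit(tissue_criterion_ug_per_kg, kd, ttfs):
    """Invert the model: translate a tissue-based criterion into a dissolved-water limit (µg/L)."""
    factor = kd
    for ttf in ttfs:
        factor *= ttf
    return tissue_criterion_ug_per_kg / factor

# Illustrative values only: Kd = 1000, TTFs of 2.8 (invertebrate link) and 1.1 (fish link)
print(tissue_concentration(0.5, 1000, [2.8, 1.1]))   # predicted fish tissue Se, µg/kg
print(dissolved_limit(8000, 1000, [2.8, 1.1]))       # dissolved limit for an 8000 µg/kg tissue criterion
```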

The specific application of this approach to the San Francisco Bay and Delta led to the capability of realistically translating tissue-based criteria for the protection of desired fish and bird species into dissolved or particulate concentration limits . In addition, the model was used to predict the ecosystem impacts of selenium in the Bay and Delta under various management scenarios . The California office of the EPA is now in the process of developing site-specific selenium criteria for the Bay and Delta based on the model results. In the future, the same modeling approach is to be used to develop site-specific criteria for other California ecosystems in which selenium contamination problems occur . Such regulation would allow the protection of the most sensitive ecosystems and the targeted protection of critical species, without imposing an unnecessary regulatory burden in areas with less sensitive ecosystems. This approach would thus represent a landmark in the regulation of aquatic contaminants in the US and a significant improvement over regulation that traditionally has taken the form of state- or nationwide criteria based on dissolved or acid-soluble concentrations. Whereas the use of the model to determine site-specific dissolved selenium criteria or TMDLs is well warranted, its application to predict ecosystem impacts under changing environmental or management conditions may be limited to environments with relatively stable biogeochemical conditions. The reason is that the model does not explicitly account for selenium speciation, nor does it separate environmental compartments such as the sediments and the water column. Transformations between chemical species and transfers between environmental compartments are far more dynamic and less linear than transfer through food webs and thus necessitate a dynamic model for adequate representation.

Such transfer could be of great importance in predicting selenium exposure under changing environmental conditions, especially for shallow stagnant water bodies with a high sediment-water interface to volume ratio, such as the Salton Sea. Fortunately, a dynamic model extension for water-sediment interactions could be integrated with Presser and Luoma’s approach, since mathematically this approach represents a simple multiplication of concentrations by partitioning and trophic transfer factors. Without such extensions, the model’s predictive power is limited to the geochemical steady state, and site-specific regulatory limits derived from it need to be coupled to ongoing monitoring and periodic revisions.

The last three decades have seen significant progress with respect to the management and regulation of irrigation-induced selenium contamination in California. Among local remediation methods, sequential drainage reuse ending in well-designed evaporation facilities can be a viable option, limited primarily by operating costs at scale and by the disposal of the salts produced. Much can be learned from the integrated approach pursued as part of the Grassland Bypass Project . In particular, the project provides a blueprint of the framework necessary to establish and enforce load limits in an agricultural non-point context. As long as the means to track discharge quantities and concentrations are available, this approach can be translated to other agricultural sources of selenium or other pollutants . Given that a majority of selenium load reductions to date have been achieved by a reduction in drainage loads rather than selenium concentrations, there appear to be opportunities for additional reductions through management practices that enhance selenium retention in the source soils. Recent research suggests that unexplored options remain in this area, such as the management of soil structure to enhance microbial selenium reduction , or the addition of organic matter amendments to enhance reduction, retention, and volatilization . A better understanding of the factors controlling selenium speciation in soils would also help evaluate the long-term sustainability of drainage reuse schemes. The site-specific regulation currently under development for the San Francisco Bay and Delta represents an appropriate and timely update to federal selenium water quality criteria, which, as discussed in the background section, have proven inadequate in light of scientific findings over the last two decades. In this context, scientists and resource managers should think ahead about the needs that will arise as the approach is expanded to other sites of concern with respect to selenium contamination. California’s 303(d) list of impaired waters currently includes more than 60 water bodies polluted by selenium . By far the largest of these is the Salton Sea, with an estimated 944 km² affected . Like the Kesterson Reservoir in the 1980s, the Salton Sea has long been the receiving body of seleniferous irrigation drainage . It is also one of the most important bird habitats in the American Southwest, used by hundreds of thousands of waterfowl belonging to resident and migratory bird species, including endangered ones like the brown pelican .

Whereas the Salton Sea is an obvious target for the expansion of the site-specific regulatory approach, matters may be complicated by the Sea’s uncertain management future . For example, management choices that expose sediment to oxic conditions may lead to the local release of reduced selenium accumulated in sediments, creating an ecological hazard beyond that posed by ongoing seleniferous irrigation-drainage inputs. In this case, the development of site-specific selenium criteria would need to be coupled to a detailed understanding not only of the trophic transfer processes in the local food web, but also of the local biogeochemical transformations in the shallow basin. Thus, the promise of the site-specific approach to the regulation of selenium as a contaminant creates renewed urgency for the improvement of biogeochemical models of selenium cycling and the acquisition of field data at sites of concern.

Within the physically complex matrix of a soil, microbial reduction is dictated by the local chemical conditions and is thus subject to the soil’s physical, chemical, and biological heterogeneity. Aggregates, which are mm- to cm-sized structural units of clay, silt, and sand particles bound by roots, hyphae, and organic matter , represent the smallest systems in which the spatial coupling of transport with biogeochemical reactions can be studied on a well-defined scale . While advective solute transport is prevalent in the inter-aggregate macropores, transport in the intra-aggregate micropores is dominated by diffusion . In conjunction with local microbial metabolic activity, this often leads to the formation of strong chemical gradients within aggregates. The importance of aggregate-scale heterogeneity, in particular for local redox levels, has long been recognized . Full anoxic-to-oxic gradients have been observed within aggregates as small as 4 mm in diameter . Tokunaga et al. showed that anoxic microzones within flat synthetic soil aggregates are likely to support localized sites of Se reduction and documented transport-controlled reduction of soluble Cr(VI) to solid Cr(III) taking place exclusively within the surface layer of natural soil aggregates immersed in a Cr solution . Pallud et al. recently investigated ferrihydrite reduction in anoxic flow-through experiments utilizing novel artificial aggregate systems that closely mimic field transport conditions in structured soils and found strong radial gradients in secondary mineralization products as a result of mass-transfer limitations. Utilizing the same artificial aggregate systems, Masue-Slowey et al. investigated arsenic reduction and release in artificial aggregates surrounded by oxic solution and found through reactive transport modeling that the development of an anoxic region within the aggregate best described their experimental results. Given the valuable insights that these novel aggregate reactor systems have provided into the dynamics of iron and arsenic redox chemistry at the aggregate scale, an application to selenium reduction appears a logical next step. The dynamics of selenium cycling at the aggregate scale are expected to differ drastically from those investigated so far in these systems since, unlike arsenic, selenium is a contaminant that can be reductively immobilized from solution in soils.
In this study, we present data on selenium reduction from a series of flow-through reactor experiments utilizing these novel aggregate reactor systems, which mimic the dual porosity of structured soils with a microporous artificial soil aggregate contained in a flow-through reactor macropore. Our guiding hypothesis was that aggregate-scale transport coupled to microbial selenium reduction will lead to systematic spatial concentration gradients within aggregates. Similar to what has been observed for iron minerals and arsenic , we expected the Se reduction rates and emergent gradients to depend on the bulk chemical concentrations of carbon source and electron acceptor, aeration conditions, microbial activity, and the presence of sorptive phases in the solid matrix of aggregates. Our objective was thus to assess the impact of these factors on aggregate-scale selenium reduction and transport as well as to characterize emergent chemical gradients. Aggregates were made of sand or ferrihydrite-coated sand, to assess the impact of sorption on selenium reduction and transport, and initially contained a homogeneous distribution of either Thauera selenatis or Enterobacter cloacae SLD1a-1 as the model selenium reducer.
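The expectation that coupled diffusion and microbial reduction produce radial gradients can be illustrated with a back-of-the-envelope reaction-diffusion calculation. The sketch below is purely conceptual and is not a model of the experiments reported here: it assumes an idealized spherical aggregate, first-order selenate reduction, and hypothetical parameter values.

```python
# Conceptual sketch of why diffusion-limited transport plus microbial reduction
# produces radial selenate gradients inside an aggregate. This is NOT a model of
# the experiments described here; geometry is simplified to a sphere and all
# parameters are hypothetical placeholders.
import numpy as np

R = 1.25e-2   # aggregate radius (m), hypothetical
D = 1.0e-10   # effective selenate diffusivity in micropores (m^2/s), hypothetical
k = 1.0e-5    # first-order microbial reduction rate constant (1/s), hypothetical
C_bulk = 1.0  # normalized selenate concentration in the macropore solution

# Steady-state reaction-diffusion in spherical coordinates:
#   D * (1/r^2) d/dr (r^2 dC/dr) = k * C,  with C(R) = C_bulk and dC/dr(0) = 0.
# Analytical solution: C(r) = C_bulk * (R/r) * sinh(phi*r/R) / sinh(phi),
# where the Thiele modulus phi = R * sqrt(k/D).
phi = R * np.sqrt(k / D)
r = np.linspace(1e-6, R, 50)
C = C_bulk * (R / r) * np.sinh(phi * r / R) / np.sinh(phi)

print(f"Thiele modulus: {phi:.2f}")
print(f"Center-to-rim concentration ratio: {C[0] / C[-1]:.3f}")
# A ratio well below 1 indicates a strong radial gradient: selenate is consumed
# faster than diffusion can resupply it, so reduction concentrates near the rim.
```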

Unions found it hard to organize workers brought to farms by intermediaries

In the first scenario, when propanil is no longer applied in the buffer zones and no other herbicide is used to replace it, we found that total revenues in Butte and Colusa counties would decline by $1.68 million, and that net revenues would decline by $1.58 million, assuming that the price of rice does not increase in response to the 0.4% decrease in production . In the second scenario, with lambda-cyhalothrin ground-applied before planting instead of being aerially applied, and assuming 15% and 23% yield losses as explained above, we found that total revenues in Butte and Colusa would decline by $5.75 million, while net revenues would decline by $4.66 million. Again, this assumes that the price of rice does not change in response to the reduction in quantity of rice produced. The combined revenue losses of the draft regulations, due to changes in application of both propanil and lambda-cyhalothrin, would be a $7.43 million loss in total revenues and a $6.25 million loss in net revenues for Butte and Colusa counties. Our analysis indicates that the draft regulations will likely have a substantial negative impact on California rice growers in Butte and Colusa counties, with a decrease in total revenues of $7.4 million and a decrease in net revenues of $6.2 million, if rice prices do not shift because of the decreases in production. The results change substantially if price is allowed to increase in response to a reduction in quantity of rice in California. However, this is not a very realistic scenario given that rice prices are greatly influenced by world market prices, California only accounts for about one-fifth of U.S. rice production, and the United States is active in the international rice market.

The magnitude of the predicted revenue losses can be accounted for by the fact that there are no ideal substitutes for propanil and lambda-cyhalothrin, that large yield losses are expected from weed and rice water weevil damage in untreated buffer areas, and that a sizable amount of rice acreage is affected by the draft regulations. The price of the alternative treatment in comparison to the current treatment is unlikely to be a major factor because farmers will most likely leave buffers untreated with herbicide and switch to ground applications of lambda-cyhalothrin, which involves a negligible increase in per-acre costs. Additionally, because lambda-cyhalothrin also controls tadpole shrimp, another pest of seedling rice, early pest management may become more expensive. Due to the very high share of fields affected, additional management costs due to the regulations, which are not estimated here, could be substantial. Even if the additional management costs under the draft regulations were only $100 per field, this would lead to additional revenue losses of $470,000 (implying on the order of 4,700 affected fields).

The federal National Labor Relations Act excludes farm workers. California is the only major farm state with a state law that grants union rights to farm workers, establishes election procedures under which workers decide whether they want to be represented by unions, and remedies unfair labor practices committed by employers and unions. The Agricultural Labor Relations Act was enacted in 1975 after a decade of strife, as the fledgling United Farm Workers union challenged farm employers and the Teamsters for the right to represent farm workers. Experience during the late 1960s, with farm employers sometimes selecting the Teamsters to represent their workers without elections, led to provisions in the ALRA. These allowed the Agricultural Labor Relations Board to recognize a union as the bargaining representative of farm workers only after workers vote in secret-ballot elections. After the ALRA went into effect in fall 1975, there were over 100 elections a month on the state’s farms, and it appeared that many of the state’s farm workers wanted to be represented by unions. Figure 1 shows that between 1975 and 1977 there were almost 700 elections on California farms, and unions were certified to represent workers on two-thirds of the farms involved .

Unions on most large vegetable farms and many of the largest fruit farms were expected to transform the farm labor market by raising wages and obtaining benefits such as health insurance and pensions for the seasonal workers they represented. After the UFW pushed entry-level wages in lettuce contracts to twice the minimum wage, Business Week predicted on March 5, 1979, that the United Farm Workers would help seasonal farm workers “to win wage parity with industrial workers.” The UFW became a major force in state politics, and sued the University of California to stop the use of taxpayer funds to support labor-saving mechanization research. Union organizing slowed to an average of 30 elections a year in the 1980s, and the share of elections that resulted in a union being certified to represent workers fell to 55%. Unions or workers can request secret-ballot elections, and during the 1990s requests fell to an average of 10 a year, with unions winning half. In the first decade of the 21st century, the average number of elections fell to seven a year, and many involved workers trying to decertify the union representing them. In some years, the UFW requested no elections to win certification to represent more workers, and it was decertified at farms including L.E. Cooke, Vista Vineyard, and Henry Hibino. Over 15 organizations have been certified by the ALRB to represent workers on California farms, but today three major unions represent most of the farm workers covered by contracts. The best-known union, the UFW, reported 4,300 active members to the U.S. Department of Labor at the end of 2010, and 2,500 active participants in its Juan de la Cruz pension fund; that is, workers on whose behalf employers made pension contributions sometime during the year. Teamsters Local 890 represents several thousand workers employed in the Salinas area, while United Food and Warehouse Workers Local 5 represents workers in the Salinas area and at several wineries and dairies around the state. The UFW does not have local unions.

There are four major explanations for why farm worker unions have been unable to represent more California farm workers and transform the farm labor market. The first involves flawed union leadership, especially of the UFW. Journalist Miriam Pawel praised UFW leader Cesar Chavez as a charismatic leader, able to articulate the hopes and dreams of farm workers, but concluded that Chavez was unwilling to turn the UFW into a business union that negotiated and administered contracts.

Chavez seemed more interested in using the UFW to achieve broader social change than in organizing more farm workers who might challenge his leadership. The second explanation involves state politics. Democratic governors made key appointments to the ALRB between 1975 and 1982, Republicans between 1983 and 1998, Democrats between 1999 and 2004, Republicans between 2005 and 2011, and Democrats since. Sociologists Linda and Theo Majka concluded that the ability of farm worker unions to organize and represent farm workers in the 1970s and early 1980s depended on which political party made appointments to the ALRB. Since then, arguments about political interference with the ALRB have diminished. The third explanation deals with changes in the structure of farm employment. Farm worker unions were most successful in the 1960s and 1970s with farms that belonged to conglomerates with brand names that made them vulnerable to boycotts, including Seven-Up, Shell Oil, and United Brands . During the 1980s, many conglomerates sold their California farming operations to growers who were more likely to hire farm workers via intermediaries such as custom harvesters and farm labor contractors. The fourth explanation is rising unauthorized migration, which added to the supply of labor and made it hard for unions to win wage increases. Figure 2 shows that the number of deportable aliens located, mostly foreigners apprehended just inside the Mexico-U.S. border, was rising when unions had their maximum impacts on wages. This occurred between the mid-1960s and the late 1980s, after the Bracero program ended and before unauthorized migration increased in the 1980s with recession and peso devaluations in Mexico. By the mid-1980s, when apprehensions rose to almost 1.8 million a year, unions found it hard to organize workers fearful of being discovered by Border Patrol agents. It was also difficult for unions to win wage and benefit increases after they were certified to represent workers because newcomers from Mexico were flooding the labor market.

Farm worker unions acknowledge their difficulty organizing and representing farm workers, and hope for federal and state legislative changes to restore union power. Their primary federal goal is enactment of the Agricultural Jobs, Opportunity, Benefits and Security Act , a compromise negotiated with farm employers that would legalize currently unauthorized farm workers and make employer-friendly changes to the H-2A guest worker program. Unions believe that legal workers grateful to them for legal status would be easier to organize. However, AgJOBS is unlikely to be negotiated soon, prompting the UFW to urge changes to the ALRA. The UFW won an amendment to the ALRA in 2002 that guarantees a union contract within eight months, and another in 2011 that allows the ALRB to intervene after employers unlawfully interfere before a union election. Unions certified to represent farm workers want to negotiate agreements that set wages and benefits and protect the union as an institution by requiring workers to join the union and pay dues. There is no master list of contracts signed between farm employers and unions, preventing analysis of which union certifications failed to result in contracts. However, it is clear that most of the over 800 farms on which unions were certified to represent workers never had a union contract. Furthermore, unions were unable to renew contracts with many of the farms that signed first contracts. Unions tackled the difficulty of turning election victories into contracts with the 2002 mandatory mediation amendment to the ALRA, a remedy that should have been unnecessary. The ALRA includes a unique remedy to encourage employers to bargain in good faith with their certified union. If employers fail to bargain in good faith, the ALRB can order the employer to make employees whole for lost wages and benefits during the time that the employer failed to bargain. Unions led by the UFW argued that the make-whole remedy was not effective because of long lags between when an election is held and ALRB certification of the results. Employers often contest the ALRB’s certification decision in the courts and, by the time the employer is ordered to begin good-faith bargaining, there may have been significant worker turnover and shifts in union priorities. Meanwhile, separate procedures to determine the amount of make-whole relief owed to workers can take years, frustrating workers who expected wage and benefit increases soon after voting for union representation. Unions argued that such employer behavior discouraged worker interest in the benefits of collective bargaining. The California Legislature agreed, approving an amendment to the ALRA that allowed mandatory mediation if employers and unions are unable to reach a first agreement via good-faith bargaining. Since 2003, employers and their certified unions bargain for at least 180 days to reach a first contract . If they fail, either party can request help from a mediator for an additional 30 days of bargaining. If this mediated effort fails, the mediator can set the terms of an agreement that the ALRB can impose on the parties. Mandatory mediation, which aims to ensure that unions get first contracts quickly, was denounced by growers as a perversion of collective bargaining, whose goal is to allow the parties closest to the workplace to negotiate wages, benefits, and working conditions.
Fears that unions would frequently invoke mandatory mediation, to try to gain via mediation what they could not win at the bargaining table, prompted limits on how often it could be invoked; no union could request mediation more than 75 times between 2003 and 2007. This limit proved unnecessary. Mandatory mediation has been invoked seven times in nine years. In two cases, Hess Collection Winery and Boschma and Sons Dairy, a mediator imposed a collective bargaining agreement; in two others, Bayou Dairy and Frank Pinheiro Dairy, the employer went out of business. In Pictsweet, Valley View Farms, and D’Arrigo, the parties reached a collective bargaining agreement during the mediation process.

Animal intrusion into fresh produce fields causes significant agricultural losses each year

We present a novel approach that uses multiple linear regression to combine the CPU temperature from nearby SBCs and remote weather stations, to estimate the temperature at outdoor locations that do not have temperature sensors. We use sensor data to train and test multiple regression models. We investigate the efficacy of using different smoothing techniques and we account for the computational load of SBCs at the time of measurement and data collection. We find that our approach enables a prediction error that is less than 1.5 degrees Fahrenheit, while past work results in errors of 1–14 degrees Fahrenheit for similar datasets. We integrate sensor synthesis into Hypatia and use it to facilitate automatic and scalable model selection, as well as visualization for different data sets and recommendations. Finally, we developed a new approach to distributed scheduling for analytics applications in IoT settings: sensor-edge-cloud deployments. Our scheduler takes advantage of remote resources when available, while fully utilizing local edge systems, as it optimizes for time to completion for applications and workloads. The scheduler uses remote resources only if doing so reduces the latency of providing actionable insights locally. The scheduler uses histories of both computation and communication times for the applications to construct a job placement schedule that minimizes application response latency . Hypatia then uses this schedule to automatically deploy workloads across edge systems and cloud computing systems. We empirically evaluate Hypatia using both clustering and regression services and show that it is able to achieve better end-to-end performance than using the edge or cloud alone. The result is the first end-to-end system that fully utilizes edge computing resources as it serves the needs of precision agriculture.
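The sketch below illustrates the kind of sensor synthesis via multiple linear regression described above: predicting outdoor air temperature from the CPU temperatures of nearby SBCs, their computational load at measurement time, and a remote weather-station reading. It is not the Hypatia implementation; the feature layout and data are synthetic and illustrative, and no smoothing is applied.

```python
# Minimal sketch of sensor synthesis via multiple linear regression.
# The features and data below are synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500

# Hypothetical training data: two SBC CPU temperatures, CPU load at
# measurement time, and the nearest weather-station temperature (deg F).
air_temp = rng.uniform(40, 100, n)                   # ground truth at the target location
cpu_temp_1 = air_temp + 25 + 5 * rng.normal(size=n)  # CPUs run hotter than ambient air
cpu_temp_2 = air_temp + 30 + 5 * rng.normal(size=n)
cpu_load = rng.uniform(0, 1, n)
cpu_temp_1 += 10 * cpu_load                          # load inflates CPU temperature
cpu_temp_2 += 12 * cpu_load
station_temp = air_temp + 2 * rng.normal(size=n)     # remote station, some offset

X = np.column_stack([cpu_temp_1, cpu_temp_2, cpu_load, station_temp])
train, test = slice(0, 400), slice(400, None)

model = LinearRegression().fit(X[train], air_temp[train])
pred = model.predict(X[test])
print(f"MAE: {mean_absolute_error(air_temp[test], pred):.2f} deg F")
```

Including CPU load as a feature lets the regression discount the load-induced component of CPU temperature, which is the same reason the approach above accounts for computational load at measurement time.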

It does so by accounting for resource constraints at the edge, the lack of or intermittent connectivity to the public cloud, and the expense of transmitting the data to/from remote cloud systems. Moreover, the system is open-source and integrates a wide range of analytics, scoring methods, and visualization tools, which can be easily extended with new and emerging techniques. By doing so, we enable others to easily build upon, extend, reproduce, and compare against our work in the future. Moving forward, we hope to encourage adoption of Hypatia by growers, farm consultants, and data analysts interested in taking advantage of the locality of edge systems to provide low-latency analytics. Given the existing infrastructure, we plan to add new sensors, develop more synthesized sensors, and integrate additional analytics and scoring methods. Specifically, we plan to extend Hypatia with support for image classification and to use analytics accelerators at both the edge and in the cloud when available. Other future work includes investigating new data sources and machine learning algorithms that inform a more refined scheduling algorithm that can take advantage of even more granular resources. In addition, Hypatia error analysis can benefit from additional abstractions that account for error propagation, which has the potential for making the results and recommendations more informative and trustworthy.

Fragmentation of natural habitats during conversion of wild lands to agriculture and the subsequent increase in agrochemicals has resulted in a loss of biodiversity and a deterioration of ecosystem function, including natural pest control. Non-crop habitats harbor natural enemies of crop pests . Such habitats also harbor beneficial songbirds that consume insect pests , and provide perching sites for raptors that deter avian and rodent pests . Balancing the role of agricultural lands in providing habitat for biodiversity while simultaneously avoiding bird damage and reducing food safety risks is the primary goal behind the concept of co-management, which is recommended by the Food Safety Modernization Act .

Wild and domestic animals destroy crops by eating and trampling them, and can pose food safety risks due to the deposition of potentially contaminated feces on or near the crops . Birds are one of the most challenging animals to keep out of agricultural fields, and they may harbor foodborne pathogens. For example, European starlings are a source of Salmonella enterica at concentrated animal feeding operations , posing a greater risk of pathogen transfer than other variables like cattle density, facility management operations, and environmental variables . They also may be a significant source of other Salmonella spp., Escherichia coli O157, and other Shiga toxin-producing E. coli . During a study at a CAFO in southern Arizona, 103 birds were tested for foodborne pathogens. Two tested positive for Salmonella, and five tested positive for non-O157 Shiga toxin-producing E. coli. Other studies have shown similar results, as documented in a review by Langholz and Jay-Russell that listed 23 studies on foodborne pathogen prevalence in birds, including positive results for ducks, gulls, starlings, and pigeons. A more recent review listed foodborne pathogens specifically transmitted by wild birds . All reviews discuss a 2008 outbreak of Campylobacter related to pea consumption because it was one of few outbreaks directly linking the pathogen to a wildlife source, in this case, sandhill cranes . This highlights the potential risks to food safety associated with migratory birds. Damage and food safety risks from wildlife activities remain significant economic problems despite the use of a variety of methods to control bird and rodent pests . Yield loss and economic impacts vary by crop and region, but can be a substantial burden on growers . Growers of fresh produce try countless methods to deter birds. These deterrents fall into nine general categories . This paper is not intended to be an exhaustive review of bird deterrents; instead, we present an overview of the ones most used in the field, as well as methods that utilize multiple techniques in an effort to develop integrated pest management for nuisance bird control.

The array of visual bird deterrents is expansive, and includes lights that are flashing or rotating, searchlights, mirrors and reflectors, reflective tape, flags, rags, streamers, lasers, dogs, humans, scarecrows, raptor models, corpses, balloons with eye spots, kites, kite hawks, mobile predator models, and water dyes or colorants . All of these methods work to some degree for a short period of time, until birds habituate. Lasers that were used to disperse crows, for example, resulted in an initial dispersion, but crows reoccupied their roosts the same night that the lasers were used, and none of the roosts were abandoned . Kite balloons were shown to be effective in the short term, but birds quickly became habituated, reducing their effectiveness over time . Similarly, balloons with eye spots have been used in an attempt to reduce damage to vineyard grapes in New Zealand, but growers reported no economically significant effect . Generally, balloons, scarecrows, hawk kites, and reflective tape work best with sound cannons or netting, described below . Noise deterrents are generally effective, but much like visual deterrents, birds easily become habituated to them, decreasing their efficacy over time. They have the added issue that growers who use them are subject to complaints of nuisance noise from neighbors . Propane sound cannons are the most commonly used noise deterrent, but they need to be repositioned weekly and set to go off randomly every 7–20 minutes during daylight hours for the greatest effect. Since sound cannons usually make a hissing noise before sounding off, they give birds a warning to leave the area; the birds then return after the explosive noise. Some of the other common noise deterrents include bangers, screamers, squawkers, whistlers, scare cartridges, and noise bombs . Even human presence can be used as a noise deterrent when people rattle cans, crack whips, yell, honk horns, or shoot guns . Human activity can be very effective at keeping nuisance birds out of fields when fields are small enough to drive or walk around, but it can be expensive to maintain a human guard on duty. Instead, some growers use synthetic sounds that offer unambiguous messages that elicit interspecific responses, like distress calls . They prevent habituation by varying the rhythm and number of signals emitted . In a study of alarm calls from crimson rosellas in orchards, researchers found that these birds were effectively deterred in the short to medium term . However, distress calls offer another challenge, since they may be an invitation to nearby predators indicating that their next meal is ready. Broadcast units are a less expensive, more technologically advanced noise deterrent that reproduce accurate and effective bird calls and significantly reduce damage in vineyards .

Another moderately effective noise deterrent is the sonic net, which overlaps with the frequency range of bird vocalizations, making communication among a flock ineffective. When a sonic net was used at an airfield, researchers demonstrated an 82% reduction in birds in the sonic net area, and it remained effective after four weeks of exposure . Fencing is an effective non-lethal, long-term method used as a standard technique to minimize wildlife intrusion into agricultural lands . While fencing cannot be used to deter birds, netting can be. While noise deterrents used against juvenile starlings in a cherry orchard were shown to be ineffective, research suggests that the netting-in of an orchard would be more effective . However, while netting is the most effective method, it also has some drawbacks. It is one of the most expensive methods for deterring birds due to the massive areas of crops that need to be covered . It can also be easily damaged, and it can be a hazard to wildlife. Other exclusion methods used against birds are electric fencing, overhead wires, and anti-perching devices, such as spikes, some of which are also considered tactile deterrents and forms of habitat modification described below. The concept of habitat modification to deter nuisance birds includes a wide array of activities, from providing better-quality forage or shelter in alternate locations through lure crops or sacrificial crops, to simply removing roost structures, food, and shelter, forcing birds to go elsewhere. In many cases, deterring nuisance birds from one field causes them to negatively impact neighboring farms. For that reason, Ainsley and Kosoy propose collective action on the part of neighboring farmers in which communal feeding plots are constructed to protect the fields of all farmers in a single area, thereby evenly distributing crop losses and maintaining stable bird populations in the ecosystem . The USDA’s Wildlife Services attempted this method when they began to cost-share eight-hectare Wildlife Conservation Sunflower Plots with sunflower growers to lure migrating blackbirds away from commercial sunflower fields. The targeted blackbirds ended up removing 10 times more sunflower seeds from the WCSP than from commercial fields, making this strategy an important part of an integrated pest management plan for commercial sunflower growers . Monk parakeets tend to damage corn and sunflower fields that are closest to man-made structures and adjacent trees, areas with tree patches around the crop fields, and areas with high availability of pasture and weedy and fallow fields . The removal of these landscape features that attract birds, like areas with structures for perching, breeding, and shelter, can cause birds to move out of an area . A recent study indicated that hedgerows harbor higher biodiversity of rodents, but that biodiversity does not spill over into wildlife intrusion into fields . While rodents differ from birds, the concept of wildlife utilizing adjacent habitat without affecting agricultural crops or impacting food safety is similar. Physiological methods of bird control include such things as chemosterilants, contraceptives, and immunocontraceptive vaccines . These are rarely, if ever, used by growers in agricultural areas because they require extensive permitting and veterinary oversight, often making their use infeasible. Linz, Bucher, et al.
identified four limiting factors hindering the use of contraceptive methods and lethal control of birds in agriculture: 1) the high cost of implementation combined with challenges related to maintaining long-term control of birds, 2) determining the population level in an area that would be considered acceptable and therefore serve as a measure of success, 3) ensuring that the treatment would be directed only at the birds actually causing crop damage, and 4) managing immigration of non-treated birds. Chemical bird deterrents, such as taste and behavioral repellents, are expensive, difficult to apply, and not as effective in the field as they are in the lab; they also need to be licensed, and some overlap with lethal deterrents.

A large state space of course requires more computational power than MZA

Clearly, there is a significant difference between the most common clustering and the best clustering in almost every case . Another possibility is that the rare clusterings having the best BIC scores may not correspond to geographically meaningful EC maps. That is, the best BIC may correspond to a statistically meaningful solution that does not provide insight for soil zone management. Figure 3.9 shows the geographic mappings of the best BIC clusterings from Table 3.8. We assign a different color to each EC data point based on the cluster to which it has been assigned and then graph the data points based on latitude and longitude. From the figures, it is clear that the clusterings correspond to feasible zone management maps. That is, points belonging to the same cluster are often adjacent in geographic space, indicating a strong EC mapping relationship. To illustrate in greater detail, consider the CAP dataset results. CAP is a lemon field with soil consisting of clay, sandy-loam, and sandy-clay-loam. Figure 3.8a shows that jobs from Job-5 on have very stable results with similar BIC scores. Figure 3.9a shows the best clustering from Job-2048, with 4 clusters having cardinality [2103, 500, 473, 156] and a BIC score of -8918.35. We compare this result with the most common clustering for Job-2048, which occurred 1445 times with a BIC score of -10169.7 and two clusters having 2169 and 1063 elements, respectively . The visual difference between those two clusterings shows that most of the “disagreement” appears along cluster boundaries. In addition, we consider how clustering results compare to the soil samples taken at the CAP field. Figure 3.10c shows the core samples taken at five different locations and their soil type. Of the five core samples available, the top two in the figure belong to the same cluster in both the best and the most common clustering .

The other three core samples report clay in the lower left corner, followed by sandy-loam and sandy-clay-loam. In the best clustering, they all belong to different clusters, while the most common clustering puts all three core samples in the same cluster . Thus, the best clustering corresponds more closely to a core-sample analysis than the most common clustering. Note that for each fixed set of parameters , we ran 1,228,800 different experiments. This is the maximum frequency that can occur for a particular value of k resulting in the best or most common BIC. Table 3.8 shows that the most common clustering usually has fewer clusters, while the best clustering provides higher resolution and therefore additional information that a farmer may find useful for management. The analysis of the other datasets is similar. In each case, the best BIC score is rare, requiring a large number of repeated trials, each with a different initialization, to determine. In all but one case, the best clustering differs substantially from the most frequently occurring clustering. The best clusterings correspond to meaningful EC soil maps, and those maps correctly register with soil core samples.

We start by comparing Centaurus against MZA for the synthetic datasets. We use the number of clusters that both the FPI and NCE scores report for MZA as the optimal number of clusters. We then use the respective cluster assignment to compute the error rates. Figure 3.11 shows the best assignments produced by Centaurus and MZA, and Table 3.9 shows the percentage of incorrectly classified points in each dataset for the same assignments. For MZA, the best assignment is achieved by Mahalanobis distance, and for Centaurus the best assignment is achieved by Full-Untied. MZA clusters Dataset-1 correctly and reports K = 3 as the ideal number of clusters . For Dataset-2, MZA correctly identifies K = 3 but has a higher error rate of 13.8%. A possible reason for this is that MZA only considers a single initial assignment of cluster centers, which in this case converges to a local minimum that is different from the global minimum. Centaurus avoids this kind of error by performing several runs of the k-means algorithm before suggesting the optimal cluster assignment. Dataset-3 consists of clusters with correlation across features.

Centaurus provides better results than MZA for this dataset, achieving a percentage error of only 0.1%. A possible reason for this is that MZA employs a global covariance matrix and does not consider the Tied and Untied options as Centaurus does, options that result in better label assignments. Another limitation of MZA is that it uses a free variable, called the fuzziness parameter, and multiple scoring techniques. It is challenging to determine how to set the fuzziness value even though the results are highly sensitive to this value. For the results in this section, we chose the default fuzziness parameter of m = 1.3 as suggested by Odeh et al. . Furthermore, for the farm datasets, the MZA scoring metrics do not always agree, providing conflicting recommendations and forcing the user to choose the best clustering. In combination, these limitations make MZA hard to use as a recommendation service for growers who lack the data science background necessary to interpret its results. Centaurus addresses these limitations by providing a sufficiently large number of k-means runs, no free parameters, and more sophisticated ways of computing the covariance matrix in each iteration of its clustering algorithm. It uses a single scoring method to decide on the best clustering to present to a novice user, while providing the diagnostic capabilities that more advanced users need.

Moreover, FPI and NCE disagree more often than they agree for these datasets. For the Cal Poly dataset, both scores agree only when m = 1.5, suggesting that k = 4 is the best clustering. For other values of m, MZA recommends cluster sizes that range from k = 2 to k = 5. For Sedgwick and m = 2.0 , FPI selects k = 3 and NCE selects k = 2. For UNL, no FPI-NCE pairs agree on the best clustering, with MZA recommending all values of k across different values of m. Because fine-grained EC measurements are not available for the Cal Poly, Sedgwick, and UNL farm plots, it is not possible to compare MZA and Centaurus in terms of which produces more accurate spatial maps from the Veris data. Even with expert interpretation of the conflicting MZA results for Cal Poly and UNL, we do not have access to “ground truth” for the fields.
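The repeated-initialization, BIC-scored model selection described above can be sketched with off-the-shelf tools. The example below is not Centaurus: it substitutes scikit-learn’s GaussianMixture, which supports four covariance structures ('full', 'tied', 'diag', 'spherical') rather than the six variants discussed here, and its BIC is defined so that lower is better (the opposite sign convention from the scores reported above). The synthetic data and parameter values are illustrative only.

```python
# Sketch of BIC-scored selection over many random initializations and
# covariance structures. NOT Centaurus; see the caveats in the text above.
import numpy as np
from sklearn.mixture import GaussianMixture

def best_clustering(X, k_range=range(2, 8), n_restarts=20, seed=0):
    """Return (labels, bic, k, covariance_type) for the lowest-BIC fit found."""
    rng = np.random.default_rng(seed)
    best = None
    for k in k_range:
        for cov in ("full", "tied", "diag", "spherical"):
            for _ in range(n_restarts):
                gm = GaussianMixture(
                    n_components=k,
                    covariance_type=cov,
                    n_init=1,  # one fit per restart so each initialization is scored separately
                    random_state=int(rng.integers(1 << 31)),
                ).fit(X)
                bic = gm.bic(X)
                if best is None or bic < best[1]:
                    best = (gm.predict(X), bic, k, cov)
    return best

# Illustrative synthetic "EC-like" data: two correlated features, three zones.
rng = np.random.default_rng(1)
X = np.vstack([
    rng.multivariate_normal(m, [[1.0, 0.6], [0.6, 1.0]], 300)
    for m in ([0, 0], [5, 5], [0, 6])
])
labels, bic, k, cov = best_clustering(X)
print(f"best BIC={bic:.1f} with k={k}, covariance='{cov}'")
```

As in the experiments described above, the rarity of the best-scoring solution is exactly why many independent restarts are needed before a single recommendation is made.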

However, it is possible to compare the two methods with the synthetic datasets shown in Figure 3.1. Note that this evidence suggests Centaurus is more effective for some clustering problems but is not conclusive for the empirical data. Instead, from the empirical data we claim that Centaurus is more utilitarian than MZA, because disagreement between FPI and NCE, along with differing possible best clusterings based on user-selected values of m, can make MZA results difficult and/or error-prone to interpret for non-expert users. MZA recommendations may be useful in providing an overall high-level “picture” of the Veris data clustering, but its varying recommendations are challenging to use for making “hard” decisions by experts and non-experts alike. In contrast, Centaurus provides both a single “hard” spatial clustering assignment and a way to explain why one clustering should be preferred over another and which one is “best” when ground truth is not available. To do so, Centaurus uses its variants of k-means, a BIC-based scoring metric, and large state-space exploration to determine a single “best” clustering. The only free parameter the user must set is the size of the state space exploration . As the work in this study illustrates, Centaurus can find rare and relatively unique high-quality clusterings when the state space it explores is large.

MZA is a stand-alone software package that runs on a laptop or desktop computer. In contrast, Centaurus is designed to run as a highly concurrent and scalable cloud service and uses a single processor per k-means run. As such, it automatically harnesses multiple computational resources on behalf of its users. Centaurus can be configured to constrain the number of resources it uses; doing so proportionately increases the time required to complete a job . For this work, we host Centaurus on two large private cloud systems: Aristotle and Jetstream Stewart et al., Towns et al. . Extensive studies of k-means demonstrate its popularity for data processing, and many surveys are available to interested readers Jain et al., Berkhin . In this section, we focus on k-means clustering for multivariate correlated data. We also discuss the application and need for such systems in the context of farm analytics when analyzing soil electrical conductivity. To integrate k-means into Centaurus, we leverage Murphy’s work in the domain of Gaussian Mixture Models. This work identifies multiple ways of computing the covariance matrices and using them to determine distances and log-likelihood. To the best of our knowledge, there is no prior work on using all six variants of cluster covariance computation within a k-means system. We also utilize the k-means++ work of Arthur & Vassilvitskii for cluster center initialization. The research and system most closely related to Centaurus is MZA Fridgen et al. , a computer program widely used by farmers to identify clusters in soil electrical conductivity data to aid farm zone identification and to optimize management.

MZA uses fuzzy k-means Dunn , Bezdek , computes a global covariance matrix, and employs either Euclidean Heath et al. , diagonal, or Mahalanobis distance to compute the distance between points. MZA computes the covariance matrix once from all data points and uses this same matrix in each iteration. MZA compares clusters using two different scoring metrics: the fuzziness performance index Odeh et al. and normalized classification entropy Bezdek . Centaurus attempts to address some of the limitations of MZA . We also show that although MZA provides multiple scoring metrics to compare cluster quality, the MZA metrics commonly produce different “recommended” clusterings. The authors of x-means Pelleg et al. use the Bayesian Information Criterion Schwarz as a score for the univariate normal distribution. Our work differs in that we extend the algorithm and scoring to multivariate distributions and account for different ways of covariance matrix computation in the clustering algorithm. We provide six different ways of computing the covariance matrix for k-means for multivariate data and examples that illustrate the differences. Different parallel computational models have been used in other works to speed up the k-means cluster initialization Bahmani et al. , or its overall runtime. Our work differs in that we provide not only a scalable system but also k-means variants, flexibility for a user to select any one or all of the variants, as well as a scoring and recommendation system. Finally, Centaurus is pluggable, enabling other algorithms to be added and compared.

The Internet of Things is quickly expanding to include every “thing,” from simple Internet-connected objects to collections of intelligent devices capable of everything from the acquisition, processing, and analysis of data to data-driven actuation, automation, and control. Since these devices are located “in the wild,” they are typically small, resource-constrained, and battery-powered. At the same time, the low-latency requirements of many applications mean that processing and analysis must be performed near where data is collected. This tension requires new techniques that equip IoT devices with more capabilities. One way to enable IoT devices to do more is to use integrated sensors to estimate the measurements of other sensors, a technique that we call sensor synthesis. Since the number of sensors per device is generally bounded by design constraints, sensor synthesis makes it possible to free up resources in IoT devices for other sensors. We focus on estimating values of measurements where estimation error is low, freeing up space for sensors with measurements that are harder to estimate. Many, if not most, IoT systems for precision agriculture depend on and integrate measurements of real-time atmospheric temperature. Temperature is used to inform and actuate irrigation scheduling, frost damage mitigation, greenhouse management, plant growth modulation, yield estimation, post-harvest monitoring, crop selection, and disease and pest management, among other farm operations Ghaemi et al. , Stombaugh et al. , Ioslovich et al. , Roberts et al. , Gonzalez-Dugoa et al. .

Qualitative data was analyzed through coding responses to specific questions

We do not know the fate of 15 ghost CSAs, as no definite statement of closure could be found and all contact attempts failed. Of the other 13, some left farming, some were still farming but without CSAs, and one moved out of state and continues to farm. Removing all of these from the study left 74 CSAs that met our definition. Primary data collection occurred from January 2010 to April 2011 and involved two components: a semistructured interview and a survey conducted through an online questionnaire. All 74 CSAs were contacted by phone and e-mail. Fifty-four CSA farmers and two CSA organizers, together representing 55 CSAs, agreed to participate in the study and were interviewed. In most cases, we interviewed the farmers directly responsible for the CSA operation, but two cases were different. In one, a CSA organizer worked with two farms to create an independent CSA; one of these farms also had its own CSA, while the other farm only sold through the CSA run by the organizer — these two farms and the organizer count as two CSAs. In the other case, the CSA organizer brought many farmers, none of whom have their own CSA, together to form one CSA. Forty-eight of the 54 CSA farmers interviewed completed the survey; the others did not, despite repeated reminders. We did not request survey responses from the CSA organizers. We used the qualitative data from the farmers and the two CSA organizers who did not complete the questionnaire, but we were unable to include their information in most quantitative data. We analyzed the quantitative data by creating summary statistics of various characteristics, with some bivariate statistical analysis.

In the interviews, we asked CSA farmers about the prices for their CSA shares, how their CSA delivery systems worked, whether they bought supplemental produce from other farms, and the extent to which they used volunteers on the farm; and in the survey, we asked about the types of food and other products in their shares, minimum payment periods and events hosted at the CSAs. As a result, CSA types emerged that differed from our original conception of a CSA — that members shared risk with the farm and paid for a full season up front. None of the CSAs had a formal core member group deciding what to produce, none had mandatory member workdays, and many did not require long minimum payment periods or share production risks with members.

The membership/share model requires customers to make an upfront membership or share payment. It is rare; only four of the 48 CSAs operated this way. Two of the four CSAs used only the membership model; the other two combined it with the box models by offering member discounts. The membership payment is made before members actually pick up any produce. Members give the farmer some amount of money, which becomes credit for use at the farm’s U-pick, farm stand or farmers market stall. Members do not pick up a set amount of produce but are able to pick and choose, and receive a discount by paying in advance. With share payments, members can sign a contract to own a share of a farm animal, and the share payment covers the animal’s feed. The member then purchases that animal’s products. The member does not get any discount for their share but is able to gain access to locally raised and processed animal products, which are not widely available in the region. He or she is also sharing the risks associated with raising livestock with the rancher or farmer. These differently arranged enterprises, all called CSAs by their operators, demonstrate a central finding: Much innovation is occurring in how farmers and consumer members connect through a CSA.

Farmers have adapted the CSA model to their ambitions for their farm, to innovative products and to regional conditions. CSA farmers have different preferences for their operations. Some want to remain small, while others want to grow; these goals require different strategies. Farmers have added new products, especially meat and dairy, into their CSAs, although the processing of those products does not fit easily with handling practices developed for fruits and vegetables. Other innovations include changing CSA payment and delivery systems so that they are more attractive and accessible to people who are not familiar with the concept and to consumers who cannot afford a large upfront cost, both of which are important realities in the Central Valley. For example, 20% of CSAs in the study had no minimum payment period, allowing week-by-week payments, which extends membership to a broader population, including those hesitant or unable to commit to extended payments. Requiring no long-term commitment was also a common practice among meat CSAs in our sample, which often do not know exactly which products will be available and when, including both individual cuts and types of meat. This uncertainty stems from maturation, slaughtering and butchering processes. Few slaughter and butcher facilities serve small-scale producers. Consequently, CSA meat producers compete with large-scale operations for limited processing capacity, and there is greater variability in their animals’ maturation because they are raised primarily on pasture. Scheduling difficulties can result; for example, during the summer, CSA ranchers may need to schedule slaughtering months in advance, but their animals may not be ready by the scheduled date. Meat CSAs rely on committed customers who agree, typically on a monthly basis, to buy some amount of a variety of meat.

To understand their economic viability, we asked CSA farmers about gross annual sales and net profits in 2009, the CSA’s contribution to the total economic activity of the farm, other marketing channels used and how the farmers valued their labor.

In the survey, we asked whether partners held off-farm jobs and about the CSA’s general profitability. We found that the CSA was a crucial direct-to-consumer marketing channel for the small- and medium-scale farmers in our study. On average, the farmers obtained 58% of gross sales from their CSA. In general, small-scale farmers were more dependent on their CSA than larger-scale CSA farmers. Most farmers also sell into other channels, including wholesale and direct-marketing venues, especially farmers markets. Some farm-linked aggregator box CSAs act as wholesale outlets for small farms with their own CSAs. Farmers in our study commonly chose the CSA as a marketing outlet to diversify their income channels. Some had little access to organic wholesale markets, while others wanted to increase sales beyond farmers markets and other direct sales. Some newer farmers started with a CSA to help raise needed capital. As motivations for choosing a CSA, most respondents mentioned the advantages of knowing sales volumes in advance and being paid up front, before the growing season begins. Assessing the economic viability of CSA operations is difficult because it involves both the baseline profitability of the business and the need to generate sufficient income for retirement, health insurance, college for children, land purchases and so on. In addition, farmers conceptualize profit differently. Some consider their salaries as profit, while others set aside a salary for farm partners and consider profit to exclude this salary. Not all farmers amortize their accounting, and many reinvest surpluses in the farm to make it more productive or reduce taxes. Consequently, we asked a variety of questions about farm economics.

When we asked farmers in the interviews why they wanted to do a CSA and about the general philosophy behind their farm and CSA operation, most were not interested solely in maximizing sales, profit or their salaries. When asked about their motivations and farming philosophy, CSA farmers said they loved farming, felt satisfaction in providing fresh food to their communities and educating people about food and agriculture, and wanted to make positive change. As one farmer noted, “The world’s messed up, and we’re fixing it — one family at a time, one farm at a time” . Although that sentiment was common, CSA farmers’ political commitments ranged from libertarianism to socialism to evangelical Christianity to feminism. We also found a diversity of views on the CSA as a business: Many saw their CSA as promoting their deeply held values, independent of maximizing profit. For example, one newer CSA farmer said: “I really want to empower other women to work in sustainable agriculture . . . Almost all our applications for internships are from women, probably 75%, but there aren’t that many women farmers” . CSA farmers frequently mentioned receiving nonmonetary forms of compensation: tangible benefits such as living and/or raising children on a farm, benefiting from improvements to the property, eating well and living healthfully; and intangible ones such as the lifestyle and deeply rewarding hard work. One farmer noted: “We don’t keep track of hours ‘cause that would be depressing from a pay standpoint. But we just love it. We probably should [do time tracking], but on the other hand, it’s part of the lifestyle. It isn’t jobby at all.

We have what we need to get by, but we don’t pay ourselves an official wage.” Some farmers in our study ran their CSAs to make money, although all did so within the context of broader social and environmental commitments. As an example, farmer 39A and farmer 39B, a husband and wife team, said their philosophies for the CSA were, respectively, to “make money to send children to college” and “capitalism — you have to be greedy, grubbing capitalists.” However, they went on to illustrate their underlying environmental and social commitments. When farmer 39A said, “We always try to be the top of the market in terms of quality and price,” farmer 39B added that they value growing the “most nutrient-dense food [and] finding a supportive community to reward us for doing it.” Driving home the point that their profit orientation is securely underpinned by a broader ethos, farmer 39A added, “We are also committed to offering our employees year-round employment in a toxic-free environment.” We asked many questions about the CSA farms, including survey questions about start year, farm size, area in various land uses, number and kinds of crops and farm animals, general practices in relation to the federal organic standard, electricity generation, farm inputs, water use and land tenure. In the interview, we asked open-ended questions, including “How did you get access to the land you’re currently using for your CSA?” and “What practices do you do that you think are most beneficial to the environment?” We found that most CSAs in our study were relatively new, in existence for 5.7 years on average. CSA farms shared certain core features, especially a commitment to environmental conservation, agroecology and agrobiodiversity. The farms were diverse across a range of characteristics, including farm size, land ownership, organic certification and membership numbers. CSA farmers in our study cultivated a tremendous amount of agrobiodiversity, growing 44 crops and raising three types of livestock on average. Most CSAs studied focused on vegetables, although some were exclusively focused on fruit, one on grain, and a handful on meat and other animal products. About half of the CSAs studied had livestock in 2009. The most common animals were layer chickens, followed by hogs and pigs, goats and kids, broilers, sheep and lambs, and beef cattle. Many CSA farms also had some land devoted to conservation plantings, such as hedgerows where birds and beneficial insects can live. As one farmer noted, “I have a very strong view that agriculture doesn’t need to and shouldn’t decrease the vitality, the biodiversity of the environment . . . [agriculture] can actually enhance it.” In the Central Valley, the CSA farmers’ commitment to agrobiodiversity contrasts with the monocultures that dominate the landscape. Agrobiodiversity is supported by the unique nature of CSAs. Many farmers noted that providing diversity in the box is a key strategy for maintaining CSA members, and that this had translated directly into diversity in crops and varieties on the farm. Regarding her CSA’s first member survey, one farmer noted that members wanted “more fruit and more diversity. We immediately planted fruit trees and told our members, ‘We are planting these fruit trees for you; wait 4 years for some peaches.’”

The financial value of publicly traded firms has also been shown to suffer from food scares.

The risk of contamination, thus, typically includes the cost of withdrawing product from the market and lost revenue from a portion of the harvest. Larger operations, therefore, face greater absolute risk from contamination, though forgone revenues and the associated costs of product recalls are presumed to increase in proportion to sales. This component of risk is likely to be scale-neutral, suggesting that the optimal private investment in food safety is independent of whether a product is produced in a concentrated industry or by many small firms. But large firms are more likely than small operations to have brand capital, which is threatened by food scares traced to their products. The loss of a firm’s good reputation can occur as the consequence of a food scare independent of the magnitude of the scare and the firm’s market share. With the loss of reputation, a brand-name product would lose its price premium. Sales and margins for the firm’s other products, unrelated to the food scare, are also likely to suffer. Large firms with good reputations, therefore, stand to incur losses from lapses in food safety that are disproportionately higher than those of small firms. Losses to a firm from an outbreak of foodborne illness often also include product liability for related illness and death. Judgments can easily reach into the tens of millions of dollars. If food contamination occurs in the field, then the magnitude of the outbreak may be independent of the market share of the responsible producer. However, if contamination were to occur in a processing facility, then the greater quantities handled in the facility and distributed through a wider network could cause illnesses and fatalities to be greater for outbreaks caused by larger firms. Liability is limited to the assets of the firm; thus, it increases as production increases because the assets of larger producers are greater.

Limited liability explains, in part, why firms that operate in hazardous industries with latent damages tend to be smaller than firms in other industries. Divestiture is recognized as a mechanism to limit liability. Small growers, then, face an upper bound on the risk of food contamination that lowers their incentives to invest in food safety. For instance, the Colorado cantaloupe grower whose melons were implicated in 32 deaths filed for bankruptcy in May 2012, listing its net worth at -$400,000. The farm’s owners, therefore, avoided potentially tens of millions of dollars in liability. Because large firms generally have more assets, they face greater exposure from product liability and, therefore, demand greater protection against food contamination. Large firms and small firms alike can insure against product liability, but premiums are positively correlated with coverage limits and with past lapses in food safety, so large firms demand greater ex ante prevention of food contamination even if they are insured. Large firms face risk from selling tainted food that rises more than proportionally with their size and, consequently, have greater demand for food safety. They also face lower average variable costs in supplying food safety. Therefore, the optimal level of investment, which equates the marginal benefit of an incremental increase in food safety with the marginal cost of achieving it, increases at an increasing rate with firm size. Holding the market supply of food constant, then, the provision of food safety declines as the number of producers grows. Empirical evidence from the meat- and poultry-processing sectors indicates that firm size is an important determinant of firm-level investment in food safety. For instance, research suggests that firm size has more impact on the adoption of safety and quality assurance practices than any other firm characteristic. Large firms are also more likely to have adopted a range of food safety technologies. The losses stemming from an outbreak of foodborne illness often are not limited to those firms implicated in food contamination. Indeed, food safety regulators issue broad warnings about food products irrespective of where they originated if the origin of contamination cannot be immediately identified.
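To make the marginal argument above concrete, here is a minimal sketch; the notation (effort s, firm size q, benefit B, cost C) is introduced purely for illustration and is not drawn from the studies the article relies on.

```latex
% Illustrative sketch only; s, q, B and C are assumed notation, not taken from the cited work.
% A firm of size q chooses food-safety effort s to maximize its expected net benefit,
% where B(s,q) is the avoided loss (recalls, liability, brand damage) and C(s) the cost of effort.
\[
  s^{*}(q) \;=\; \arg\max_{s}\;\bigl[\,B(s,q) - C(s)\,\bigr]
  \qquad\Longrightarrow\qquad
  \left.\frac{\partial B}{\partial s}\right|_{s=s^{*}(q)} \;=\; C'\bigl(s^{*}(q)\bigr).
\]
% If avoided losses rise more than proportionally with firm size (brand capital at stake,
% liability bounded only by a larger asset base), the marginal benefit of effort grows with q,
% so the optimal effort s^{*}(q) increases with q at an increasing rate, as argued in the text.
```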

For instance, when in 2006 an outbreak of E. coli was linked to consumption of bagged fresh spinach, the FDA issued a blanket warning to consumers to avoid the product altogether. Fifteen days later, the alert was scaled back to a warning against consumption only of specific brands of spinach packaged in California on specific days, but the industry had already experienced a dramatic decline in sales. The outbreak, blamed for 199 illnesses and three deaths, cost spinach producers $202 million in sales over a 68-week period, a 20% decline. By 2008, demand for California spinach remained below pre-outbreak levels. Because a food scare can induce a negative demand shock, there exists a reputational externality associated with food safety provision, which, consequently, exhibits public good characteristics. The benefits of an individual firm’s food safety investments accrue, in part, to competing firms. Because the firm does not capture the full benefits of its food safety provision, it will underinvest in food safety relative to the efficient level. Industry-wide private provision may be much too low. A firm’s share of the benefit from an investment in food safety, however, is increasing in its market share. The more concentrated the industry, therefore, the closer the equilibrium level of food safety provision is to the efficient level. A monopolist would internalize the full benefit of its investments and, therefore, invest at the efficient level of food safety. As the number of small firms increases, however, the equilibrium market-wide provision of food safety falls increasingly short of the efficient level. Because agronomic and climate conditions affect the optimal handling and processing of crops, they afford some firms and regions a comparative advantage in producing safe food. For instance, the hot and dry conditions during the cantaloupe-growing season in California reduce the crop’s exposure to contaminants that can be transferred to melons in wet fields. Moreover, because the dry conditions keep California cantaloupe relatively clean, most are packed directly in the field, requiring less handling and avoiding exposure to food pathogens in shed packing operations that rinse and dry the produce.
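The free-riding logic in the paragraph above can be sketched in the same illustrative notation; again, the symbols are assumptions for exposition, not taken from the cited analysis.

```latex
% Illustrative sketch; \alpha_i, s, B and C are assumed notation for exposition only.
% Firm i with market share \alpha_i captures only its share of the industry-wide benefit
% B(s_i) of its food-safety investment s_i, so its private optimum and the efficient level differ:
\[
  \alpha_i\,B'(s_i^{*}) \;=\; C'(s_i^{*}),
  \qquad\text{whereas}\qquad
  B'(s^{e}) \;=\; C'(s^{e}).
\]
% Because \alpha_i \le 1, the private optimum s_i^{*} falls short of the efficient level s^{e},
% with equality only for a monopolist (\alpha_i = 1); as market shares shrink, the shortfall widens.
```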

As retailers seek to market local produce, however, comparative advantage in the provision of food safety is forsaken. The Listeria outbreak last summer was linked to unsanitary conditions at the Jensen Farms packing shed. And an FDA investigator identified unsanitary practices at the Chamberlain Farms packing shed that has been associated with this summer’s Salmonella outbreak. Concentration of production in regions with comparative advantage creates agglomeration advantages for the mutual provision and certification of food-safety practices. Because of the potential losses from food scares and the market-wide externalities from food safety investments, grower organizations have adopted voluntary process standards to mitigate risk and avoid shirking among their members. Some growers have also created marketing orders to enforce handling practices and require audits of all operations covered by the agreement. The California Leafy Green Product Handler Marketing Agreement was implemented in 2007, following the 2006 E. coli outbreak linked to spinach from California’s Salinas Valley. The California Cantaloupe Advisory Board responded to last summer’s Listeria outbreak by imposing mandatory certification by state auditors of all growers in the state. Such industry-wide cooperation and self-policing is likely to be lost when production is fragmented and spread across wide geographical areas in the quest for local production.

Irrigated agriculture exerts strong control over global food production and the water cycle while accounting for 85-90 percent of human water consumption. However, little is known about the spatial distribution of agricultural fields, their crop types, or their methods of irrigation. Spatially explicit knowledge of these field attributes is necessary in order to implement more water-efficient agricultural practices and plan for more sustainable economies. Yet most remote sensing-based mapping efforts cover limited political boundaries, on the order of U.S. states or smaller, and usually only cover a snapshot in time. Agriculture maps in developing countries are even more lacking in semantic detail, coverage, and resolution, which is a particularly acute problem given that agricultural expansion in these regions tends to be decentralized and without a guiding management plan for water sustainability. While a fine-scale and up-to-date census of global agriculture does not yet exist, it is feasible to map a subset of fields that are spectrally and visually distinct from surrounding land cover and numerous enough to train a machine learning model, thereby mapping a substantial subset of global agriculture. Center pivot fields fit these criteria: they are relatively uniform in shape, have a narrow range of sizes within particular geographies and, in drylands, contrast strongly with the surrounding sparsely vegetated land.

As in all remote sensing applications for detecting agriculture, image obstruction by clouds, fallow periods and the growth of non-agricultural natural vegetation in the landscape all pose challenges for models that detect center pivots. In more humid environments, center pivots have distinctive growing-season patterns and larger-amplitude changes in vegetation “greenness” compared to other vegetation types, making them readily identifiable with time series of multi-spectral images. However, it is difficult to assemble a comprehensive time series of imagery over many parts of the world, primarily due to cloud occlusion and limited scene availability. Scene availability is particularly low for dates prior to the launch of Sentinel-2, so a method to map center pivots using single-date imagery in many parts of the world is desirable. Such a method is particularly valuable given that center pivots are one of the most ubiquitous irrigation sprinkler systems employed in large-scale commercial agriculture and make up a considerable fraction of the unplanned agricultural expansion in developing countries. There are a variety of methods for mapping objects using remotely sensed imagery, but object-based classification has been shown to outperform per-pixel classifiers in cases where the object of interest contains enough pixels to distinguish objects by textural or shape properties. Because center pivots are large relative to the resolution of public sensors like Landsat, they are amenable to being mapped with object-based classifiers. The traditional approach to object-based classification in remote sensing has been to use manually tuned algorithms to delineate edges, or engineered features that capture texture and shape properties combined with per-pixel machine learning algorithms. However, traditional object-based classifiers in remote sensing have a tendency to overfit and must be manually tuned or supplemented with region-specific post-processing to arrive at a suitable result. Another class of methods, based on convolutional neural networks, has achieved great success on complex image recognition problems in true-color photography. These have not been thoroughly evaluated for mapping agricultural fields as instances across a large climate gradient using Landsat imagery. The goals of this research are twofold. First, I evaluate the performance of convolutional neural network (CNN) based instance segmentation models on Landsat imagery. This experiment determines whether CNN-based models can make use of Landsat’s 30-meter resolution to provide reliable predictions of the locations and extents of center pivot agriculture in various states of development, including cleared, growing, and fallow. I test this approach using the currently most popular and near-state-of-the-art Mask R-CNN model, an approach based on a lineage of regional CNNs that jointly minimize prediction loss on region proposals, object class, refined object bounding boxes, and an object’s instance mask. This model is tested on the 2005 CALMIT Nebraska Center Pivot Dataset, which is divided into geographically independent samples that were partitioned into training, validation and testing sets. Multiple model runs with varying hyperparameters and preprocessing steps were conducted to arrive at the most accurate result on the validation set, and the final, most accurate model was applied to the test set to produce the final reported accuracy.
The model is also evaluated using the full training dataset and a 50% subset of it in order to examine the effect of reduced training data on model accuracy over a large region. Second, I compare these results to the Fully Convolutional Instance-Aware Segmentation model, which was the state of the art in instance segmentation on true-color photography prior to Mask R-CNN.
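As a hedged illustration of how such an instance-segmentation setup might be configured (a sketch under stated assumptions, not the study's actual implementation), the code below adapts torchvision's off-the-shelf Mask R-CNN to a single "center pivot" class. The two-class setup (background plus center pivot), the 3-band composite standing in for preprocessed Landsat chips, and the 512 x 512 tile size are all assumptions made for the example.

```python
# Minimal sketch, assuming torchvision >= 0.13 and a 3-band composite (e.g., NIR/red/green
# scaled to [0, 1]) derived from Landsat; the single "center pivot" class is illustrative.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 2  # background + center pivot


def build_center_pivot_model(num_classes: int = NUM_CLASSES):
    """Return a Mask R-CNN (ResNet-50 FPN backbone) with heads for a single object class."""
    # Start from a COCO-pretrained Mask R-CNN.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box-classification head so it predicts background vs. center pivot.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask head for the same two classes.
    in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels_mask, 256, num_classes)
    return model


if __name__ == "__main__":
    model = build_center_pivot_model()
    model.eval()
    # A random 3-band, 512 x 512 tile stands in for a preprocessed Landsat chip.
    dummy_tile = [torch.rand(3, 512, 512)]
    with torch.no_grad():
        predictions = model(dummy_tile)
    # Each prediction dict holds boxes, labels, scores, and per-instance masks.
    print(predictions[0]["boxes"].shape, predictions[0]["masks"].shape)
```

In practice, Landsat's additional spectral bands would either be reduced to a 3-band composite, as assumed here, or the backbone's first convolution would be modified to accept more channels; training would then proceed on image chips and per-pivot masks derived from a dataset such as the CALMIT one described above.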