Crop yield and quality are measured annually at harvest

Kerala faces an economic situation that encourages farmers to adopt practices that exacerbate climate change and biodiversity loss, erode coffee quality, and undermine farmers' livelihoods. In the biodiversity hotspot of Wayanad, Kerala, the weather is getting hotter and drier, particularly during months when precipitation is vital to the life cycle of the coffee plant. These trends compound the existing threats faced by farmers and biodiversity in the region. The personal experience of farmers in Wayanad was corroborated by quantitative analysis. Researchers studying agricultural systems should engage with local constituents to guide research efforts, particularly in regions where tribes have yet to be disenfranchised and ancient knowledge is still intact. Reforestation, reinstatement of traditional intercropping methods, and regeneration of healthy soils are likely to be the most effective strategies for both climate and economic resilience. Price-stabilizing mechanisms such as a guaranteed floor price should be reinstated, as they were in the past, to protect farmers against multinational vested interests. Policies that incentivize carbon sequestration and reforestation should be implemented to mitigate and adapt to climate change, and to provide better livelihoods for the people who grow crop commodities. Consumers should be engaged and educated on these issues in order to shift market forces towards business practices that support these efforts. Readers of all affiliations should consider the global impacts of their daily choices as consumers, strive towards lifestyles that eliminate frivolous use of resources and promote ethical economies, and participate in the world in a responsible way for the sake of all life on Earth, including present and future generations.

Agricultural productivity in the United States has increased dramatically over the last few decades, but in the face of climate change current management practices might not sustain these levels of production. Some practices that achieve high crop yields and profit — for example, minimal use of crop rotations, high rates of fertilizer and pesticide inputs, minimal carbon inputs and soil disturbance — also result in degradation of the ecosystem processes on which agricultural systems rely. Such degradation can reduce resilience, making these systems more vulnerable to high temperatures and uncertainty in water supply and resulting in lower productivity in times of extreme weather, such as prolonged drought. Climate-smart agriculture aims to increase resilience to the extreme and unpredictable weather patterns induced by climate change by following three principles: developing agricultural cropping systems that remain productive in the face of climate change; reducing greenhouse gas emissions attributable to agriculture to limit further contributions to global warming; and proactively and adaptively managing farms in ways that buffer farm productivity and profitability against the negative effects of climate change. To make informed, evidence-based management decisions under new climate regimes, data are needed from long-term agricultural experiments, few of which exist. As weather and climate patterns change, repeated measurements over decades can reveal slow but incremental changes in crop yield and quality, as well as in soil quality and biodiversity. A long-term agricultural experiment, known as the Century Experiment, is underway at the Russell Ranch Sustainable Agriculture Facility (RRSAF), a unit of the Agricultural Sustainability Institute at UC Davis. RRSAF is a 285-acre research facility and working farm where, under realistic commercial-scale conditions, controlled long-term experiments are testing a variety of crop systems and management practices related to fertility and nutrient management, irrigation and water use, energy use, greenhouse gases and soil health.

The Century Experiment was designed as a 100-year replicated experiment. It was initiated in 1992, when environmental and soil conditions were monitored as a baseline prior to installation in 1993 of 10 cropping systems across 72 one-acre plots; since then, one additional cropping system and restored native grassland reference plots have been introduced. Soil and plant samples are collected regularly and analyzed, and sub-sampled for archive and future analysis. Energy use, inputs and outputs are monitored for all equipment and groundwater pumping throughout the year. The interior of each 1-acre plot in the Century Experiment is maintained consistently for collection of the long-term dataset. Microplots and strips within each plot are available for additional experimental investigations, which have included the impacts of different fertilizers or crop varieties, pest management practices, tillage practices and soil amendments. RRSAF research is also conducted in additional plots that are not part of the Century Experiment to focus on questions that explore practices that may ultimately be adopted within the main experiment. This research includes targeted investigations of soil amendments, irrigation frequency and type, and new crop varieties, and it permits side-by-side comparisons of the effect of management history on the effectiveness of different practices. UC and UC Agriculture and Natural Resources researchers and the RRSAF team collaborate regularly with local growers, as well as with researchers from other institutions throughout the United States and around the world, so that the research addresses local issues and also has broader relevance for agriculture in Mediterranean climates worldwide. Maintaining healthy soils is key to climate-smart agriculture. Properties such as porosity, water retention, drainage capacity, carbon sequestration, organic matter content and biodiversity all help to confer resilience to new pest and disease pressures and to extremes in temperature and water availability.

The California Department of Food and Agriculture's Healthy Soils Initiative, launched in 2016, reflects the state's commitment to improve the quality of managed soils. Encouraging best practices for maintaining healthy soils will increase biodiversity as well as beneficial physical and chemical properties of soil. Improving these properties will, in turn, confer resilience of agro-ecosystems to uncertainties in climate, including unpredictable rainfall patterns, new extremes in temperature and unexpected shifts in the distribution of pests and diseases.

Intensive soil sampling is a key part of the Century Experiment. Plots are sampled at least once every 10 years to as deep as 3 meters in eight depth intervals, and a number of chemical and physical properties are measured. After 20 years, cropping systems, with few exceptions, either maintained or increased total soil carbon content to a depth of 2 meters. Soil carbon increased significantly more in the organic tomato-corn system than in any other crop and management system. Soil infiltration rates and aggregate stability were also greater in the organic than in the conventional tomato-corn system. This research also identified specific soil fractions where early changes in carbon sequestration can be detected, to help predict which practices promote increases or decreases in soil carbon. Changes in soil biology were evident as well: microbial biomass was 40% higher in soils under organic than under conventional tomato-corn rotations, and microbial community composition under organic and conventional management was distinctly different. More in-depth analyses of the soil biota, including sequencing of soil microbial communities and measuring the abundance of mycorrhizal fungi, are underway.

Use of agricultural and food wastes, and cover crops, can reduce dependency on synthetic fertilizers that rely on fossil fuels and generate greenhouse gases in their synthesis. Use of soil organic amendments also helps organic and conventional growers to "close the loop" by reducing the energy and environmental costs of waste disposal and recycling valuable nutrients back into the soil. At RRSAF, composted poultry manure and winter cover crops provide sufficient nitrogen and other nutrients to the organic tomato-corn rotation. Organic tomato yields for 20 years under furrow irrigation were not significantly different from conventional tomato yields. Soil amendments and winter cover crops have led to increased soil carbon sequestration, higher infiltration rates and greater aggregate stability in the organic system compared to the conventional systems; however, these benefits may be of limited interest to growers if yields are substantially reduced. A challenge is how to combine the use of organic inputs with subsurface drip irrigation (SSDI) for organic systems. Organic production relies on solid sources of fertility, for example cover crops and compost, that cannot be delivered in the drip line and that rely on microbial activity to convert them into plant-available forms. In SSDI systems, only a limited area of the bed is wetted and microbial activity may be reduced. Researchers at RRSAF are investigating the feasibility of using different combinations and forms of solid and liquid organic amendments in organic tomato-corn rotations. This is particularly timely as interest in organic farming and products increases. In 2012, a long-term experiment was initiated with the soil amendment biochar, a form of charcoal made from pyrolysis of organic waste materials.
Application of biochar to tomato-corn rotations at 10 tons per hectare resulted in corn yields increasing by approximately 8% in year 2, but no other yield effects were observed over 4 years.
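As a point of reference for how such soil carbon results are typically computed, the sketch below converts depth-interval measurements into a carbon stock; the layer thicknesses, bulk densities and carbon concentrations are invented for illustration and are not RRSAF data.

```python
# Illustrative sketch only (not the RRSAF analysis pipeline): how a soil organic
# carbon (SOC) stock to 2 m depth is typically computed from depth-interval samples.
# All numbers below are made up for demonstration.

# Each tuple: (layer thickness in m, bulk density in g/cm^3, SOC concentration in %)
layers = [
    (0.15, 1.30, 1.20),
    (0.15, 1.35, 0.95),
    (0.30, 1.40, 0.70),
    (0.40, 1.45, 0.45),
    (1.00, 1.50, 0.25),
]

def soc_stock_mg_per_ha(layers):
    """Return the SOC stock summed over depth intervals, in Mg C per hectare.

    Per layer: stock = bulk density (g/cm^3) x thickness (m) x SOC (%) x 100,
    which follows from converting g C per cm^2 of soil column to Mg C per ha.
    """
    return sum(bd * thickness * soc_pct * 100.0 for thickness, bd, soc_pct in layers)

print(f"SOC stock to 2 m: {soc_stock_mg_per_ha(layers):.1f} Mg C/ha")
```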

Biochar had no impact, however, on soil water retention. These results underscore the importance of being able to draw conclusions based on long-term research, and the experiment continues to be monitored. Water quantity and quality are critical concerns for climate-smart agriculture in chronically drought-afflicted California. SSDI may increase crop yields, reduce weed pressure and improve water management in conventionally managed systems, but the trade-offs associated with other impacts of SSDI, such as changes to soil moisture patterns, reduced microbial activity, altered accumulation of salts and reduced groundwater recharge, have received little attention. At RRSAF, researchers are comparing the effects of furrow versus drip irrigation on crop yields, root growth, microbial communities and soil structure. Many changes, such as changes in soil aggregate structure, are not evident immediately and require long-term experiments to understand and resolve. Irrigation scheduling is another focus of water management at RRSAF. Different methods and associated technologies have been compared for estimating irrigation needs, including methods based on evapotranspiration (ET), soil moisture sensors, plant water status and remote-sensing data. In tomatoes, an ET-based method was found to better predict crop water needs than soil sensor–based methods.

Research projects at RRSAF have also addressed other aspects of climate-smart agriculture. These include development of farm equipment that reduces soil disturbance and energy consumption; application of sensor technology, in collaboration with NASA's Jet Propulsion Laboratory, to support data-driven management choices in response to climate variation; and comparison of the efficacy of smart water meters in groundwater wells and irrigation systems. Other investigations have assessed the feasibility of using dairy and food waste bio-digestate to offset consumption of fossil fuel-based fertilizers, tracked changes in wheat cellulose via isotopic methods to monitor plant responses to climate change, and measured lower greenhouse gas emissions under SSDI than under furrow irrigation. New varieties of climate-smart crops, such as perennial wheat, are being evaluated for their yield and resilience in California's Mediterranean climate. In its 20 years, the Century Experiment has demonstrated a unique value in generating climate-smart data — for example, which practices enhance carbon sequestration in California row crop soils, how irrigation can be managed to reduce greenhouse gas emissions, and what sensors help most in reducing water consumption. Future research will address how soil biodiversity, such as the symbiotic mycorrhizal fungi, can be harnessed to reduce water and nutrient inputs and increase crop resilience. Researchers exploring the mechanisms driving short- and long-term responses to global change can guide the development of decision support models that incorporate economic, agronomic, ecological and social trade-offs and provide support for decision-makers — growers, policymakers, researchers — to make management decisions in the face of increasing climate uncertainty.

The American Agricultural Economics Association is composed of various groups ranging from industry to government to academia with widely divergent values and interests. This has led to controversy, sometimes healthy and other times destructive, over the appropriate mode of graduate training and methodologies of research.
These differences affect the direction and vitality of the profession and imply both benefits and costs in pursuing solutions to various problems and issues. Pressures for day-to-day decision making in industry have led to reliance on methodologies that are often characterized as unacceptable for journal publication. Similarly, the timeliness of analyses in governmental policy-making processes sometimes does not lend itself well to publication in professional journals. In contrast, the research sophistication that has emerged in academic circles has reputedly widened the divergences among various groups within the AAEA. In this setting a number of personalized views have been expressed.

Higher-value crops, including produce and cash crops, may also be more sensitive to weather


We first draw on a range of rigorous non-experimental and theoretical work to characterize the credit constraints facing farmers, and then summarize findings from recent randomized evaluations in an effort to distill policy-relevant insights. Agricultural income streams are characterized by large cash inflows once or twice a year that do not align well with the specific times when farmers need access to capital, whether to make agricultural investments or, for example, to pay school fees.

If there is limited access to credit in an area, farmers may not have cash on hand to make agricultural productivity investments unless they are able to save, or can afford the potentially high interest rates of informal lending. However, saving can be difficult for farmers given their limited resources, the variety of demands on their money, and the seasonal cycle of production and prices of their agricultural output. Credit and saving products could help farmers make investments in inputs and other technologies by making cash available when needed. Yet many developing countries, and particularly rural areas, have limited access to formal financial services that could provide this liquidity. Credit constraints are reflected in farmers' self-reports, and are associated with less use of productive inputs like high-yielding varieties. On the supply side, formal financial service providers are often unwilling or unable to serve smallholders. Few products suitable to agricultural livelihoods are available, and despite the wide proliferation of microfinance institutions, most are limited to non-agricultural activities given the substantial challenges inherent in long-cycle agricultural lending. Lenders in these contexts charge high interest rates to help offset their assessment of the risk that loans will not be repaid. These higher interest rates can, perversely, have the effect of attracting only borrowers with no intention of repaying, thus driving interest rates even higher as lenders seek to offset the increased risk, further reducing access to credit for the small-scale farmer. Group-liability microfinance models, though popular in urban markets for reaching low-income borrowers through social guarantees, may be ill-suited to serve smallholders in contexts where the dominant risks driving default, like weather and price shocks, are common to members of the localized group. Group members will be unable to insure other members who cannot pay off a loan if, for example, everyone's harvest is devastated by the same local flood or pest. On the demand side, demand from farmers for formal credit products is low.
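To make the adverse-selection logic above concrete, here is a stylized numeric sketch; the default probabilities, cost of funds and rate thresholds are hypothetical, not estimates from any of the studies discussed.

```python
# Stylized sketch of the adverse-selection spiral described above.
# All default rates and thresholds are hypothetical, purely for illustration.

def break_even_rate(default_rate, cost_of_funds=0.10):
    """Interest rate at which expected repayment covers the lender's cost:
    (1 - default_rate) * (1 + r) = 1 + cost_of_funds."""
    return (1 + cost_of_funds) / (1 - default_rate) - 1

# Hypothetical borrower pool: (share of pool, default probability, max rate accepted)
pool = [
    (0.6, 0.05, 0.35),  # safer farmers, who drop out if rates exceed 35%
    (0.4, 0.50, 2.00),  # risky borrowers who accept almost any rate
]

rate = break_even_rate(sum(share * d for share, d, _ in pool))
for _ in range(5):  # iterate: post a rate, see who stays, re-price the remaining pool
    stayers = [(s, d, cap) for s, d, cap in pool if cap >= rate]
    if not stayers:
        print("market unravels: no borrowers accept the break-even rate")
        break
    total_share = sum(s for s, _, _ in stayers)
    pool_default = sum(s * d for s, d, _ in stayers) / total_share
    rate = break_even_rate(pool_default)
    print(f"pool default rate {pool_default:.2f} -> break-even interest rate {rate:.2%}")
```

In this toy pool, pricing to the average default rate pushes the rate above what safer farmers will accept, they exit, and the lender must re-price to an even riskier remaining pool.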

Even where formal financial products are available, farmers may opt to borrow money from within their social networks or from informal lenders. Preliminary findings from the rollout of Kshetriya Grameen Financial Services, a microfinance portfolio in rural Tamil Nadu, show that 72% of farmers' loans at the beginning of the season are from formal sources, but only 35% are from formal sources by the end of the season. Farmers seem to shift to informal borrowing because quick loan approvals and more flexible loan terms are available. This use of informal borrowing is particularly prevalent among marginalized farmers: 82% of the agricultural loans taken out by marginal farmers were from informal sources, compared to 46% among medium-landholding farmers. Even where formal financial services are available, they are often highly disadvantageous to smallholder farmers. Farmers' credit needs differ from those of the urban micro-credit customers for whom common micro-credit products, with weekly repayments and group liability, are designed. Most loan offers and repayment schedules are poorly timed to fit seasonal production cycles and price fluctuations. Uncertainty or risk aversion can also make farmers hesitant to take on debt. Profits in farming are uncertain, and are often low without complementary investments. Options for collateral to back a loan are limited in these environments, and assets like land may be too fundamental to basic livelihood to risk in order to access a line of credit, or unacceptable to back a loan in insecure contracting environments. Accessing and using financial products can also be even more difficult for farmers without high levels of financial literacy. These credit market inefficiencies result in limited access to liquid capital from formal financial services.

There is policy appetite to leverage new technologies and approaches to expand formal credit and savings mechanisms to rural households, particularly given the proliferation of micro-credit in urban markets. But even where micro-credit has expanded widely among low-income urban clientele, evidence from randomized impact evaluations shows limited ability of micro-credit to transform the average entrepreneur's business productivity and revenues; instead it provides value through increased flexibility in how households "make money, consume, and invest". In the smallholder context, we focus specifically on whether expanding access to formal credit, on the margin of what is already available, shows potential to unlock productive, profitable investments that improve rural livelihoods. Where expanding access to credit shows potential, studies investigate product designs aiming to increase credit access and their benefits specifically for smallholder farmers. Although the experimental evidence suggests that an injection of credit alone is unlikely to be sufficient to transform smallholders' livelihoods, there is some encouraging evidence from approaches with careful product design. Financial service design innovation, particularly to encourage storage or savings, can generate more supportive services for farmers that help them make investments or manage their volatile livelihoods. There is policy appetite to identify whether digital financial services will be able to connect rural borrowers to lending institutions and encourage financial behavior conducive to agricultural investment.
More research is needed on these digital financial service channels and product designs to understand their potential to support farmers' financial portfolios in a manner that protects farmers while encouraging profitable investments. More research is also needed to develop and test credit product designs and delivery channels that fit smallholders' needs with respect to the timing of offers, repayment structures, and collateral agreements. Smallholder farmers have limited buffer stocks to cope with volatile food prices and climate uncertainty, and typically have few formal financial services to protect them from risk. The systemic risks of agricultural production jeopardize smallholder farmers' ability to recoup their investments at harvest. Risk exposure therefore plays an important role in farmers' agricultural investment decisions, including the use of productive inputs like fertilizer.

Rural communities have developed many informal mechanisms to cope with risk. For example, households may buy or sell assets in response to fluctuations in income, and communities may temporarily assist households experiencing a negative shock, like an unexpected medical expense, with the expectation that the household will do the same for others in the future. While these strategies are useful, in many cases they are insufficient. Farmers face many sources of uncertainty beyond weather and environmental factors, including natural disasters, pests, and disease. Price risk and relationships with output markets can jeopardize farmers' ability to recoup their investments at harvest, and such risks can depress productive input use. In addition to the risks inherent in the agricultural production status quo, new technologies often bear specific risks, such as uncertainty about how to use the technology correctly and how to market the output. The classic economic view of poor farmers is that their lack of savings and other resources to fall back on causes them to prefer agricultural approaches with more reliable, but lower, average returns. Households often diversify their sources of income to spread risk. Farmers may see the adoption of new technologies as risky, especially early in the adoption process when proper use and average yields are not well understood. Technologies that carry even a small risk of a loss may not be worth large expected gains if risks cannot be offset. So, while investments exist that could increase profitability, these may also increase the risks of farming.

Behavioral biases also come into play around risky decisions. Risk-averse farmers may prefer a more certain, but possibly lower, expected payoff over an uncertain payoff from unfamiliar technologies. Ambiguity aversion can lead farmers to stick to their status quo, preferring known risks with a more familiar probability of gains and losses over unknown risks, even in cases where these choices may actually be less risky. Both risk and ambiguity aversion are important considerations when looking to encourage take-up of novel risk-mitigating financial products or technologies. Evidence exists that rural households are able to mitigate idiosyncratic risk, but rural residents are relatively unprotected against aggregate risks – weather and crop price shocks – common to smallholder rain-fed agriculture in poorly integrated markets. Given that extreme weather events can destroy a large portion of the harvest across a region, and that such events are increasingly likely given global trends including climate change, there is a need for effective risk-mitigation strategies to protect farmers from these aggregate risks. "Linking credit with insurance has mixed results, suffering from the same demand problems that have beset standalone index insurance. The offering of indemnified loans that interlink an insurance product with credit appears promising, but demand for such loans has been shown to be surprisingly low in the few trials that have tested this mechanism." Linking credit with insurance has even been shown to drive down credit demand. Recent research has found that companies that engage in contract farming can be well-positioned to adjust the timing of insurance and payment arrangements to increase take-up.
Casaburi and Willis find that when a large private company engaged in contract farming in Kenya offered to provide insurance to sugar cane producers by deducting premiums from farmer revenues at harvest time, take-up rates at actuarially fair prices were 71.6%, 67 percentage points higher than for an equivalent contract with standard premium timing.
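The short sketch below illustrates what an actuarially fair premium is and why deducting it at harvest changes only the timing of payment; the probabilities and amounts are hypothetical, not figures from the Casaburi and Willis study.

```python
# Hypothetical numbers, purely to illustrate "actuarially fair" pricing.
payout = 100.0    # indemnity if the insured shock occurs (e.g., per acre of cane)
p_shock = 0.15    # assumed probability of the shock in a season

# A premium is actuarially fair when it equals the expected payout,
# with no loading for administration, risk margin, or profit.
fair_premium = p_shock * payout
print(f"actuarially fair premium: {fair_premium:.2f}")

# Under the harvest-time arrangement described above, this premium is deducted
# from harvest revenue rather than collected up front at planting, when liquidity
# is tightest; the expected cost is the same, only the timing changes.
revenue_if_no_shock = 400.0
expected_net = (1 - p_shock) * revenue_if_no_shock + p_shock * payout - fair_premium
print(f"expected season revenue net of premium: {expected_net:.2f}")
```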

Juvenile salmon did not exhibit a habitat preference among these three choices

When cages were used, salmon were PIT-tagged to track individual fish growth rates within a specific habitat. We consistently found that growth rates for salmon in cages placed in flooded rice fields were higher than growth rates for juvenile Chinook Salmon of comparable life stage in any of the adjacent riverine habitats and in other regions. Growth rates were also comparatively high when free-swimming salmon were introduced into larger-scale, 0.8-ha flooded agricultural fields. These studies were more representative than those using cages of how migrating salmon might use these habitats under natural flow events. For the multiple years in which free-swimming salmon were used, they averaged a mean daily growth rate of 0.98 mm/d. Throughout all study years, caged salmon and free-swimming salmon showed very similar growth rates within the same experimental study units, despite the fact that they likely experienced different micro-habitat conditions. This observation suggests that our salmon growth results were not influenced by cage effects, a well-known issue in enclosure studies. To better understand managed floodplain processes across the region, in 2015 salmon were introduced in fields at a variety of locations in the Central Valley with various vegetative substrates: the Sutter Bypass, three locations on the Yolo Bypass, and Dos Rios Ranch at the confluence of the Tuolumne and San Joaquin rivers. At all of the locations, juvenile Chinook Salmon grew at rates similar to those observed in experiments conducted at Knaggs Ranch in the Yolo Bypass during previous study years.
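As a minimal illustration of how such growth rates are derived from PIT-tag records, the sketch below computes apparent daily growth from tagging and recapture lengths; the fish IDs, lengths and dates are invented and are not data from this study.

```python
# Minimal sketch of an apparent growth rate calculation from PIT-tag records.
# Fish IDs, fork lengths, and dates below are invented for illustration.
from datetime import date

# (tag_id, length at tagging in mm, tagging date, length at recapture in mm, recapture date)
records = [
    ("3D9.1F01", 38.0, date(2016, 2, 1), 64.0, date(2016, 3, 2)),
    ("3D9.1F02", 41.0, date(2016, 2, 1), 70.5, date(2016, 3, 2)),
    ("3D9.1F03", 36.5, date(2016, 2, 1), 61.0, date(2016, 3, 2)),
]

def daily_growth_mm(length_start, date_start, length_end, date_end):
    """Apparent growth rate in mm per day over the period at large."""
    days = (date_end - date_start).days
    return (length_end - length_start) / days

rates = [daily_growth_mm(l0, d0, l1, d1) for _, l0, d0, l1, d1 in records]
print(f"mean daily growth: {sum(rates) / len(rates):.2f} mm/d")
```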

These results suggest that multiple geographical regions and substrate types can support high growth rates of juvenile Chinook Salmon. A key objective of our work in flooded fields was to determine whether substrate type has a measurable influence on growth and survival of juvenile Chinook Salmon. Substrate and vegetation can be an important micro-habitat feature for young Chinook Salmon, so we posited that there could be some difference in salmon performance across treatments. In 2013, we examined this question across different substrate types in two ways: telemetry studies using PIT tags, and replicated fields. Both approaches indicated that juvenile salmon did not have a clear preference for different substrates, and grew at similar rates across substrates. We monitored the movements and habitat use of PIT-tagged, hatchery-origin juvenile Chinook Salmon for approximately 1 month over fallowed, disced, and rice stubble substrates in two circular enclosures to determine whether there was any preferential use. One enclosure included all three substrates, and one contained only disced substrate. Although growth rates were slightly higher in the enclosure that contained all three substrate types, juvenile salmon grew at very high rates, averaging 1.1 mm/d regardless of enclosure. These growth rates were higher than other published growth rates for juvenile Chinook Salmon in the Yolo Bypass, and the region generally. Throughout the 2012–2016 study period, we consistently observed that juvenile Chinook Salmon were attracted to sources of inflow, and that this sometimes became the dominant factor in the distribution of salmon on experimental fields or in enclosures.

In the previously described PIT-tag observations in 2013, salmon in both enclosures positioned themselves nearest the inflow, regardless of surrounding habitat structure. This result is not surprising, given that juvenile stream salmonids commonly adopt and defend flow-oriented positions in stream environments for acquisition of drifting food resources. On flooded agricultural fields, this orientation toward flow may not only be related to feeding behavior but may also serve to keep juvenile salmon in habitat areas that are hydrologically connected and have higher velocities. In fact, analyses of the environmental factors that predict movement of large groups of tagged juvenile Chinook Salmon in the Yolo Bypass found that drainage of flooded areas was a reliable predictor of fish emigration at downstream trapping stations. Although juvenile Chinook Salmon growth rates were consistently high across substrates and study years, we observed highly variable survival of salmon, and available evidence from the studies suggests that this was related, at least in part, to differences among years in drainage rates of the study fields and in habitat availability on the floodplain at large. For example, survival in 2013 ranged from 0.0% to 29.3% in the replicated fields containing different agricultural substrates. This variability was likely unrelated to substrate type; instead, these low survival rates were most likely a result of very dry conditions across the Yolo Bypass and the Central Valley generally, when record drought conditions prevailed during 2012–2015, which affected water quantity and quality. In 2013, our replicated field study likely presented one of the only wetted floodplain areas for miles around, and thus presented a prime feeding opportunity for avian predators such as cormorants, herons, and egrets. However, when the same set of fields was used in 2016, survival was much higher. This was generally a wetter period, avian predation pressure was reduced, and we more efficiently opened the flash boards to facilitate faster drainage and fish emigration.

Note, however, that there were some differences in methodology among years, which may have contributed to survival variability. Taken together, these observations of free-swimming salmon survival suggest that field drainage rate, and overall floodplain habitat availability, are important factors for improving survival in managed agricultural floodplain habitats. Our observations of juvenile salmon orientation to flow, and of the importance of efficient drainage for survival, reinforce observations from natural floodplains that connectivity between perennial channel habitat and seasonal floodplain habitat is an essential attribute of river-floodplain systems. Connectivity of managed floodplain habitats to unmanaged habitats in the river and floodplain is therefore a foundational condition needed to allow volitional migration of juvenile salmon. Further research is needed to identify how to provide sufficient connectivity to maximize rearing and migration opportunities for wild Chinook Salmon.

Natural and managed floodplain habitat is subject to a variety of flow and environmental conditions. Variation in flow and temperature dictates when and where managed agricultural habitats may be accessible and suitable for rearing salmonids, with challenges during both wet and dry years, as well as during warm periods. As noted previously, survival in the replicated fields was variable but generally low. We associate these results with the effects of the extreme drought conditions that prevailed during the core of our study, from 2012 through 2015. Although our field studies were conducted during a time of year when wild salmon have historically used the Yolo Bypass floodplain, the extreme drought made for warm water temperatures and resulted in our study site being one of the few inundated wetland locations in the region. As such, avian predators were attracted to the experimental fields, exacerbating salmon mortality during drainage. We observed high concentrations of cormorants, herons, and egrets on the experimental fields, and this concentration increased over the study period. As many as 51 wading birds and 23 cormorants were noted during a single survey. The small scale of our project could have further exacerbated predation issues. This situation highlights the importance of the weather-dependent, regional context of environmental conditions, which governs how and when managed floodplains can be beneficial rearing habitats for juvenile salmon. Under certain circumstances, flooded fields can generate high salmon growth, but in other scenarios these habitats can provide poor environmental conditions for salmonids and/or become predation hot spots. Even during wetter conditions, we found that management of agricultural floodplain habitat was challenging. For example, we had hoped to test the idea of using rice field infrastructure to extend the duration of Yolo Bypass inundation events in an attempt to approximate the longer-duration events of more natural floodplains; that is, through flood extension. As noted by Takata et al., use of the Yolo Bypass by wild Chinook Salmon is strongly tied to hydrology, and salmon quickly leave river-inundated floodplains once drainage begins. We therefore reasoned that flooded rice fields might provide an opportunity to extend the duration of flooding beyond the typical Yolo Bypass hydrograph. In 2015, a flood extension study was planned but not conducted because drought conditions precluded Sacramento River inflow via Fremont Weir.
To test the flood extension concept in 2016, we needed substantial landowner cooperation and assistance to install draining structures that allowed maintenance of local flooding after high flow events. Even then, we found it difficult to maintain water levels and field integrity during the tests. In our case, we were fortunate to have the cooperation of willing landowners. Partnership with landowners was key, and would be critical with any future efforts to test the concept of flood extension. We also planned a similar test in 2017, but high and long-duration flood flows prevented the study from occurring.

Over the 6 years of study, except perhaps for 2013 when we focused on other study priorities, we never experienced ideal conditions to adequately test the flood extension concept. We were either in a severe drought, during which the Yolo Bypass did not flood from the river, or we experienced severe and sustained flooding, which made it impossible to contain flood waters within study fields. Based on these experiences, studying the concept of flood extension appears to depend on the occurrence of moderate flood events at the right time of year, provided fields are appropriately designed to hold water and allow efficient immigration and emigration of potentially large numbers of juvenile salmon. However, significant outreach and communication with landowners is necessary to maintain floodwaters on their fields during the natural drainage period. Because these events cannot be predicted well ahead of time, these communications—and the availability of robust infrastructure—need to be constantly maintained even outside the flood extension period. As suggested in the previous section, such potential actions would need to be taken in a way that maintains hydrologic connectivity and salmon access, so that salmon can successfully locate potential managed habitats, use them for rearing, and then successfully emigrate from them at the appropriate time. Timing of such potential manipulations is critical because previous sampling has shown that salmon quickly emigrate from the floodplain during large-scale drainage events, leaving relatively low densities of salmon in remaining ponded areas to potentially benefit from flood extension. Although our use of hatchery salmon gave us more experimental options during drought conditions, the use of these fish resulted in additional challenges. Our approach relied on a non-traditional use of hatchery salmon, which required a suite of permits and approvals to execute the project. As noted above, the project coincided with a major drought, so access to hatchery salmon was limited as a result of low salmon population levels. In addition, use of hatchery salmon affected the time period when we could conduct experimental work. We were unable to test salmon response to early season flooding, because the hatchery salmon were too small to receive coded-wire tags (CWTs) as required under our permit conditions. Similarly, the timing of our work was affected by the availability of holding tanks at our partner hatchery, and by the availability of transport staff and vehicles to move salmon to our study site. While we were able to assess many important biological metrics in our study, direct measurement of the population-level effect of floodplain rearing on agricultural habitats proved elusive. A traditional approach to addressing this question involves inserting CWTs into very large numbers of experimental salmon and estimating the population response from expanded CWT recaptures in the ocean fisheries. Recoveries of CWTs in adult salmon from experimental releases made in the Yolo Bypass have generally been very low, making it difficult to achieve the resolution needed to reliably compare survival rates, including with values in the literature. Although CWT recoveries could potentially be improved by increasing the number of tagged salmon, the effort required even to collect a single data point would be substantial and is limited by the availability of surplus hatchery salmon.
A related issue is that it is difficult to design a survival experiment that provides a useful comparison to other management strategies or migration corridors. For example, it is challenging to assess the incremental survival value of flooded agricultural habitat versus adjacent perennial channels. Telemetry can partially address this issue, but current acoustic tagging technology does not allow estimates of survival once smolts emigrate from the estuary, and is also limited in the size of salmon that can be tagged.
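The sketch below illustrates the general logic of expanding CWT recoveries into a survival index, assuming each recovered tag is scaled by the sampling fraction of the fishery stratum in which it was found; all numbers are invented, and this is not the estimator used in our study.

```python
# Illustrative sketch of how coded-wire-tag (CWT) recoveries are commonly
# expanded toward a survival index. Numbers are invented for demonstration.

n_released = 100_000  # CWT-tagged juveniles released in a hypothetical experiment

# Recoveries by fishery/stratum: (observed CWT recoveries, fraction of catch sampled)
recoveries = [
    (12, 0.20),   # e.g., an ocean troll stratum sampled at 20%
    (5, 0.25),
    (3, 0.10),
]

# Each observed tag is assumed to represent 1 / sampling_fraction fish in the catch.
expanded = sum(obs / frac for obs, frac in recoveries)
survival_index = expanded / n_released
print(f"expanded recoveries: {expanded:.0f}")
print(f"estimated survival index to the fishery: {survival_index:.4%}")
```

With recoveries this sparse, a handful of tags more or less changes the index substantially, which is why the text above notes that very large release groups would be needed for reliable comparisons.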

The estimated impact of warming remains robust across all possible combinations

A decline in some commodity markets and a shift in federal crop subsidy programs in the mid-1980s affected different growing regions in different ways. Under these circumstances it would not be surprising if the coefficients on the climate variables varied somewhat over time. In fact, however, they are very robust. Pair-wise Chow tests between the pooled model and the four individual census years in Table 3 reveal that the five climatic variables are not significantly different at the 10 percent level in any of the ten tests. Although we have excluded western counties because their agriculture is dependent on irrigation, what about irrigated areas east of the 100th meridian? To test whether these are affecting our results, we repeat the estimation excluding counties where more than 5% of farmland area is irrigated, and where more than 15% of the harvested cropland is irrigated. We also examine further the influence of population, excluding counties with a population density above 400 people per square mile or a population total above 200,000. The exclusion of the three sets of counties leaves the coefficient estimates virtually unchanged, and the lowest of the three p-values for the test of whether the five climate variables have the same coefficients is 0.85. It is not surprising that excluding irrigated counties east of the 100th meridian has little effect on our regression results, since very few are highly irrigated, and all receive a substantial amount of natural rainfall. Under these circumstances, irrigation is a much smaller supplement to local precipitation, small enough to have little effect on regression results.
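The coefficient-stability logic can be illustrated with a small simulation: fit the hedonic model pooled across census years, then let the climate coefficients vary by year and F-test whether the year-specific deviations are jointly zero. The sketch below uses synthetic data, a reduced set of climate variables and generic names; it is not the authors' code or data.

```python
# Synthetic illustration of a Chow-style test for equality of climate
# coefficients across census years; not the paper's data or specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "year": rng.choice([1982, 1987, 1992, 1997], size=n),
    "dd": rng.normal(2500, 400, n),       # growing-season degree days (stand-in)
    "dd_hot": rng.exponential(30, n),     # harmful degree days above 34C (stand-in)
    "precip": rng.normal(20, 5, n),       # growing-season precipitation (stand-in)
})
# Log farmland value generated with climate coefficients that are stable across years
df["log_value"] = (7 + 8e-4 * df["dd"] - 1.5e-7 * df["dd"] ** 2
                   - 0.004 * df["dd_hot"] + 0.03 * df["precip"]
                   + rng.normal(0, 0.3, n))

restricted = smf.ols(
    "log_value ~ dd + I(dd**2) + dd_hot + precip + C(year)", data=df).fit()
unrestricted = smf.ols(
    "log_value ~ (dd + I(dd**2) + dd_hot + precip) * C(year)", data=df).fit()

# F-test of the null that the climate coefficients are equal in every census year
print(anova_lm(restricted, unrestricted))
```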

By contrast, the p-value for the test of whether the five climate coefficients are the same in counties west of the 100th meridian is on the order of 10⁻¹¹. Including western counties that depend crucially on large-scale irrigation significantly alters the equation. To test whether the time period over which the climate variables are calculated makes any difference, we replicate the analysis using 10- and 50-year averages as alternatives to the 30-year histories on which the estimates reported in Table 3 are based. Neither of the alternatives yields climate coefficients significantly different from the pooled regression results based on the 30-year histories. These tests suggest that our model is stable across census years, data subsets, and climate histories. Nevertheless, one might wonder whether there could be problems with outliers or an incorrect parameterization. We briefly address these concerns. In a test of the robustness of our results to outliers, the analysis is replicated using median regression, where the sum of absolute errors is minimized both in the first-stage derivation of the parameter of spatial correlation and in the second-stage estimation of the coefficients. Again, the climatic variables remain robust and are not significantly different. To test the influence of our covariates on the results we follow the idea of Leamer's extreme bounds analysis and take permutations of our model by including or excluding each of 14 variables, for a total of 16,384 regressions. No sign switches are observed in any of the five climatic variables, again suggesting that our results are very stable. Further, the estimated peak level of degree days is confined to a relatively narrow range. We check sensitivity to the assumed length of the growing season by allowing the season to begin in either March, April, or May and end in either August, September, or October. Finally, in order to examine whether the quadratic specification for degree days in our model is unduly restrictive, we estimate a penalized regression spline for degree days 8°C–32°C and find that the quadratic approximation is consistent with the data.
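The extreme-bounds check can be sketched in a few lines: re-estimate the model under every subset of the control variables and record the range of each climate coefficient, flagging any sign switch. The example below uses synthetic data and only 4 controls (16 regressions) rather than the paper's 14 controls (16,384 regressions).

```python
# Synthetic illustration of a Leamer-style extreme bounds check; not the paper's code.
from itertools import chain, combinations
import numpy as np

rng = np.random.default_rng(1)
n = 1000
climate = rng.normal(size=(n, 2))     # stand-ins for the climate variables
controls = rng.normal(size=(n, 4))    # stand-ins for the control covariates
y = (1.0 + climate @ np.array([0.5, -0.3])
     + controls @ np.array([0.2, 0.0, 0.1, 0.0])
     + rng.normal(scale=0.5, size=n))

def all_subsets(items):
    items = list(items)
    return chain.from_iterable(combinations(items, k) for k in range(len(items) + 1))

lo = np.full(climate.shape[1], np.inf)
hi = np.full(climate.shape[1], -np.inf)
for subset in all_subsets(range(controls.shape[1])):   # 2^4 = 16 specifications here
    X = np.column_stack([np.ones(n), climate, controls[:, list(subset)]])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    climate_betas = beta[1:1 + climate.shape[1]]
    lo = np.minimum(lo, climate_betas)
    hi = np.maximum(hi, climate_betas)

print("bounds on climate coefficients:", list(zip(lo.round(3), hi.round(3))))
print("any sign switch:", any(l < 0.0 < h for l, h in zip(lo, hi)))
```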

Before turning to the determination of the potential impacts of global warming on the agricultural sector of the U.S. economy, as measured by predicted changes in farmland values, we briefly consider whether farmers' expectations have changed over the period covered by our study, and whether this may affect our estimates. In the previous section we regressed farmland values on past climate averages, even though farmland values are determined by forward-looking expectations about future climate. The weather in the U.S. over the past century was viewed as a random drawing from what until recently was thought to be a stationary climate distribution. Our own data are consistent with this: the correlation coefficients between the 30-year average in 1968-1997 and the two previous 30-year averages of the century, i.e., for 1908-1937 and 1938-1967, are 0.998 and 0.996 for the first degree days variable, 0.91 and 0.88 for the second, and 0.93 and 0.93 for the precipitation variable. Accordingly, when we take the error terms from our regression and regress them on past values of the three climate averages, none of the coefficients is statistically significant. The same result holds if we move to the shorter 10-year climate averages. This suggests that past climate variables are not a predictor of farmland values once we condition on current climate. As pointed out above, consecutive census years give comparable estimates of the climate coefficients in our hedonic equation, and none of them are significantly different. Similarly, we check whether the aggregate climate impacts for the four emission scenarios in Table 5 change if we use the 1982 census instead of the pooled model. Even though the standard deviations are fairly narrow, t-tests reveal that none of the eight mean impact estimates are significantly different. We conclude that our results are not affected by any significant change in expectations over the study period.

In the calculations that follow we use the regression coefficients from the semi-log model, which we have shown to be both plausible and robust, along with predictions from a general circulation model, to evaluate the impacts of climate change. The climate model we use for this analysis is the most recent version of the UK Met Office Hadley model, HadCM3, recently prepared for use in the next IPCC Assessment Report. Specifically, we use the model's predicted changes in minimum and maximum average monthly temperatures and precipitation for four standard emissions scenarios identified in the IPCC Special Report on Emissions Scenarios. The chosen scenarios span the range from the slowest increase in greenhouse gas concentrations, which would imply a little less than a doubling of the pre-industrial level by the end of the century, to the fastest, associated with between a tripling and a quadrupling, and include two intermediate scenarios. We use the 1960-1989 climate history as the baseline and calculate average predicted degree days and precipitation for the years 2020-2049 and 2070-2099. The former captures impacts in the near to medium term, while the latter predicts impacts over the longer term, all the way to the end of the century, the usual benchmark in recent analyses of the nature and impacts of climate change. Predicted changes in the climatic variables are given in Table 4. Impacts of these changes on farmland values are presented in Table 5 for both the 2020-2049 and 2070-2099 climate averages under all four emissions scenarios. Not surprisingly, results for the near-term 2020-2049 climate averages are similar under all four scenarios. The relative impact ranges from a 10% to a 25% decline in farmland value, which translates into an area-weighted aggregate impact of -$3.1 billion to -$7.2 billion on an annual basis. Although the aggregate impact is perhaps not dramatic, there are large regional differences. Northern counties, which currently experience cold climates, benefit by as much as 34% from the predicted warming, while others in the hotter southern states face declines in farmland value as high as 69%. Similarly, average relative impacts are comparable across scenarios for the individual degree days variables (degree days 8°C-32°C and degree days above 34°C), but again there are large regional differences. The effect of an increase in the latter variable is always negative, because increases in temperature above 34°C are always harmful, while the effect of the former depends on whether a county currently experiences growing conditions above or below the optimal number of degree days in the 8°C-32°C range. The impact estimates for the longer-term 2070-2099 climate average become much more uncertain as the range of predicted greenhouse gas emission scenarios widens. Predicted emissions over the course of the century are largely driven by assumptions about technological change, population growth, and economic development, and compounding over time leads to increasingly divergent predictions. The distribution of impacts now ranges from an average decline of 27% under the B1 scenario to 69% under the A1FI scenario. At the same time, the sharp regional differences observed already in the near to medium term persist, and indeed increase: northern counties generally benefit, while southern counties generally suffer.

An exception is found in Appalachia, which is characterized by a colder climate than other counties at a similar latitude. Regional differences widen as counties with a very cold climate benefit from continued warming: the maximum positive relative impact now ranges from 29% to 52%. However, the total number of counties with significant gains decreases in most scenarios. For the 2020-2049 time span, 446, 126, 269, and 167 counties, respectively, show statistically significant gains at the 95% level for the scenarios given in Table 4. These numbers change to 244, 202, 4, and 26 for the 2070-2099 time span. By the same token, the number of counties with statistically significant losses increases from 1291, 1748, 1762, and 1873 for the 2020-2049 time span to 1805, 1803, 2234, and 2236 for the 2070-2099 time span. The regional distribution of impacts is shown in Figure 1 for counties with significant gains and losses under the intermediate B2 scenario. The predicted changes are also closer to those in another general circulation model, the DOE/NCAR Parallel Climate Model (PCM), which we use as an alternative because it is considered a low-sensitivity model, as opposed to the mid-sensitivity HadCM3; for a given CO2 scenario the temperature changes are lower under the PCM than under the Hadley model. We replicated the impact analysis using the PCM climate forecasts; the results are reported in an appendix available on request. Not surprisingly, the predicted area-weighted aggregate damages are lower. However, the regional pattern remains the same: of the 73% of counties that have statistically significant declines in farmland values under all four Hadley scenarios by the end of the century, 73% still have significant losses under the PCM A1FI scenario and 0.7% switch to having significant gains. The magnitude of temperature changes simply shifts the border between gainers and losers. Some of the predicted potential losses, in particular for the high emissions scenario in the later period toward the end of the century, are quite large. However, average temperature increases of 7°C would lead to the desertification of large parts of the South. A way of interpreting the results that places them in the context of other studies, and also highlights the role for policy, is that if emissions are fairly stringently controlled over the course of the coming century, as in B1, such that atmospheric concentrations of greenhouse gases remain a little below double the pre-industrial level, predicted losses to agriculture, though not trivial, are within the range of the historically wide cyclical variations in this sector. If on the other hand concentrations climb beyond three times the pre-industrial level, as in A1FI, losses go well beyond this range. This suggests a meaningful role for policy involving energy sources and technologies, since choices among feasible options can make a major difference. A complete impact analysis of climate change on U.S. agriculture would require a separate analysis for counties west of the 100th meridian. Based on the information presently available, we do not believe the impact will be favorable. A recently published study downscales the HadCM3 and PCM predictions to California and finds that, by the end of the century, average winter temperatures in California are projected to rise statewide by about 2.2°C under the B1 scenario and 3.0-4.0°C under the A1FI scenario. Summer temperatures are projected to rise even more sharply, by about 2.2-4.6°C under the B1 scenario and 4.1-8.3°C under the A1FI scenario.
Winter precipitation, which accounts for most of California's water supply, either stays about the same or decreases by 15-30% before the end of the century.
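To illustrate how semi-log coefficients translate GCM-predicted climate changes into farmland-value impacts, the sketch below applies the standard relation: a change dX in the climate variables implies a relative change in value of exp(b'dX) - 1. The coefficients and climate deltas are hypothetical stand-ins, not the estimates from Table 3 or the HadCM3 scenarios.

```python
# Hypothetical illustration of the semi-log impact calculation:
# log(farmland value) = a + X'b, so a change dX implies exp(b'dX) - 1 in relative value.
from math import exp

beta = {                      # hypothetical semi-log coefficients, not the paper's
    "dd_8_32": 6.0e-4,        # degree days 8-32C, linear term
    "dd_8_32_sq": -1.2e-7,    # degree days 8-32C, quadratic term
    "dd_above_34": -5.0e-3,   # harmful degree days above 34C
    "precip": 3.0e-2,         # growing-season precipitation, linear term
    "precip_sq": -6.0e-4,     # precipitation, quadratic term
}

def relative_impact(baseline, future):
    """Relative change in farmland value implied by the semi-log model."""
    d_log = (beta["dd_8_32"] * (future["dd"] - baseline["dd"])
             + beta["dd_8_32_sq"] * (future["dd"] ** 2 - baseline["dd"] ** 2)
             + beta["dd_above_34"] * (future["dd_hot"] - baseline["dd_hot"])
             + beta["precip"] * (future["prec"] - baseline["prec"])
             + beta["precip_sq"] * (future["prec"] ** 2 - baseline["prec"] ** 2))
    return exp(d_log) - 1.0

baseline = {"dd": 2400.0, "dd_hot": 20.0, "prec": 22.0}   # 1960-1989-style averages
future = {"dd": 2700.0, "dd_hot": 60.0, "prec": 20.0}     # GCM-style end-of-century averages
print(f"predicted relative change in farmland value: {relative_impact(baseline, future):+.1%}")
```

The quadratic degree days terms are what make the sign of the impact depend on whether a county sits above or below the optimal number of degree days, as described above.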

Planned land use projections are categorized in terms of planned land use designation

Areas showing "no benefit," or that were not included in statewide calculations, were not included in mapping analyses. These areas of no benefit are likely due to a combination of factors, including soil properties such as high clay or sand content and organic-rich soils. To provide a more accurate representation of agricultural lands in San Diego's unincorporated county, this study combines two agricultural data sources: the Farmland Mapping and Monitoring Program (FMMP) and agriculture listed in SANDAG's current land use. FMMP aims to show the relationship between the quality of soils for agricultural production and the land's use for agricultural, urban, and other purposes. Agricultural land is ranked as "unique", "prime", "grazing lands", and "important" locally and/or statewide, with soil quality as a metric for quality and irrigation status as a metric for status of use. While FMMP helps identify the quality and location of the region's designated farmland, it is important to consider that FMMP may underrepresent the total agricultural land that exists in San Diego. Furthermore, the lands that are not represented and/or classified by FMMP are important features of the region, and are thus important to include in the analysis. Current land use maps from the county's data portal use the categories "extensive" and "intensive" to illustrate current agricultural land. Extensive and intensive lands are combined with FMMP lands to provide a more accurate and complete picture of existing agriculture. Combined, these lands represent the agricultural lands study area used throughout this report. Several conservation programs exist in efforts to preserve San Diego's agricultural lands.
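A hedged sketch of how the two agricultural data sources might be combined into a single study-area layer is shown below; the file paths, column names and class labels are placeholders, and this is not the study's actual GIS workflow.

```python
# Hypothetical sketch of combining FMMP and SANDAG current land use layers into
# one agricultural lands study area with geopandas. Paths, column names, and
# class labels are placeholders, not the study's data.
import pandas as pd
import geopandas as gpd

fmmp = gpd.read_file("fmmp_san_diego.shp")               # hypothetical path
landuse = gpd.read_file("sandag_current_landuse.shp")    # hypothetical path

# Keep FMMP farmland classes and SANDAG intensive/extensive agriculture,
# assuming columns named "class" and "lu_type" (placeholders).
fmmp_ag = fmmp[fmmp["class"].isin(["Prime", "Unique", "Statewide", "Local", "Grazing"])]
sandag_ag = landuse[landuse["lu_type"].isin(["Intensive Agriculture", "Extensive Agriculture"])]

# Project both layers to an equal-area CRS so acreages are meaningful, then merge
# and dissolve overlaps into one study-area footprint.
crs = "EPSG:3310"  # California Albers
study_area = pd.concat([fmmp_ag.to_crs(crs), sandag_ag.to_crs(crs)], ignore_index=True)
study_area = study_area.dissolve()

acres = study_area.geometry.area.sum() / 4046.86   # square meters to acres
print(f"combined agricultural lands study area: {acres:,.0f} acres")
```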

Easements and formal protection of land listed in the California Conservation Easement Database (CCED) and California Protected Areas Database (CPAD) are included in the analyses to understand which areas of the agricultural lands study area are currently protected. Planned land use layers from the SANDAG Regional GIS Data Warehouse are created for the Regional Growth Forecast, outlining projected growth for the San Diego region and allocating it to suitable areas. Non-agricultural land use types are separated into "urban," including commercial, industrial, and/or urban designations, and non-urban, including water and open space/parks. Land use data planned for urban use by 2050 are overlaid with existing agricultural lands to identify agricultural lands threatened by future urban development. The county-averaged difference from the baseline scenario is -0.25 in/yr for CWD and +0.25 in/yr for AET. This indicates that soil management adding 3% SOM can yield a -0.25 in/yr change in CWD from the baseline average of 15 in/yr and a +0.25 in/yr change in AET from the baseline average of 10 in/yr. Additionally, soil moisture shows an average change of 1.69 from the baseline average of 9.1. The entire San Diego region shows a total hydrologic benefit area of 590,582 acres, with 14% of the benefit area within the incorporated county and 86% within the unincorporated county. These results indicate that many areas in San Diego have the potential to experience increased forage production, reduced landscape stress and irrigation demand, and increased hydrologic resilience to climate change. Analyses show that CWD and AET have the most significant changes under a +3% SOM management scenario, and thus the hydrologic benefit index is heavily reflective of these two variables. Of the total agricultural land study area, 238,457 acres of agricultural lands fall within an area of hydrologic benefit.
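The water-balance bookkeeping behind these benefit estimates can be sketched using the identity CWD = PET - AET, so that a scenario raising AET lowers CWD one-for-one when PET is unchanged; the small grids below are invented and far coarser than the BCM's.

```python
# Sketch of the water-balance logic behind the hydrologic benefit flag.
# CWD (climatic water deficit) = PET - AET, so raising AET lowers CWD when PET is fixed.
# The tiny grids below are invented; the BCM operates on much finer grids.
import numpy as np

pet = np.array([[24.0, 26.0], [30.0, 28.0]])            # potential ET, in/yr
aet_baseline = np.array([[10.0, 9.0], [8.5, 11.0]])     # actual ET, baseline soils
aet_plus_som = np.array([[10.3, 9.2], [8.5, 11.4]])     # actual ET, +3% SOM scenario

cwd_baseline = pet - aet_baseline
cwd_plus_som = pet - aet_plus_som

d_aet = aet_plus_som - aet_baseline      # positive where the scenario raises AET
d_cwd = cwd_plus_som - cwd_baseline      # negative where landscape stress falls

benefit = (d_aet > 0) & (d_cwd < 0)      # simple per-cell benefit flag
print("change in AET (in/yr):\n", d_aet)
print("change in CWD (in/yr):\n", d_cwd)
print("cells showing hydrologic benefit:", int(benefit.sum()), "of", benefit.size)
```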

A total of 223,383 acres of FMMP lands coincide with areas of hydrologic benefit from increased SOM, representing 66% of total FMMP lands. Notably, the FMMP “Farmland of Statewide Importance” and “Farmland of Local Importance” classes show a total benefit area of 102,549 acres. Planned city land use projections show further increases in urban extent by 2050, with 45% of non-agricultural lands planned for commercial, industrial, or urban land uses. As these urban areas expand, agricultural lands are increasingly at risk of conversion. Figure 17 illustrates the total area of current agricultural land that could be lost by 2050 based on planned urban development. These losses can be quantified in terms of the potential hydrologic benefit estimated for soil management on these agricultural lands, a benefit that will be lost if the lands are converted to urban use. The lost potential hydrologic benefit associated with soil management on current agricultural lands spans a total area of 144,804 acres, representing a 65% loss of the total potential hydrologic benefit on current agricultural lands. Within the total area of lands at risk of urban development, 13% are listed as protected under CCED and/or CPAD. San Diego’s agricultural community is especially sensitive to the impacts of a changing hydroclimate, making water resources a main area of focus for climate resiliency in the region. The region’s agricultural lands, and the multifaceted benefits they provide, are utilized across society. Thus, as the county continues to expand its efforts in climate mitigation, engaging agricultural partners presents a key opportunity to ensure a resilient region. With model agreement on increased precipitation variability, and resulting changes in water availability, quality, and quantity over time, it is critical that a spectrum of strategies be implemented to buffer the region’s changing water resources. As a state faced with distinct water-resource challenges, California has an increasing need for planning and management decisions made at the local and regional level.

While coarse spatial resolution model projections of temperature and precipitation trends provide much of the available information for land and resource managers and climate assessments, recent modeling developments, such as the BCM, greatly enhance the available data. Data on the response of these hydrologic variables provide highly valuable information for quantifying recharge, runoff, irrigation need, landscape stress, and the spatial distribution of hydrologic processes throughout a watershed. These modeling capabilities can now resolve the spatial distribution of hydrologic processes throughout a watershed at fine scales, producing much-needed high-resolution data and confident estimates that previous modeling lacked. There is a significant opportunity to use these highly detailed and spatially explicit model projections for local resource management decisions and policy development. While the BCM’s advanced modeling capabilities improve the state’s understanding of hydrologic processes, soil management, and sequestration potential across the state’s terrestrial landscapes, it can also be used to inform local, regional, and watershed-specific assessments. The grid-based regional water balance model can provide valuable insight into the role of precipitation in San Diego’s terrestrial ecosystems. Modeling the dynamic relationship between the pathways of precipitated water and landscape features can allow for more precise projections in both historical and future climate-hydrology assessments. Application of the BCM to San Diego provides a quantification of the benefit of carbon farming practices in both the 1981–2011 assessment and future projections. This analysis demonstrates that San Diego’s agricultural lands have the potential to improve hydrologic conditions with strategic management. Increases in WHC can allow more water to stay in the watershed, maintaining base flow during low-flow periods and supporting groundwater infiltration and recharge, while also minimizing the impacts of peak runoff during extreme precipitation events. For the unincorporated county, which contains a significant portion of the potential hydrologic benefit, soil management practices could significantly reduce water-related challenges. Given that 65% of the unincorporated area is considered groundwater-dependent and subject to localized groundwater availability problems, practices that enhance hydrologic processes and contribute to overall water-use efficiency could greatly benefit this region. Most notably, San Diego could experience significant decreases in CWD in addition to increases in AET. As a result, the farming community could see improvements in soil moisture, irrigation costs, landscape stress, and net primary productivity. These potential improvements ultimately enhance resilience to droughts and extreme events. Results further illustrate that, even in scenarios with projected climate change impacts, there are many areas throughout San Diego with potential increases in AET and decreases in CWD if soil management practices are implemented. Projections highlight the ability of these practices to buffer the impacts of future drought conditions. Combining hydrologic benefit estimates with knowledge of existing regional agricultural lands can inform strategic, on-the-ground implementation efforts and direct carbon farming projects.
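As background (this is the standard water-balance accounting used in BCM-style studies, stated here for orientation rather than quoted from this report), the two variables emphasized above are linked through potential evapotranspiration:

\[ \mathrm{CWD} = \mathrm{PET} - \mathrm{AET}, \qquad \mathrm{PPT} \approx \mathrm{AET} + \mathrm{runoff} + \mathrm{recharge} + \Delta S \]

where PET is potential evapotranspiration, PPT is precipitation, and \( \Delta S \) is the change in soil-water storage. Because CWD is the portion of atmospheric demand the landscape cannot meet, a management change that raises water-holding capacity and lets vegetation use more stored water (higher AET) lowers CWD by a corresponding amount, which is consistent with the paired ±0.25 in/yr changes reported above.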

Existing agricultural lands constitute a large portion of the total area of estimated hydrologic benefit with increased SOM. Results can inform prioritization of feasible lands and management practices, in addition to natural resource allocation across the region. Areas that intersect both hydrologic benefit and existing agricultural land can be used to identify where carbon farming efforts could be most attainable and readily employed. Translating potential hydrologic changes into their associated economic and productivity benefits provides a critical link between scientific research and practical on-farm application. Carbon farming practices aim not only to build SOC levels but also to ensure that these pools remain in the soil for many years. Thus, it is important to consider the agricultural areas vulnerable to land conversion when siting carbon farming projects, as these lands may not be able to sequester carbon for the long term if converted. Areas at the intersection of current agricultural land and future urban development can be used to identify where demonstration sites and farming programs may be short-lived. Analyses identify the areas in which carbon farming sites and programs may not be able to yield benefits over time if the land is not designated for production in years to come. With conversion of farmland in recent years and continued plans for urban development, there is a need to invest in programs that sustain the existing value of these lands while also supporting additional growth. Analyses indicate that only 13% of the total threatened agricultural lands are protected under CCED and/or CPAD conservation plans. Threatened areas showing the highest benefit values can guide future preservation strategies to target these priority lands. As the county faces demand for development, how these pressures are reconciled with the importance of agricultural lands is a critical piece of San Diego’s ultimate climate resilience. In light of these trade-offs, results such as these can help tell the story of these agricultural lands and make the case for their preservation. Identifying these opportunities through scientifically based analyses helps portray the potential of carbon farming in the region and articulate the value agricultural lands hold for their sequestration potential and co-benefits. On many fronts, California has adopted the role of a global leader in climate action, implementing an array of proactive technical instruments and political strategies ranging from the local to the federal level. As agriculture is a critical backbone of the state’s booming economy, it is necessary that California put agriculture at the forefront of climate planning. Recognizing the benefits that well-managed soils provide, carbon farming has recently gained attention throughout the state as a promising form of climate adaptation and mitigation. However, for carbon-sequestering practices to be effective, feasible, and widespread, collaboration among interdisciplinary stakeholders is required. It is necessary that policy makers, environmental advocates, scientists, farmers, and economists join forces to spearhead these opportunities. Given the great diversity among the state’s 58 counties, appropriate soil management practices look different for each region. Additionally, regional climate impacts and specific areas of vulnerability differ between regions, and this may translate to unique goals.
Thus, the tools and practices needed to address each specific regional context will vary. Home to the greatest number of small farms and certified organic farms of any county in the U.S., San Diego’s agricultural setting presents unique opportunities and strengths for addressing climate challenges through widespread implementation of sustainable agriculture. Regional application of scientific tools, such as the BCM, can serve as a basis for advising interdisciplinary efforts to address specific county needs, such as water resources. While economic programs and supportive partnerships are essential for promoting the adoption of carbon farming practices, the BCM is a critical component for maximizing opportunities. With advanced science and modeling capabilities, a supportive and proactive network of entities, and the political will and economic incentives in place, opportunities for increasing carbon sequestration in California are more attainable now than ever. The alignment of these factors makes this an opportune time for San Diego to embrace and advance powerful farming strategies.

Angular sensors can also be used in some cases to measure linear velocity

In the marketplace, people generally care more about the sensed quantity and how well the sensor performs for their specific application, while academic researchers and sensor designers are also interested in how the sensor measures the quantity. This section is concerned with the latter. The means by which a sensor makes a measurement is called the transduction mechanism. Transduction is the conversion of one source of energy to another, and all sensors utilize some form of energy transformation to make and communicate their measurements. It should be noted that this is not an exhaustive list of transduction mechanisms. This list covers only a small fraction of the many universal laws describing the conversion of one energy form to another. Rather, it focuses on transduction principles that describe converting one energy type to electrical energy. This is because all electrical sensors must take advantage of at least one of these mechanisms, and often more. What this list does not cover is transduction from any energy type to a type other than electrical. For example, the thermal expansion principle that governs the liquid-in-glass thermometer example at the beginning of this chapter is not described, because that sensor operates on the principle of converting thermal energy to gravitational energy. This list also does not include biological or nuclear signal transduction mechanisms, for the sake of brevity. A potentiometric sensor measures the open-circuit potential across a two-electrode device, such as the one shown in Figure 1.3C. Similar to amperometric sensors, the reference electrode provides ‘electrochemical ground’.

The second electrode is the ion-selective electrode (ISE), which is sensitive to the analyte of interest. The ISE is connected to a voltage sensor alongside the RE. The voltage sensor must be very sensitive and have a high input impedance, allowing only a very small current to pass. There are four possible mechanisms by which ionophores can interact with ions: dissociated ion exchange, charged carrier exchange, neutral carrier exchange, and reactive carrier exchange. Dissociated ion-exchange ionophores operate by classical ion exchange over a phase boundary, in which hydrophilic counter-ions are completely dissociated from the ionophore’s lipophilic sites, preserving electroneutrality while providing sites to which ions in solution can bind. Charged-carrier ionophores bond with oppositely charged ions to make a neutrally charged molecule, and the ions with which they bond are determined by thermodynamics and the Hofmeister principle. Neutral-carrier ionophores are typically macrocyclic, with many organic molecules chained together to form a large ring-like shape whose cavity is close in size to the radius of the primary ion. Finally, reactive-carrier ionophores are mechanistically similar to neutral-carrier ISEs, with the only difference being that reactive carriers are based on ion–ionophore covalent bond formation while neutral carriers are based on reversible ion–ionophore electrostatic interaction. Neutral-carrier and reactive-carrier ion exchange both depend on the mobility, partition coefficients, and equilibrium constants of the ions and carriers in the membrane phase. Some examples of the chemical structures of ionophores are shown in Figure 1.4. Positional sensors are some of the most common in the world, and there are likely several within reach of you as you read this. Smartphones and wearable health devices utilize various sensors to track how many steps you take in a day, the intensity of your workouts, and what route to take home from work. Displacement, velocity, and acceleration can sometimes all be found with a single device, as each quantity is the time-derivative of the prior.
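A minimal numerical sketch of that derivative relationship (synthetic data, not from any particular sensor) shows how a sampled displacement signal can in principle be differentiated twice to recover velocity and acceleration:

```python
# Minimal sketch: deriving velocity and acceleration from a sampled displacement
# signal by finite differences (synthetic sine-wave motion, 1 kHz sample rate).
import numpy as np

fs = 1000.0                              # sampling frequency, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)        # one second of samples
x = 0.01 * np.sin(2 * np.pi * 5 * t)     # displacement in meters, 5 Hz oscillation

v = np.gradient(x, t)                    # velocity = dx/dt
a = np.gradient(v, t)                    # acceleration = dv/dt

print(f"peak displacement: {x.max():.4f} m")
print(f"peak velocity:     {v.max():.4f} m/s")
print(f"peak acceleration: {a.max():.4f} m/s^2")
```

Each numerical differentiation amplifies high-frequency noise, which is one practical reason separate sensors are often preferred for each quantity, as noted next.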

In practice, however, it is common to use separate devices for each of these three measurements because these sensors are relatively cheap, and it is easy to introduce systematic errors if the timing mechanism is off. Measurements of displacement, velocity, and acceleration must be made with respect to some frame of reference. For example, consider a group of people playing billiards in a moving train car: observers on the train platform would assign different velocity vectors to the balls during play than observers on the train. Displacement and angle sensors commonly use potentiometers when the value to be measured is expected to be suitably small. A potentiometer transduces linear or angular displacement into a change in electrical resistance. For a displacement sensor, a conductive wire is wrapped around a non-conductive rod, and a sliding contact is attached to the object whose displacement is being measured. A known voltage is supplied across the wound wire, and as the object moves, the sliding contact shorts out part of the winding. The output voltage across the wire is then measured; it is proportional to the length of wire shorted by the sliding contact, which in turn is proportional to the object’s displacement. The same principles apply when a potentiometer operates in angular displacement mode. There are other methods for measuring displacement, but these methods can also be used to measure velocity, as described in the following section. Velocity measurements utilize a variety of approaches, including radar, laser, and sonic sensor systems. These sensors emit a sound or light wave in a given direction and measure the time it takes to bounce off a surface, return to the sensor, and activate a sensing element that is sensitive to that modulating signal. From this, the device can calculate the distance between the sensor and the reflecting object by multiplying the round-trip lag time by the wave speed and dividing by two. Then, because these devices often operate at a high frequency, the measurement can be repeated, and the change in distance divided by the change in the time between measurements yields a linear velocity. In a car, for example, the speedometer is a linear velocity sensor, but it makes its measurement using an angular velocity sensor on the drive shaft and calculates the linear velocity from the assumed tire size.
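A minimal sketch of the time-of-flight reasoning just described (illustrative numbers only): the distance is half the round-trip lag time multiplied by the wave speed, and velocity is the change in distance between successive readings divided by the time between them.

```python
# Minimal sketch: time-of-flight ranging and velocity estimation for a sonic sensor.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def distance_from_lag(round_trip_lag_s: float) -> float:
    """Distance to the reflecting object: half the round-trip time times wave speed."""
    return 0.5 * round_trip_lag_s * SPEED_OF_SOUND

# Two echoes measured 0.1 s apart (hypothetical readings).
d1 = distance_from_lag(0.0200)   # about 3.43 m
d2 = distance_from_lag(0.0190)   # about 3.26 m
velocity = (d2 - d1) / 0.1       # negative value: the object is approaching

print(f"d1 = {d1:.2f} m, d2 = {d2:.2f} m, velocity = {velocity:.2f} m/s")
```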

Acceleration measurements are most commonly made with accelerometers. Accelerometers are most commonly MEMS devices that are extraordinarily cheap, have a low power requirement, and utilize the capacitance transduction mechanism. The charged electrode of an interdigitated parallel-plate capacitor structure is vibrated at a high mechanical frequency. When acceleration occurs perpendicular to the gap between the two capacitor plates, the resulting force causes the moving electrode of the parallel-plate capacitor to deflect towards the other plate, changing the gap between the two and thereby changing the measured capacitance. The operating principle of most pressure sensors is based on applying the measured pressure to a pressure-sensitive element with a defined surface area. In response, the element is displaced or deformed. Thus, a pressure measurement may be reduced to a measurement of a displacement, or of a force that results from a displacement. Because of this, many pressure sensors are designed using either the capacitive or the piezoresistive transduction mechanism. In each, a deformable membrane is suspended over an opening, such that the pressure on one side of the membrane is controlled while the pressure on the other side is the subject of the measurement. As the pressure on the measurement side changes, the membrane deforms in proportion to the difference in pressure. For a piezoresistive transducer, the membrane is designed to maximize stress at the edges, which modulates the resistance in proportion to the deformation. For a capacitive transducer, the membrane is made of or modified with a conductive material, a surface on the pressure-controlled side of the membrane is also conductive, and the pair acts as a parallel-plate capacitor. The membrane is then designed to maximize deflection at its center, thereby changing the electrode gap and the capacitance. Practically speaking, a sensing element does not function by itself. It is always part of a larger ‘sensor circuit’: a circuit with other electronics, such as signal conditioning devices, micro-controllers, antennas, power electronics, displays, data storage, and more. Sensor circuits fit within the broader subject of systems engineering, which is a vast field in its own right. Figure 1.5 shows one possible sensor circuit configuration. Depending on the design of the circuit and which components are included in it, the signal output by the sensing element might be conditioned to the specifications of a connected micro-controller, saved onto a flash drive, shown on a display, sent to a phone, saved on a remote server, or handled in many other ways. Rather than discuss all possible sensor systems and circuit designs, we have selected the most common – and arguably most essential – components of any given sensor system and summarized them in this section.
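Both capacitive transduction examples above ultimately reduce to a simple geometric relation; the sketch below uses the ideal parallel-plate formula to show how a small change in electrode gap translates into a measurable capacitance change (the dimensions are illustrative, not taken from any specific device).

```python
# Minimal sketch: how diaphragm deflection changes the capacitance of an ideal
# parallel-plate structure, C = eps0 * eps_r * A / d.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2: float, gap_m: float, eps_r: float = 1.0) -> float:
    return EPS0 * eps_r * area_m2 / gap_m

area = 1e-6              # 1 mm^2 electrode area
gap_rest = 2e-6          # 2 um gap at rest
gap_deflected = 1.8e-6   # gap after pressure deflects the membrane by 0.2 um

c_rest = parallel_plate_capacitance(area, gap_rest)
c_deflected = parallel_plate_capacitance(area, gap_deflected)

print(f"C at rest:       {c_rest * 1e12:.3f} pF")
print(f"C under load:    {c_deflected * 1e12:.3f} pF")
print(f"Relative change: {100 * (c_deflected / c_rest - 1):.1f}%")
```

Here a 10% reduction in the gap produces roughly an 11% increase in capacitance, which downstream signal-conditioning electronics convert into a voltage or frequency change.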

In some form or another, all sensor circuits require power to operate. The components of a sensor circuit that generate, attenuate, or store energy to power the other circuit components are called power electronics. These may include batteries, energy harvesters, and various power conditioning devices. A sensor circuit can be made passive, with no energy storage within the circuit. The concept is similar to the passive sensing elements described in Section 1.2: passive sensor circuits use naturally available energy to operate. This can be done if the quantity being measured can also be harnessed to power the device, such as light powering a photovoltaic sensing element. If there is no passive power generation, power electronics are vital to a sensing circuit’s function. This could be as simple as a coin-cell battery connected to the microcontroller’s power I/O pins or as complex as a circuit with multiple energy harvesting and energy storage modalities. A sensor is not a sensor if it does not communicate its measured signal to another person or device. Communication electronics fulfill this function. Communication electronics can be wired or wireless. When communicating data to a person, wired communication electronics could be displays or speakers that convey the data through images or audio. When communicating data to another computer, wired communication electronics come in the form of a ‘bus’, a catch-all term for all the hardware, wires, software, and communication protocols used between devices. At the time of this writing, wireless communications must occur between the sensor circuit and another electronic device, though perhaps in future years technology will develop a way for people to directly interface with wireless data transfer. In the meantime, wireless communications generally incorporate an antenna that radiates the electrical signal as directional RF energy following one of many wireless communication protocols such as WiFi, Bluetooth, or RFID. In science and engineering, ‘error’ does not mean a mistake or blunder. Rather, it is a quantitative measure of the inevitable uncertainty that accompanies all measurements. This means errors are not mistakes; they cannot be eliminated merely by being careful. All sensors have some inherent error in their measurement. The best one can hope for is to ensure that the errors are minimized where possible and to have a reasonable estimate of their magnitude. One of the best ways to assess the reliability of a measurement is to perform it several times and consider the different values obtained. Experience shows that repeated measurements – no matter how carefully they are made – do not yield exactly the same values. Error analysis is the study and evaluation of uncertainty in a measurement. Uncertainties can be classified into two groups: random errors and systematic errors. Figure 1.8 highlights these two types of error using a dartboard example. Systematic errors always push the measured results in a single direction, while random errors are equally likely to push the results in any direction. Consider trying to time an event with a stopwatch: one source of error will be the reaction time of the user starting and stopping the watch. The user may delay more in starting the stopwatch, thereby underestimating the duration of the event, but they are equally likely to delay more in stopping the stopwatch, resulting in an overestimate of the event. This is an example of random uncertainty.
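A minimal sketch of how repeated trials quantify this kind of random uncertainty (the readings are invented for illustration): the spread of the values estimates the random error, and the standard error of the mean shrinks as more trials are averaged.

```python
# Minimal sketch: estimating random uncertainty from repeated measurements
# (stopwatch readings invented for illustration).
import statistics

readings_s = [2.31, 2.28, 2.35, 2.30, 2.33, 2.27, 2.32]  # measured durations, seconds

mean = statistics.mean(readings_s)
stdev = statistics.stdev(readings_s)          # sample standard deviation
sem = stdev / len(readings_s) ** 0.5          # standard error of the mean

print(f"best estimate: {mean:.3f} s +/- {sem:.3f} s (random error)")
# Note: a systematic error (e.g., a stopwatch that runs slow) would shift every
# reading in the same direction and would not show up in this spread.
```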
Consider if the stopwatch consistently runs slow: in this case, all event durations will be underestimated. This is an example of systematic uncertainty. Systematic errors are hard to evaluate and sometimes even difficult to detect. The use of statistics, however, gives a reliable estimate of random error.

In the kingdom of electronics, silicon reigns.

The comparable average for California was 7.4 percent of net farm income

An additional way to indicate the relative independence of California agriculture from direct government payments is to look at the share of net farm income made up of direct government payments. Over the period 1990–2000, direct government payments to U.S. producers were 28.3 percent of net farm income. Figure 20 shows annual ratios over the period 1960–2000. Direct government payments constituted 49 percent of U.S. net farm income in 2000 and 12 percent of California net farm income. Direct government payments increase the fixed cost of agricultural production without any corresponding increase in productivity. In the U.S. heartland, direct government payments account for nearly a quarter of the value of farmland. A recent study of soybean production in Argentina and Brazil concluded that production costs there were 20 to 25 percent lower than in the U.S. heartland even though variable input costs per acre were lower in the U.S. Annual land costs were as much as $80 per acre higher in the U.S. Thus, higher capitalized asset values affect competitiveness. California agriculture is more flexible and more responsive to changes in market conditions, with the managerial ability to meet market-driven domestic and worldwide consumer demands. Part of that flexibility and responsiveness comes from less reliance on direct government payments. Bottom Line: California agriculture is growing more rapidly than U.S. agriculture, is more flexible in selecting production alternatives, is more responsive to market-driven demand signals, and is significantly less vulnerable to federal budget cuts. Every one of these attributes is a plus.

In the 21st Century, the three most important markets for California agriculture will be California, the United States, and higher-income developing countries. All will continue to experience significant population growth. While projected growth in California to 2040 will not be as rapid as in the last 40 years, it will still be substantial—an increase of more than 24 million customers compared to a smaller increase in the preceding 40-year period. For the U.S. market, projected growth is slightly higher in the next 40 years. Most important, U.S. growth represents an increase of an additional 105 million customers, a larger growth increment than in the preceding 40-year period. As noted earlier, global population will increase by around 2.8 billion people, with the majority residing in developing countries. A further plus is that their incomes should also be growing rapidly. Bottom Line: California agriculture is well positioned to take advantage of continued growth in state, national, and global population with parallel growth in incomes. California agriculture has always been vulnerable to its external environment precisely because it is demand-driven. Given that it produces predominantly income-sensitive products, growth, recession, depression, and global economic events all potentially cause significant changes in prices. This fact, coupled with a rising share of California output being perennial crops and livestock, means that the potential for boom-or-bust cycles is probably rising. Thus, the operative question is whether the external environment is becoming more volatile with increased global interdependence along with the rising dependence of all nations on trade. Leaving aside war and massive natural disasters, lowered trade barriers and freely functioning financial markets should increase international market stability compared to a world of protection and controlled financial flows. On the other hand, it is less and less possible for nations to isolate themselves from international economic events.

Bottom Line: While there is no strong evidence that global markets are becoming less stable, it is possible that, as individual countries liberalize, domestic price instability could increase, presenting additional challenges to farmers, growers, and ranchers. California agriculture grew very rapidly over the past half-century. The real value of production increased 70-fold. Agricultural production is now widely diversified into more than 350 commercial plant and animal products, exhibiting a constantly shifting composition and changes in the location of production, all abetted by growing demand for its products and rapid science-based technological change. California agriculture is strongly buffeted by growing urban pressure on the availability of key natural resources—reliable water supplies and productive land. Relentless pressure from environmental and other non-agricultural interests remains, with water quality, chemical contamination, air pollution, wildlife and aquatic habitats, and worker safety in the forefront. Agricultural prices clearly became more volatile after the global instability of the early 1970s. As agriculture became more complex internally, both technically and economically, it also became more interdependent with the rest of the economy and the world. It now purchases virtually all of its variable inputs from the non-agricultural economy and has a massive need for credit—short-term, long-term, and, increasingly, intermediate credit. It has probably become more export-dependent despite the enormous growth of the California consumer market. In sum, it is more dynamic, more complex, more unstable, and more diverse, making California agriculture more vulnerable to external events. At many critical points in California history, California agriculture has been written off, but these periods of difficulty have been interspersed with more numerous periods of explosive growth. The share of perennials, or multiyear-production-cycle products, increased as California agriculture moved away from production of annual field crops and canning vegetables and shifted toward tree nuts, fresh fruits, and wine grapes. The frequency and amplitude of product price cycles seemed to increase. For example, an overabundance of average-quality wine grapes is occurring as recent plantings have come to harvest maturity.

There have been cycles in other products, such as prunes, clingstone peaches, and raisin grapes. The first years of the 21st Century are only the second time in history that low prices have occurred across the entire product spectrum; the first was during the long-lasting Great Depression. But already in 2003 and at the beginning of 2004 there are signs of improvement in some prices, promising an improved economy. The idea of creating a new generation of agricultural system data, models, and knowledge products is motivated by the convergence of several powerful forces. First, there is an emerging consensus that a sustainable and more productive agriculture is needed that can meet the local, regional, and global food security challenges of the 21st century. This consensus implies there would be value in new and improved tools that can be used to assess the sustainability of current and prospective systems, design more sustainable systems, and manage systems sustainably. These distinct but inter-related challenges in turn create a demand for advances in analytical capabilities and data. Second, there is a large and growing foundation of knowledge about the processes driving agricultural systems on which to build a new generation of models. Third, rapid advances in data acquisition and management, modeling, computational power, and information technology provide the opportunity to harness this knowledge in new and powerful ways to achieve more productive and sustainable agricultural systems. Our vision for the new generation of agricultural systems models is to accelerate progress towards the goal of meeting global food security challenges sustainably. But to be a useful part of this process of agricultural innovation, our assessment is that the community of agricultural system modelers cannot continue with business as usual. In this paper and the companion paper on information technology and data systems by Janssen et al., we employ the Use Cases presented in Antle et al., and our collective experience with agricultural systems, data, and modeling, to describe the features that we think the new generation of models, data, and knowledge products need in order to fulfill this vision. A key innovation of the new generation of models that we foresee is their linkage to a suite of knowledge products – which could take the form of new, user-friendly analytical tools and mobile technology “apps” – that would enable the use of the models and their outputs by a much more diverse set of stakeholders than is now possible. Because this new generation of agricultural models would represent a major departure from the current generation, we call these new models and knowledge products “second generation” or NextGen. We organize this paper as follows. First, we discuss new approaches that could be used to advance model development beyond the ways that first-generation models were developed, in particular the idea of creating a more collaborative “pre-competitive space” for model development and improvement, as well as a “competitive space” for knowledge product development. Then we describe some of the potential advances that we envisage for the components of NextGen models and their integration. We also discuss possible advances in model evaluation and strategies for model improvement, an important part of the approach.
Finally, we discuss how these ideas can be moved from concept to implementation. A first step towards realizing the potential of agricultural systems models is to recognize that most work has been carried out by scientists in research or academic institutions, and has thus been motivated by research and academic considerations more than by user needs.

A major challenge for the development of a new generation of models designed to address user needs, therefore, is to turn the model development process “on its head” by starting with user needs and working back to the models and data needed to quantify relevant model outputs. The NextGen Use Cases presented in Antle et al. show that most users need whole-farm models; particularly for smallholder farms in the developing world, models are needed that take into account interactions among multiple crops and often livestock. Yet many agricultural systems models represent only single crops and have limited capability to simulate inter-cropping or crop-livestock interactions. Why? One explanation is that many models were developed in the more industrialized parts of the world where major commodity crops are produced. Another explanation is that models of single crops are easier to create, require fewer computational resources, and are driven by a smaller set of data than models of crop rotations, inter-crops, or crop-livestock systems. Additionally, researchers respond to the incentives of scientific institutions that reward advances in science, and to funding sources that are more likely to support disciplinary science. Component processes within single crops, or single economic outcomes, are more easily studied in a laboratory or institutional setting, and may result in more publishable findings. Producing useful decision tools for farmers or policy decision-makers is at best a secondary consideration in many academic settings. The need for more integrated farming-system models has been recognized by many researchers for several decades, for example, to carry out analysis of the trade-offs encountered in attempts to improve the sustainability of agricultural systems. For example, Antle and Capalbo and Stoorvogel et al. proposed methods for linking econometrically estimated economic simulation models with biophysical crop simulation and environmental process models. Giller et al. describe a complex biophysical farming system modeling approach, and van Wijk et al. review the large number of studies that have coupled biophysical and economic models of various types for farm-level or landscape-scale analysis. More recent work by AgMIP has developed software tools to enable landscape-scale implementation of crop and livestock simulation models so that they can be linked to farm survey data and economic models. While these examples show that progress has been made towards more comprehensive, integrative approaches to agricultural system modeling, these approaches are more complex and have high data demands, raising further challenges for both model developers and potential users. As we discuss below, methods such as modularization may make it possible to increase model complexity while keeping models relatively easy to understand and use. Other methods, such as matching the degree of model complexity to temporal and spatial scales, can also be used. Section 3.8 further discusses issues of model complexity and scale. While it is clear that model development needs to be better linked to user needs, it is also important to recognize that science informs stakeholders about what may be important and possible. Who imagined even a few years ago that agricultural decision support tools would use data collected by unmanned aerial vehicles linked to agricultural systems simulation models?
So while model and data development need to be driven by user-defined needs, they must also be forward-looking, using the best science and the imaginations of creative scientists. As Jones et al. describe in their paper on the historical development of agricultural systems models, existing models evolved from academic agronomic research.

Consolidation reduced the number of input suppliers available to growers

Events were gradually righting a badly tossed sector when the 1987–1992 drought appeared on the horizon, ultimately affecting all of California agriculture. This was yet another severe shock to the system. In particular, the west side of the San Joaquin Valley was pummeled by a nexus of water issues, e.g., reduced water supplies, inadequate off-farm drainage, and rising water tables, extending through the decade of the 1990s. Selenium toxicity in the Kesterson Wildlife Refuge was a harbinger of future environmental challenges.

The 1990s

Two additional early-decade shocks would impact agriculture in the 1990s. A four-year recession softened domestic demand and affected capital markets. The CVPIA in 1992 abruptly changed the political economy of federal water availability, curtailing water deliveries south of the Delta. Farms on the west side of the San Joaquin Valley were hit financially as water became at once more expensive and scarcer because of both drought and regulatory change or, as some saw it, because of a combination of natural and regulatory droughts. Financially leveraged farms again faced foreclosure pressure. Lending institutions this time were quicker to secure and dispose of foreclosed assets. Quick disposal depressed the land market and the value of collateral assets, to the chagrin of marginally solvent producers and firms. Weakening of the Japanese and other Asian economies again affected U.S. commodity exports. However, California’s specialty-crop exports were affected to a lesser extent, and nut crops and grapes in particular enjoyed more favorable markets and prices. Large investments again appeared in perennial crops, from investors and from growers seeking to broaden production portfolios to include higher-grossing crops.

Ample farmland was still available for these higher and better uses relative to production of field crops, which was still plagued by the low prices of the early 1990s. By mid-decade, export markets were again strong, including those for basic field-crop commodities. In the main, prices strengthened for the products of California’s agricultural sector through 1996–97, with variations from commodity to commodity. Low interest rates continued to feed investments in permanent plantings. Producers of basic commodities enjoyed high export demand when new federal farm legislation was put in place in 1996. In the first year of the farm program, producers enjoyed healthy market prices and decoupled farm-program payments, but shortly thereafter economic fortunes again reversed. Within a couple of years, world economies again softened and farm prices were low across a wide spectrum of both basic and specialty commodities—and the domestic economy also faltered. An ex post doubling of federal program payments sought to shore up basic commodity producers. Acreage remained in production despite low prices. The ups and downs of the 1990s were also marked by significant structural change. Brand-name fruit and vegetable processors closed processing facilities. Other processing outlets disappeared. The bankruptcy of Tri Valley Growers in 1999 had a disastrous effect on producers already at the margin. Increased buyer concentration in fresh produce squeezed out many grower/shippers, placing more reliance on large firms capable of supplying customer needs on a year-round basis. With widespread and rapid changes in the competitive environment, product prices fell while production costs continued to rise, further squeezing production agriculture.

Contractual arrangements became increasingly critical to preserving shrinking margins. Some growers countered by integrating processing and marketing activities. Even though farm financial advisors had been more temperate regarding increasing debt loads, many growers and agribusiness firms experienced difficulty in continuing their farming operations. At the century’s end, California’s agricultural producers were once again seeking to stay upright while searching to re-right their economic fortunes. The industry had witnessed significant change over the preceding three decades. The sector was more diverse in production and less dependent on field-crop and livestock production than in 1970. Contractual marketing arrangements for agricultural production were now the norm in this new, higher-valued production system, changing the marketing channels and risk exposures of producers and contracting firms. Field crops, livestock, and livestock products contributed less than 20 percent to agricultural markets in 2000, whereas specialty crops now dominated—28 percent fruit and nut crops, 26 percent vegetables, and 11 percent nursery and greenhouse products. Dairy products alone contributed nearly 15 percent of the value of agricultural products sold in 2000. The sector was also more export-oriented. Despite a drop of 5 percent below peak levels in 1997, the value of California agricultural exports amounted to $6.6 billion in 2000. Agricultural commodities with ratios of farm quantity exported to farm quantity produced of 20 percent or more in 2000 included cotton lint; almonds; walnuts; prunes; dry beans; grapefruit; plums and rice; apples, apricots, and onions; oranges; broccoli and fresh tomatoes; dates and pistachios; asparagus and cherries; and cauliflower.

Competitive pressures increased for water resources throughout the state and for land in some areas, particularly in the northern San Joaquin and southern Sacramento Valleys. Environmental issues continued to command attention, with more emphasis on in-stream water use, dairy-waste management, new chemical standards, water quality, and particulate matter concerns. With ample field-crop land and increased permanent plantings, values of open agricultural land for agricultural uses have remained relatively stable over the past decade. The major exceptions include varietal wine-grape vineyards in premium coastal areas, irrigated vegetable land on the south and central coast, and dependably watered, developable land in the San Joaquin Valley. The two dominant underlying forces affecting regional shifts in the location of agricultural production have been population growth and water-supply conditions. Rapid postwar and continuing urban and suburban population expansion forced relocation to interior valleys, first from the Los Angeles basin and later from the Central Coast and San Francisco Bay Area. A fuller appreciation of the changes of the recent half century is the immediate precursor to an examination of the state of California agriculture as the industry enters the 21st Century. We first review the changing character of California agriculture from 1950 to 2000, focusing on major shifts in the structure of production, commodity composition, and geographic distribution. We then document the increasing importance of exports, followed by statistical information and financial indicators comparing California and aggregate national agriculture with respect to farm numbers, land in farms, farm real estate values, farm income, and selected financial ratios. Without doubt, the most significant structural changes of the half century were those that followed the addition of the two major water projects that came online in this period. Together, the federal CVP and the California SWP brought more than three million additional acres under irrigation. As shown in Figure 2, irrigated acreage grew from 4.3 million acres prior to WWII to 6.4 million at the start of the 1950s. Expansion, mostly from CVP supplies, increased irrigated acreage to 7.4 million in 1959, and subsequent increases, mostly from SWP deliveries, yielded 8.5 million acres in 1978. The most recent census indicated that there were 8.7 million acres of irrigated land in 1997. Expansion in irrigated production capacity plus rapid increases in productivity allowed California agriculture to experience very rapid growth in output at good prices until the early 1990s. Demand growth fueled by rising incomes and population growth kept California agriculture on a steep growth path. In constant 1996 dollars, the market value of agricultural products sold grew from $400 million in 1950 to nearly $27 billion in 1997. The upward trend in the real value of agricultural production was tempered by short periods of decline in the mid-1970s and early 1980s and by economic recessions in the early 1990s and again at the end of that decade. However, within that overall picture of growth, there were significant changes in the composition of output, the importance of particular commodities, and the geographic location of production. The shares of the value of agricultural product sales coming from plant and animal products changed persistently over the past 50 years.

As shown in Figure 4, crops made up 61 percent of sales in 1950 while livestock accounted for 39 percent. The shares remained relatively constant throughout the 1950s and 1960s, with expansions in both crop and livestock production. However, the livestock share then fell steadily, so that in 2000 three-quarters of the value of California production came from plant production and only one-quarter from livestock. The crop share in California was much higher than the U.S. average of roughly 50/50 and significantly different from European agriculture, where animal products generated approximately two-thirds of sales. Additionally, these broad trends hide significant changes that occurred within both the plant and livestock production categories. Figure 5 shows the shares of crop production made up by major crop categories: field crops; fruits, nuts, and berries; vegetables and melons; and nursery and greenhouse products. Over 50 years, the field-crop share of total crop production fell steadily, dropping from 33 percent of value in 1950 to less than 10 percent in 2000. The share of intensive agricultural crops rose from 63 percent in 1950 to 77 percent of total crop products by 2000. Growth was most pronounced in nursery products. These trends no doubt reflect the shift in the preferences of consumers with rising incomes toward fresh products, and the phenomenal growth in urban populations. Shares also shifted significantly within the livestock sector. In 1950 poultry and poultry products made up about 23 percent of the value of production, dairy products constituted 26 percent, and meat animals represented 42 percent. Over the 50-year period, poultry’s share declined gradually to 16 percent. Cattle and calves increased very rapidly in the 1950s and 1960s as the large-scale feedlot boom hit California, rising to 49 percent of livestock value in 1970. Thereafter, the share of the beef industry steadily declined, approaching 20 percent of value in 2000. The value of dairy production approached 60 percent of total livestock production in 2000, doubling in importance from shares of 30 percent or less in the period 1950 to 1970. We attempt to explain some of the causes of these shifts in industry composition in the sections that follow. At the aggregate level, California agriculture seems to be fairly stable and growing rapidly; but beneath the surface it is a caldron of perpetual change. Here, we look briefly at what commodities are important, followed in the next section by a discussion of where they are produced. Table 1 attempts to capture the dynamics of an ever-changing commodity composition. Part A presents the top ten commodities in 1950 and what happened to their rankings over the next 50 years, and Part B presents the top ten commodities in 2000 and how their rankings changed over the past 50 years. Several trends stand out in Part B. Dairy has clearly supplanted beef as the number-one commodity and now holds a commanding lead over the second-ranked commodity, grapes. Cattle and calves, ranked first from 1950 to 1970, were ranked fifth in 2000. Field crops declined in relative importance within the top ten. In 1950 four of the top ten were field crops—cotton, hay, barley, and potatoes. In 2000 only two field crops remained in the top ten—cotton and hay. Nursery products and flowers and foliage have risen from relative insignificance to number three and number seven, respectively.
Overall, products sensitive to rising incomes have grown in importance—grapes, nursery products, flowers, lettuce, strawberries, and almonds make up six of the top ten. The share of the total value of production accounted for by the top ten commodities has fallen, reflecting a much wider spectrum of high-valued commodities produced on California farms and ranches. The top ten commodities accounted for 66 percent of the total value of agricultural production in 1950 but only 61 percent in 2000. The majority of agricultural production takes place in just four of the eight agricultural production regions of California: Region 4, Region 5, Region 6, and Region 8. Major shifts of production among regions reflect progressively increasing demands for California products for both domestic and export markets, withdrawal of land from agricultural production because of population growth in temperate coastal areas, growth in higher-valued perennial and vegetable production displacing field-crop acreage in interior areas, and shifts within the Central Valley induced by surface-water deliveries.

Social and demographic characteristics for exam takers are not available

Some ordinances also provide procedures for handling formal complaints by neighbors. Most California counties and a number of cities now have right-to-farm ordinances, a popularity seemingly driven by the belief on the part of local officials and others that this is an easy way to provide farmland protection while avoiding hard political choices. Because they are not regulatory tools and rely primarily on the dissemination of information, however, the ordinances lack teeth and legal effect. It is uncertain to what extent they have reduced conflicts in edge areas. But the ordinances do serve a useful purpose, according to many agricultural leaders and county officials, in educating residents and asserting, as a policy matter, the value of agriculture in particular communities. More generally, conflicts between farmers and urban neighbors over farm activities can be addressed by a variety of techniques for dealing with community-level disputes. Practitioners in this field distinguish between conflict resolution and conflict prevention. Resolution processes often involve a form of third-party mediation, in which facilitators bring both sides together, factual information on the source and elements of the dispute is developed, alternatives are deliberated, and an effort is made to reach an agreement among the parties as to actions to be taken, such as changes in farm management. The state of New York has formalized such processes, with a Community Dispute Resolution Center in each county with resources for dealing with edge and other local conflicts.

Preventing edge conflicts typically involves less formal methods, with the emphasis on encouraging farm operators to maintain open lines of communication with their urban neighbors. The assumption is that friendly relations can head off serious future disputes over specific matters. One piece of advice to farmers in a New York state guidebook on reducing edge conflicts is to notify neighbors in advance of the timing of and need for particular practices that may generate negative impacts. The guidebook goes further, suggesting 15 strategies that farmers can use to foster good neighbor relations, including farm tours, gifts of farm produce, and setting aside an acre or two for wildlife. Given the substantial returns to higher education in this setting, this is a very high-stakes exam. Every year, approximately 9 million students in China take the exam to compete for admission to approximately 2,300 colleges and universities. The NCEE has two primary tracks: the arts track and the science track. All students are tested on three compulsory subjects regardless of track: Chinese, mathematics, and English, each worth 150 points. Students in the arts track take an additional combined test covering history, politics, and geography worth 300 points, while students in the science track take an additional combined test covering physics, chemistry, and biology worth 300 points. Thus, regardless of track, the maximum achievable score for each student is 750 points. In our focal provinces, the Chinese and math exams are scheduled for 9–11:30am and 3–5pm on June 7th, and the English and track exams are scheduled for 9–11:30am and 3–5pm on June 8th. Since provinces have some discretion in the design of their tests, exam difficulty can vary by track, province, and year. Our core analysis deploys province-by-year-by-track fixed effects to account for this possibility. The NCEE tests are graded by professionals in hotels in each of the respective provincial capitals one to two weeks after the exams are completed. Since this grading occurs in locations that differ from those of the test takers in terms of both space and time, we are confident that the effect we estimate on NCEE scores is not the result of any potential impacts on graders. The NCEE data were obtained from the China Institute for Educational Finance Research at Peking University. This dataset provides a unique identifier and the total test score for the universe of students enrolled in a Chinese institution of higher education during our study period.

The dataset also reports the subject specialization for each student, allowing us to explore heterogeneity across the science and arts tracks. Importantly, the student ID contains a six-digit code for county of residence, which allows us to match students to the county administrative centers. Testing facilities are located in local schools, which are universally very close to the county administrative center. We therefore use the county administrative center to approximate the testing facilities; information on which testing facility a student is assigned to is unavailable. Our core analytic sample includes observations from approximately 1.3 million students. We supplement this dataset with data on the cutoff scores that determine admission eligibility to the elite universities in order to separately examine the impacts at the upper end of the performance distribution. Data on daily agricultural fires are collected from two satellites, TERRA and AQUA, which rely upon Moderate Resolution Imaging Spectroradiometer sensors to infer ground-level fire activity. The satellites pass over China four times a day and report all fire points detected at 1-km resolution. The fires are detected based on thermal anomalies, surface reflectance, and land use. Since the size of a fire cannot reliably be inferred from satellite data, we treat fires in adjacent pixels as distinct fires. We exploit data on fire radiative power, a measure of fire intensity, to at least partially probe the importance of this assumption. A fire is linked to NCEE performance within a county if it occurs within 50 km of the county administrative center during the two-day exam period in each year. Alternative distances are explored as part of our robustness analyses. Since proximity to a fire is likely correlated with the economic benefits as well as the environmental harms from fires, we eschew distance-weighting strategies on fires in our core analysis. These are, nonetheless, explored in our robustness checks. Meteorological data are important for two reasons. First, as detailed in the next section, we exploit detailed data on wind direction to contrast the impacts on those upwind and downwind of a given fire. Second, weather may also confound the interpretation of our results since the incidence of agricultural fires may be correlated with meteorological conditions. Our weather data are obtained from the National Oceanic and Atmospheric Administration of the United States.
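A minimal sketch of the fire-to-county linkage described above (the function and variable names are ours, not the authors'): each detected fire point is attached to a county if it lies within 50 km of that county's administrative center, using the haversine great-circle distance.

```python
# Minimal sketch: linking satellite fire detections to counties within 50 km of the
# county administrative center (coordinates and county names are illustrative).
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

county_centers = {"county_A": (34.75, 113.65), "county_B": (36.06, 114.35)}
fires = [(34.90, 113.80), (35.50, 116.00)]  # detected fire points (lat, lon)

fires_per_county = {
    name: sum(haversine_km(clat, clon, flat, flon) <= 50.0 for flat, flon in fires)
    for name, (clat, clon) in county_centers.items()
}
print(fires_per_county)  # count of exam-period fires within 50 km of each center
```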

We collect daily average weather data on temperature, precipitation, dew point, wind speed, wind direction, and atmospheric pressure from 44 local weather stations during our sample period. Daily average wind direction is computed from the hourly wind direction and wind speed through vector decomposition.8 Given the sensitivity of wind direction to topography and other highly localized factors, we assign wind to test locations based on data from the station closest to the county administrative center, and drop counties with no wind stations within 50 km.9 We extract other weather data during the exam period and then convert from station to county using the inverse-distance weighting (IDW) method. The basic algorithm calculates weather for a given site as a weighted average of all station observations within a 50-km radius of the county center, where the weights are the inverse of the distance between the weather station and the county administrative center (an illustrative sketch of this interpolation is given below).

While the detrimental impacts of agricultural fires on air quality have been documented in the environmental science literature, data availability does not allow us to make this link explicitly in our setting. Ground-level pollution monitoring data at the station-day level are not available in China prior to 2011, and well-documented accounts of data manipulation of the Air Pollution Index and PM10 in China apply to the period prior to 2013.10 In addition, satellite data are not well suited for ground-level measurement at the fine temporal and spatial scales required for our analyses, especially during burning seasons with smoke plumes. Nonetheless, we provide a first-stage estimation, of sorts, by estimating the relationship between air pollution and agricultural fires using data from a more recent period: 2013–2016. Since NCEE data are not available for this period, we view this analysis as one designed to shed light on the mechanisms through which agricultural fires might impact cognitive performance. Daily pollution data are obtained from the China National Environmental Monitoring Center, which is affiliated with the Ministry of Environmental Protection of China. Monitoring stations report data for the six major air pollutants – particulate matter less than 10 microns in diameter (PM10), particulate matter less than 2.5 microns in diameter (PM2.5), sulfur dioxide, nitrogen dioxide, ozone, and carbon monoxide – that are used to construct the daily Air Quality Index in China. For each pollutant, we construct a two-day average concentration level, corresponding to the length of the exam period. Fires that took place more than 50 km from a county center are excluded from this analysis. We select all pollution monitoring stations within 50 km of a county administrative center and calculate the pollution level at the center using the IDW method. Our analysis relies on data from 212 distinct pollution monitors, located an average distance of 24.5 km from the county administrative centers.

In this section, we explore the heterogeneity of our core results along two dimensions, as shown in Table 3. The first column simply reproduces the results from our preferred specification for our primary results.
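The interpolation steps described above can be illustrated with the following minimal Python sketch. It is not the authors' code: the station arrays are hypothetical placeholders, wind direction is assumed to follow the standard meteorological convention (degrees from which the wind blows), and the 50-km cutoff mirrors the radius used in the text.

    import numpy as np

    def daily_mean_wind_direction(dir_deg, speed):
        """Vector-average hourly wind observations into one daily direction (degrees)."""
        theta = np.radians(np.asarray(dir_deg, dtype=float))
        spd = np.asarray(speed, dtype=float)
        u = -spd * np.sin(theta)   # eastward wind component
        v = -spd * np.cos(theta)   # northward wind component
        # Convert the mean vector back into the compass direction the wind blows from.
        return float(np.degrees(np.arctan2(-u.mean(), -v.mean())) % 360.0)

    def idw(station_values, station_distances_km, max_km=50.0):
        """Inverse-distance-weighted average of stations within max_km of the county center."""
        vals = np.asarray(station_values, dtype=float)
        dist = np.asarray(station_distances_km, dtype=float)
        mask = dist <= max_km
        weights = 1.0 / dist[mask]
        return float(np.sum(weights * vals[mask]) / np.sum(weights))

For example, idw([30.1, 28.4], [12.0, 41.0]) returns a county-level value that weights the station 12 km away roughly 3.4 times as heavily as the station 41 km away.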

The next two columns of Table 3 explore heterogeneity along another dimension: the subject track. The impacts are negative and highly statistically significant for those in the science track, while only marginally significant for those in the arts track. This may reflect the differential sensitivity of the prefrontal cortex, the part of the brain responsible for mathematical-style reasoning, and is consistent with other evidence on the impacts of environmental stressors on cognitive performance. This pattern of results might also, at least partly, be driven by the gender composition of students across tracks. While we do not have individual-level gender data, the male ratio is typically much higher in the science track than in the arts track, and other work has found the cognitive performance of males to be more sensitive to PM pollution than that of females. The next four columns of Table 3 examine how the impacts of agricultural fires vary across the student ability distribution by estimating our core equation using a quantile regression approach. This analysis is especially important for two reasons. First, since we only observe NCEE scores for students who were eventually admitted to an institution of higher learning, we might be worried about sample selection resulting from negative effects at the lower end of the ability distribution. Second, differences in impacts across the ability distribution could have profound long-run impacts on income inequality given the highly nonlinear returns to scores. We find no impacts among low-ability students, minimizing concerns about selection bias. Moreover, the impacts appear to be concentrated near the very top of the performance distribution, above the 75th percentile. This can be seen most clearly in Figure 5, which further breaks down estimates by decile. Another column of Table 3 focuses on the higher end of the ability distribution by examining the impacts of agricultural fires on the likelihood of admission into an elite university in China, based on the cutoff scores that govern that process. The cutoff score in each province is the lowest score of students admitted to the first-tier universities in China; it is determined by the admission quota of each university and the ranking of student scores in each province. Upwind fires continue to have a significant negative impact on test performance. A one percentage point increase in the difference between upwind and downwind fires decreases the probability of admission to an elite university by 0.027 percent. Given the sizable impacts of an elite education in China on lifetime earnings, these impacts should be viewed as economically meaningful, even if they may be largely redistributional, privileging the admission of students from less exposed counties over those from more exposed ones.

In this section, we provide a number of robustness checks. We begin by exploring alternative ways to assign the exposure of test takers to agricultural fires. The first column of Table 4 reproduces our main results, which limit our focus to fires within 50 km of a testing center.
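To make the quantile-regression approach above concrete, the following is a minimal Python sketch of estimating the fire-exposure coefficient at several points of the score distribution. It is a simplified illustration, not the authors' specification: the province-by-year-by-track fixed effects and weather controls discussed earlier are omitted, and the data frame and variable names (ncee_score, upwind_minus_downwind_fires) are hypothetical.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.regression.quantile_regression import QuantReg

    def fire_effects_by_quantile(df: pd.DataFrame, quantiles=(0.25, 0.50, 0.75, 0.90)):
        """Return the estimated fire-exposure coefficient at each requested quantile."""
        X = sm.add_constant(df[["upwind_minus_downwind_fires"]])
        y = df["ncee_score"]
        model = QuantReg(y, X)
        return {q: model.fit(q=q).params["upwind_minus_downwind_fires"] for q in quantiles}

A concentration of negative coefficients at the 0.75 and 0.90 quantiles, with estimates near zero lower in the distribution, would correspond to the pattern reported in Table 3 and Figure 5.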

The solids were then quickly centrifuged, and excess solvent was decanted to avoid moisture uptake.

Dry trehalose was then added and completely dissolved. Finally, 4-vinylbenzyl chloride was added dropwise. The reaction was stirred for 22 hours at 22 °C. The reaction mixture was then precipitated into a rapidly stirring solution of methylene chloride and hexanes. The solids were collected by vacuum filtration through a sintered glass funnel equipped with a filter flask. Solvents were further removed in vacuo over 10 hours, and the solids were then broken up with a spatula to increase surface area and make drying more efficient. The solids were dried for an additional 24 hours before being used for gelation without further purification. To synthesize the gels, the crude trehalose monomer and cross-linker mixture was completely dissolved in Milli Q water. To this, tetramethylethylenediamine (TEMED) was added. This mixture and a 10 mg/mL stock solution of ammonium persulfate (APS) in Milli Q water were separately degassed for 30 minutes by sparging with argon. Under an inert atmosphere of argon gas, the APS solution was added to the crude styrenyl-trehalose solids and TEMED for a final ratio of 1 g crude material for every 1 mL Milli Q water, 5 µL TEMED, and 250 µL APS solution. The solution was gently shaken for 12 hours to form a gel. The crude gel was washed in a Soxhlet extractor with deionized water for three days to remove unreacted monomers, cross-linkers, and other impurities, providing a clear gel. Gelation yields were calculated by comparing the moles of the limiting reagent, 4-vinylbenzyl chloride, to the moles of final product.
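As a small illustration of the yield bookkeeping described in the last sentence, the sketch below expresses percent yield relative to the limiting reagent. The numbers in the usage comment are placeholders, not values from the synthesis; only the general stoichiometric relationship is assumed.

    def percent_yield(mass_product_g, mw_product_g_per_mol, mol_limiting_reagent):
        """Percent yield of gel relative to moles of the limiting reagent (4-vinylbenzyl chloride)."""
        mol_product = mass_product_g / mw_product_g_per_mol
        return 100.0 * mol_product / mol_limiting_reagent

    # Hypothetical example: 0.45 g of product with an assumed molecular weight of 500 g/mol
    # (0.9 mmol) recovered from 1.0 mmol of limiting reagent gives a 90% yield.
    # percent_yield(mass_product_g=0.45, mw_product_g_per_mol=500.0, mol_limiting_reagent=1.0e-3)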

The final product molecular weight was calculated based on the molecular weights of each of the individual components of the crude mixture, and the distribution of these products was determined by LCMS. Finally, the gel was lyophilized and then ground to produce a fine, white powder. The overall yield of the two-step synthesis was 87.5%, providing 155.9 mg of gel. The synthesis was then scaled up 100-fold and carried out as outlined above with the following exceptions: the monomer/cross-linker reaction was stirred for 46 hours instead of 22 hours, as a longer reaction time was required to achieve sufficient styrene functionalization on trehalose; the reaction was then precipitated in 100 mL aliquots into DCM and hexanes at an approximate rate of 150 mL per minute while the suspension stirred at 800 rpm; and the final crude gel was washed for 7 days with deionized water in a Soxhlet extractor. The scaled-up reaction gave an overall yield of 75.6%, providing 81.02 g of monomer/cross-linker and 13.8 g of gel. Gelation was confirmed by examining physical properties of the gels, e.g., storage and loss modulus as well as swelling ratio. Storage and loss moduli were measured by trimming hydrogels to 8-mm diameters to match the top parallel-plate geometry, applying a constant strain of 1% over an angular frequency range of 0.1 to 10 rad/s at 22 °C. Swelling ratio was determined by swelling completely dried hydrogels in Milli Q water over 72 hours and calculating the mass ratio between the swollen gels and their initial dry weights. All physical properties are reported as the average and standard deviation of three independent hydrogel measurements.

Phytase activity was measured by modifying a previously reported method. Phytase stock solution was added to trehalose gels and the samples were prepared as described above. Heated and control hydrogels were removed by centrifugation after addition of sodium acetate buffer and incubation.

Supernatant was added to 1 mL of 0.2 M sodium citrate buffer, pH 5.5. Aliquots were transferred to LoBind Eppendorf tubes. To all sample tubes, 10 µL of 1% phytic acid was added. The reactions were then incubated at 37 °C for 15 minutes before quenching with 15% trichloroacetic acid, and were then diluted ten-fold with Milli Q water. Aliquots were transferred to a 96-well plate and then diluted with a 1:3:1 solution of 2.5% ammonium molybdate, 10% sulfuric acid, and 10% ascorbic acid. The plate was covered with parafilm, incubated at 50 °C in a water bath for 15 minutes, and cooled at 4 °C for 15 minutes, and absorbance measurements were taken at 820 nm. Phytase activity was defined as the quantity of enzyme that catalyzes the liberation of 1.0 µmol of inorganic phosphate from 1% phytic acid per minute at 37 °C and pH 5.5. The assay was run in triplicate. Note that this assay is generally difficult to reproduce due to the fast reaction between phytase and phytic acid; we advise that the assay be performed as quickly as possible, using a multi-pipettor.

Stock solution of β-glucanase was added to trehalose gels and the samples were prepared as described above. Heated and control hydrogels were centrifuged after addition of sodium acetate buffer and incubation. Supernatant was pre-warmed along with the azo-barley glucan substrate provided in the Megazyme assay kit at 30 °C for five minutes. Due to the viscous nature of the glucan substrate, it was transferred using a positive displacement pipette. Aliquots of the supernatant were added to the azo-barley glucan substrate and mixed vigorously before incubating at 30 °C for 10 minutes. Precipitation solution was made by dissolving sodium acetate and zinc acetate in distilled water. The pH was then adjusted to 5.0 with concentrated hydrochloric acid, and the volume was adjusted to 200.0 mL. Finally, 2-methoxyethanol was added. An aliquot of this precipitation solution was added to each sample, and the contents were mixed vigorously, incubated at ambient conditions for five minutes, and then mixed vigorously again.

Finally, the samples were centrifuged at 6,000 rpm for 10 minutes, supernatant was added to a 96-well plate, and the absorbance was read at 590 nm. β-Glucanase activity was defined as the quantity of enzyme that catalyzes the liberation of 1.0 µmol of glucose reducing sugar equivalents from azo-barley glucan substrate per minute at 37 °C and pH 4.6. Experiments were repeated in triplicate.

Note that all LB media used throughout these studies contained 50 µg/mL kanamycin to prevent growth of other bacterial strains. A colony of a kanamycin-resistant BL21 strain of E. coli was grown in 50 mL LB media in a 250 mL sterilized Erlenmeyer flask at 37 °C and 200 rpm. At an OD600 of 0.426, the culture was diluted in 50 mL LB media and incubated at 37 °C and 200 rpm for an additional 1.5 hours. The culture was then diluted 1:1 in LB media containing P3, free trehalose, or no excipient. The samples were frozen and lyophilized for 24 hours. Following lyophilization stress, 200 µL of LB media was added to each condition. Aliquots of 150 µL were added to 3 mL of fresh LB media in culture tubes and incubated at 37 °C and 200 rpm. Cell growth was monitored by measuring the absorbance at 600 nm.

As drought frequency, severity, and duration are exacerbated by climate change, improving the efficiency of water resource use is crucial for a sustainable future. Drought affects agriculture globally and adversely affects food security, water availability, and rural livelihoods. In the developing world alone, drought caused $29 billion in agricultural revenue losses between 2005 and 2015. Drought cannot be avoided, but mitigation practices can negate its deleterious effects. In particular, drought reduces crop productivity due to high temperatures and limited water, but on-farm water and soil management have proven successful in abating these issues. Despite this, many inefficient practices, such as flood irrigation, are still widely applied. Technologies that prevent agricultural water wastage must be developed and implemented to improve the health of crops subjected to drought. Hydrogels are hydrophilic polymeric materials capable of absorbing and releasing many times their weight in water. In soil, swollen hydrogels act as water reservoirs by slowly releasing captured water through a diffusion-driven mechanism that arises from the humidity difference between the internal environment of the material and the surrounding soil. Hydrogels have been mixed into soil to prevent irrigation water loss caused by drainage and evaporation. They also offer a potential scaffold for controlled release of nutrients and provide better oxygenation to plant roots by increasing soil porosity. By improving the water holding capacity of soil and the water available to plant roots, hydrogels have demonstrated the ability to increase plant survival rate, water use efficiency, and growth. While superabsorbent polyacrylate gels have demonstrated success as soil conditioners, it is hypothesized that anionic moieties within hydrogels create electrostatic repulsions with negative charges on the surface of soil particles. These anion-anion repulsive forces can reduce adsorption of the hydrogel to soil and therefore allow the polymer to be leached by water over time. The development of alternative hydrophilic gels for soil conditioning could help overcome these issues and potentially demonstrate other advantages.
The Maynard lab has designed a scalable, two-step synthesis of a trehalose-based hydrogel for the thermal stabilization of enzymes. The synthesis yield was greatly improved from 17% to 88%, the process was scaled 100-fold while retaining a high yield of 76%, and the route was optimized to eliminate the use of halogenated and toxic solvents.

This multi-gram, green synthesis makes the gel more practical for agricultural applications, where materials need to be cost-efficient and scalable.20 Moreover, trehalose has been shown to stabilize desiccation-intolerant soil bacteria necessary for plant growth. As such, trehalose hydrogels have great potential for water management as well as stabilization and delivery of plant nutrients while being beneficial to soil. Here, two hydrogels, a commercially available polymer-based gel, Terra-sorb, and a trehalose hydrogel synthesized by our lab as described in Chapter 2, were separately applied as soil amendments for tomato plants, Solanum lycopersicum, subjected to drought conditions. Performance of the gels was evaluated by monitoring tomato plant health through chlorophyll content, water potential, stomatal conductance, and relative growth rate measurements. We hypothesized that the presence of the trehalose hydrogel would boost the tomato plants' physiological function after extended droughts. We also hypothesized that, since the trehalose hydrogels were less hydrophilic than the Terra-sorb hydrogels, they would likely not be as efficacious as the positive control.

Due to climate change, water availability has become more sporadic, with cycles of drought and rewatering that ultimately stress plants. We therefore tested the ability of the hydrogels to retain their swelling ratio through repeated drying and wetting cycles. After purification and lyophilization, the trehalose hydrogels were swollen to their maximum capacity over 72 hours in deionized water. This drying-swelling cycle was repeated, where the dry weight was taken after lyophilization and the swollen weight was taken after swelling the gel in deionized water. The swelling ratio was calculated for each cycle by dividing the difference between the gels' swollen weight and dry weight by the dry weight. Over the course of ten drying-swelling cycles, the hydrogels' swelling ratio decreased from 16.3 ± 2.9 to 14.9 ± 1.1. This minimal loss in swelling ratio over these cycles indicates that the gel could be subjected to multiple drought cycles without compromising its swelling ability. Next, we evaluated how the water holding capacity (WHC) of a sandy loam soil is affected by Terra-sorb and the trehalose hydrogel. We applied Terra-sorb at the manufacturer's recommended concentration, 0.4 wt %, and trehalose hydrogel at 0.4 wt % and 0.8 wt %. We saturated the soil and then allowed it to desaturate over eight days while monitoring water loss by weight. All of the amendments improved the water holding capacity of the soil over the entirety of the experiment. Consistently, soil amended with Terra-sorb gels had the highest WHC, followed by soil with trehalose hydrogel at 0.8 wt % and then 0.4 wt %. We then rehydrated the soils to evaluate the gels' capacities to work through multiple drying cycles. The conditioners maintained their previous trends and most of their WHC percentages.
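A minimal sketch of the bookkeeping behind the swelling-ratio and water-holding measurements described above is given below. The swelling-ratio formula follows the definition in the text ((swollen weight − dry weight) / dry weight); the weight values in the usage comment are placeholders, and expressing water retention as a percentage of the water held at saturation is an assumed illustration of the weight-based monitoring, not a calculation taken from the chapter.

    def swelling_ratio(swollen_weight_g, dry_weight_g):
        """Swelling ratio for one drying-swelling cycle: (W_swollen - W_dry) / W_dry."""
        return (swollen_weight_g - dry_weight_g) / dry_weight_g

    def water_retained_percent(current_soil_weight_g, dry_soil_weight_g, saturated_soil_weight_g):
        """Percent of the water added at saturation that the soil still holds."""
        water_now = current_soil_weight_g - dry_soil_weight_g
        water_at_saturation = saturated_soil_weight_g - dry_soil_weight_g
        return 100.0 * water_now / water_at_saturation

    # Hypothetical example: a gel weighing 0.10 g dry and 1.70 g swollen has a swelling
    # ratio of 16.0, in the same range as the values reported above.
    # swelling_ratio(1.70, 0.10)  ->  16.0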
While WHC is an important factor for soil health, the water held by hydrogels is not necessarily available to crops. As such, it is vital to monitor plant growth in soil with the hydrogel amendments. Previous reports have demonstrated that hydrogel soil conditioners are not always effective in improving plant health and growth and, in fact, are sometimes detrimental, depending on the soil type, plant species, and experimental conditions. So, before testing trehalose hydrogels directly, we verified that tomato plants under our simulated drought conditions could benefit from soil conditioners by using the commercially available hydrogel Terra-sorb at 0.4 wt %, which has previously been shown to delay moisture loss for Quercus rubra seedlings subjected to short-term desiccation stress.