A large state space, of course, requires more computational power than MZA requires.

Clearly, there is a significant difference between the most common clustering and the best clustering in almost every case. Another possibility is that the rare clusterings with the best BIC scores may not correspond to geographically meaningful EC maps. That is, the best BIC may correspond to a statistically meaningful solution that does not provide insight for soil zone management. Figure 3.9 shows the geographic mappings of the best BIC clusterings from Table 3.8. We color each EC data point according to the cluster to which it has been assigned and then graph the data points by latitude and longitude. From the figures, it is clear that the clusterings correspond to feasible zone management maps. That is, points belonging to the same cluster are often adjacent in geographic space, indicating a strong EC mapping relationship. To illustrate in greater detail, consider the CAP dataset results. CAP is a lemon field with soil consisting of clay, sandy-loam, and sandy-clay-loam. Figure 3.8a shows that jobs from Job-5 on have very stable results with similar BIC scores. Figure 3.9a shows the best clustering from Job-2048, with 4 clusters of cardinality [2103, 500, 473, 156] and a BIC score of -8918.35. We compare this result with the most common clustering for Job-2048, which occurred 1445 times, has a BIC score of -10169.7, and consists of two clusters with 2169 and 1063 elements, respectively. The visual difference between these two clusterings shows that most of the “disagreement” appears along cluster boundaries. In addition, we consider how the clustering results compare to the soil samples taken at the CAP field. Figure 3.10c shows the core samples taken at five different locations and their soil types. Of the five core samples available, the top two in the figure belong to the same cluster in both the best and the most common clustering.
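To make the scoring in this comparison concrete, the sketch below computes a BIC-style score for a hard clustering under per-cluster ("Full-Untied") Gaussians. It is a minimal illustration under stated assumptions, not Centaurus's implementation: the sign convention, the small regularization ridge, the parameter count, and the helper name bic_full_untied are all ours.

```python
import numpy as np

def bic_full_untied(X, labels):
    """Illustrative BIC-style score for a hard clustering under per-cluster
    ("Full-Untied") Gaussians. Higher (less negative) is better under this
    sign convention; Centaurus's exact scoring may differ."""
    n, d = X.shape
    clusters = np.unique(labels)
    k = len(clusters)
    log_lik = 0.0
    for c in clusters:
        Xc = X[labels == c]
        nc = len(Xc)
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(d)  # small ridge for stability
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = Xc - mu
        quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
        # per-point Gaussian log-density, plus the log of the mixing weight nc/n
        log_lik += np.sum(-0.5 * (d * np.log(2 * np.pi) + logdet + quad)) \
                   + nc * np.log(nc / n)
    # free parameters: k means, k full covariances, k-1 mixing weights
    p = k * d + k * d * (d + 1) / 2 + (k - 1)
    return log_lik - 0.5 * p * np.log(n)
```

Under this convention a higher (less negative) score is preferred, consistent with favoring -8918.35 over -10169.7 in the comparison above.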

The other three core samples report clay in the lower left corner, followed by sandy-loam and sandy-clay-loam. In the best clustering, they all belong to different clusters, while the most common clustering puts all three core samples in the same cluster. Thus, the best clustering corresponds more closely to a core-sample analysis than the most common clustering does. Note that for each fixed set of parameters, we ran 1,228,800 different experiments. This number is thus the maximum frequency with which a particular value of k can produce the best or most common BIC. Table 3.8 shows that the most common clustering usually has fewer clusters, while the best clustering provides higher resolution and therefore additional information that a farmer may find useful for management. The analysis of the other datasets is similar. In each case, the best BIC score is rare, requiring a large number of repeated trials, each with a different initialization, to find. In all but one case, the best clustering differs substantially from the most frequently occurring clustering. The best clusterings correspond to meaningful EC soil maps, and those maps correctly register with soil core samples.

We start by comparing Centaurus against MZA for the synthetic datasets. We use the number of clusters that both the FPI and NCE scores report as optimal for MZA. We then use the respective cluster assignments to compute the error rates. Figure 3.11 shows the best assignments produced by Centaurus and MZA, and Table 3.9 shows the percentage of incorrectly classified points in each dataset for the same assignments. For MZA, the best assignment is achieved with Mahalanobis distance, and for Centaurus the best assignment is achieved with Full-Untied. MZA clusters Dataset-1 correctly and reports K = 3 as the ideal number of clusters. For Dataset-2, MZA correctly identifies K = 3 but has a higher error rate of 13.8%. A possible reason for this is that MZA considers only a single initial assignment of cluster centers, which in this case converges to a local minimum that differs from the global minimum. Centaurus avoids this kind of error by performing several runs of the k-means algorithm before suggesting the optimal cluster assignment. Dataset-3 consists of clusters with correlation across features.
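The repeated-trials protocol described above can be sketched as follows, reusing the bic_full_untied helper from the previous sketch. Here scikit-learn's KMeans stands in for Centaurus's k-means variants, and the canonical relabeling is our own device for counting how often the same partition recurs; none of this is Centaurus code.

```python
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

def canonical(labels):
    """Relabel clusters by order of first appearance so that identical
    partitions compare equal regardless of arbitrary cluster numbering."""
    mapping, out = {}, []
    for l in labels:
        mapping.setdefault(l, len(mapping))
        out.append(mapping[l])
    return tuple(out)

def repeated_kmeans(X, k, n_runs=1000, seed=0):
    """Run k-means many times from random initializations and report both
    the best-scoring partition (per the BIC sketch above) and the most
    frequently occurring one."""
    rng = np.random.RandomState(seed)
    counts, scores = Counter(), {}
    for _ in range(n_runs):
        labels = KMeans(n_clusters=k, n_init=1,
                        random_state=rng.randint(2**31 - 1)).fit_predict(X)
        key = canonical(labels)
        counts[key] += 1
        if key not in scores:
            scores[key] = bic_full_untied(X, np.array(key))
    best = max(scores, key=scores.get)
    most_common, freq = counts.most_common(1)[0]
    return best, scores[best], most_common, freq
```

The gap between the best-scoring and most-frequent partitions returned by such a loop is exactly the "best versus most common" comparison discussed above.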

Centaurus provides better results than MZA for Dataset-3, achieving a percentage error of only 0.1%. A possible reason for this is that MZA employs a global covariance matrix and does not consider the Tied and Untied options that Centaurus does; these options allow Centaurus to produce better label assignments. Another limitation of MZA is that it uses a free variable, called the fuzziness parameter, and multiple scoring techniques. It is challenging to determine how to set the fuzziness value even though the results are highly sensitive to it. For the results in this section, we chose the default fuzziness parameter of m = 1.3, as suggested by Odeh et al. Furthermore, for the farm datasets, the MZA scoring metrics do not always agree, providing conflicting recommendations and forcing the user to choose the best clustering. In combination, these limitations make MZA hard to use as a recommendation service for growers who lack the data science background necessary to interpret its results. Centaurus addresses these limitations by performing a large number of k-means runs, exposing no free parameters, and computing the covariance matrix in more sophisticated ways in each iteration of its clustering algorithm. It uses a single scoring method to decide which clustering is best and presents that clustering to a novice user, while providing the diagnostic capabilities needed by more advanced users. Moreover, FPI and NCE disagree more often than they agree for these datasets. For the Cal Poly dataset, both scores agree only when m = 1.5, suggesting that k = 4 is the best clustering. For other values of m, MZA recommends cluster sizes that range from k = 2 to k = 5. For Sedgwick and m = 2.0, FPI selects k = 3 and NCE selects k = 2. For UNL, no FPI-NCE pair agrees on the best clustering, with MZA recommending every value of k in the range considered, depending on m. Because fine-grained EC measurements are not available for the Cal Poly, Sedgwick, and UNL farm plots, it is not possible to compare MZA and Centaurus in terms of which produces more accurate spatial maps from the Veris data. Even with expert interpretation of the conflicting MZA results for Cal Poly and UNL, we do not have access to “ground truth” for the fields.
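For reference, the two MZA scores discussed here can be computed from an n-by-c fuzzy membership matrix roughly as below. The formulas follow the partition-coefficient and partition-entropy definitions as we understand them from the fuzzy-clustering literature; treat this as a sketch rather than MZA's exact implementation.

```python
import numpy as np

def fpi(U):
    """Fuzziness performance index from an n-by-c membership matrix U
    (rows sum to 1). Values near 0 indicate crisp, well-separated classes;
    values near 1 indicate heavily shared membership."""
    n, c = U.shape
    F = np.sum(U ** 2) / n                 # partition coefficient
    return 1.0 - (c * F - 1.0) / (c - 1.0)

def nce(U, eps=1e-12):
    """Normalized classification entropy from U (0 ~ crisp, 1 ~ fuzzy)."""
    n, c = U.shape
    H = -np.sum(U * np.log(U + eps)) / n   # partition entropy
    return H / np.log(c)
```

Because the two scores summarize the membership matrix in different ways, it is unsurprising that they can recommend different values of k for the same data, which is the disagreement reported above.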

However, it is possible to compare the two methods on the synthetic datasets shown in Figure 3.1, for which ground truth is known. Note that this evidence suggests Centaurus is more effective for some clustering problems but is not conclusive for the empirical data. Instead, from the empirical data we claim that Centaurus is more utilitarian than MZA because disagreement between FPI and NCE, and the differing possible best clusterings produced by user-selected values of m, can make MZA results difficult and/or error-prone to interpret for non-expert users. MZA recommendations may be useful in providing an overall high-level “picture” of the Veris data clustering, but its varying recommendations are challenging to use for making “hard” decisions by experts and non-experts alike. In contrast, Centaurus provides both a single “hard” spatial clustering assignment and a way to explain why one clustering should be preferred over another and which one is “best” when ground truth is not available. To do so, Centaurus uses its variants of k-means, a BIC-based scoring metric, and large state space exploration to determine a single “best” clustering. The only free parameter the user must set is the size of the state space exploration. As the work in this study illustrates, Centaurus can find rare and relatively unique high-quality clusterings when the state space it explores is large.

MZA is a stand-alone software package that runs on a laptop or desktop computer. In contrast, Centaurus is designed to run as a highly concurrent and scalable cloud service and uses a single processor per k-means run. As such, it automatically harnesses multiple computational resources on behalf of its users. Centaurus can be configured to constrain the number of resources it uses; doing so proportionately increases the time required to complete a job. For this work, we host Centaurus on two large private cloud systems: Aristotle (Aristotle) and Jetstream (Stewart et al., Towns et al.).

Extensive studies of k-means demonstrate its popularity for data processing, and many surveys are available to interested readers (Jain et al., Berkhin). In this section, we focus on k-means clustering for multivariate correlated data. We also discuss the application and need for such systems in the context of farm analytics when analyzing soil electrical conductivity. To integrate k-means into Centaurus, we leverage Murphy's work in the domain of Gaussian Mixture Models (Murphy). This work identifies multiple ways of computing the covariance matrices and using them to determine distances and log-likelihood. To the best of our knowledge, there is no prior work on using all six variants of cluster covariance computation within a k-means system. We also utilize the k-means++ work of Arthur & Vassilvitskii for cluster center initialization. The research and system most closely related to Centaurus is MZA (Fridgen et al.), a computer program widely used by farmers to identify clusters in soil electro-conductivity data to aid farm zone identification and to optimize management.
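As an illustration of the six covariance variants mentioned above, the sketch below estimates cluster covariances under one plausible reading of that design space: {full, diagonal, spherical} shapes crossed with tied (shared) or untied (per-cluster) estimation. The function, the size-weighted pooling for the tied case, and the regularization ridge are our own illustrative choices, not Centaurus code.

```python
import numpy as np

def cluster_covariances(X, labels, shape="full", tied=False):
    """One plausible reading of the six covariance variants:
    {full, diag, spherical} crossed with tied (one shared matrix) or
    untied (one matrix per cluster)."""
    n, d = X.shape
    ks = np.unique(labels)
    per_cluster, sizes = {}, {}
    for c in ks:
        Xc = X[labels == c]
        S = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(d)  # ridge for stability
        if shape == "diag":
            S = np.diag(np.diag(S))          # keep variances, drop correlations
        elif shape == "spherical":
            S = np.eye(d) * np.trace(S) / d  # single shared variance per cluster
        per_cluster[c] = S
        sizes[c] = len(Xc)
    if tied:
        # pool the per-cluster estimates, weighted by cluster size
        pooled = sum(sizes[c] * per_cluster[c] for c in ks) / n
        return {c: pooled for c in ks}
    return per_cluster
```

The "Full-Untied" variant that produced the best assignments earlier corresponds here to shape="full" with tied=False.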

MZA uses fuzzy k-means (Dunn, Bezdek), computes a global covariance, and employs either Euclidean (Heath et al.), diagonal, or Mahalanobis distance to compute the distance between points. MZA computes the covariance matrix once from all data points and uses this same matrix in each iteration. MZA compares clusters using two different scoring metrics: the fuzziness performance index (Odeh et al.) and normalized classification entropy (Bezdek). Centaurus attempts to address some of the limitations of MZA. We also show that although MZA provides multiple scoring metrics to compare cluster quality, the MZA metrics commonly produce different “recommended” clusterings. The authors of x-means (Pelleg et al.) use the Bayesian Information Criterion (Schwarz) as a score for the univariate normal distribution. Our work differs in that we extend the algorithm and scoring to multivariate distributions and account for different ways of computing the covariance matrix in the clustering algorithm. We provide six different ways of computing the covariance matrix for k-means on multivariate data and examples that illustrate the differences. Different parallel computational models have been used in other works to speed up k-means cluster initialization (Bahmani et al.) or its overall runtime. Our work differs in that we provide not only a scalable system but also k-means variants, the flexibility for a user to select any one or all of the variants, and a scoring and recommendation system. Finally, Centaurus is pluggable, enabling other algorithms to be added and compared.

The Internet of Things (IoT) is quickly expanding to include every “thing”, from simple Internet-connected objects to collections of intelligent devices capable of everything from the acquisition, processing, and analysis of data, to data-driven actuation, automation, and control. Since these devices are located “in the wild”, they are typically small, resource-constrained, and battery-powered. At the same time, the low latency requirements of many applications mean that processing and analysis must be performed near where data is collected. This tension requires new techniques that equip IoT devices with more capabilities. One way to enable IoT devices to do more is to use integrated sensors to estimate the measurements of other sensors, a technique that we call sensor synthesis. Since the number of sensors per device is generally bounded by design constraints, sensor synthesis makes it possible to free up resources in IoT devices for other sensors. We focus on estimating values of measurements where estimation error is low, freeing up space for sensors with measurements that are harder to estimate. Many, if not most, IoT systems for precision agriculture depend on and integrate measurements of real-time, atmospheric temperature. Temperature is used to inform and actuate irrigation scheduling, frost damage mitigation, greenhouse management, plant growth modulation, yield estimation, post-harvest monitoring, crop selection, and disease and pest management, among other farm operations (Ghaemi et al., Stombaugh et al., Ioslovich et al., Roberts et al., Gonzalez-Dugoa et al.).
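A minimal sketch of the sensor-synthesis idea described above: fit a simple least-squares model that estimates temperature from other on-board readings and report its error. The synthetic predictors (standing in for, say, humidity, pressure, and light) and the choice of ordinary least squares are purely illustrative assumptions, not the method used in this work.

```python
import numpy as np

def fit_temperature_model(features, temperature):
    """Least-squares model that synthesizes a temperature estimate from
    other on-board sensor readings (illustrative predictors only)."""
    X = np.column_stack([features, np.ones(len(features))])  # add intercept
    coef, *_ = np.linalg.lstsq(X, temperature, rcond=None)
    return coef

def synthesize_temperature(features, coef):
    X = np.column_stack([features, np.ones(len(features))])
    return X @ coef

# Example with synthetic readings standing in for humidity, pressure, light.
rng = np.random.default_rng(0)
other = rng.normal(size=(500, 3))
temp = 20 + other @ np.array([1.5, -0.8, 0.3]) + rng.normal(scale=0.5, size=500)
coef = fit_temperature_model(other, temp)
err = np.abs(synthesize_temperature(other, coef) - temp)
print(f"mean absolute error: {err.mean():.2f} degrees")
```

In practice one would only synthesize measurements whose estimation error is acceptably low, as the text notes, reserving physical sensors for quantities that are harder to estimate.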

Qualitative data were analyzed by coding responses to specific questions.

We do not know the fate of 15 ghost CSAs, as no definite statement of closure could be found and all contact attempts failed. Of the other 13, some left farming, some were still farming but without CSAs, and one moved out of state and continues to farm. Removing all of these from the study left 74 CSAs that met our definition. Primary data collection occurred from January 2010 to April 2011 and involved two components: a semistructured interview and a survey conducted through an online questionnaire. All 74 CSAs were contacted by phone and e-mail. Fifty-four CSA farmers and two CSA organizers, together representing 55 CSAs, agreed to participate in the study and were interviewed. In most cases, we interviewed the farmers directly responsible for the CSA operation, but two cases were different. In one, a CSA organizer worked with two farms to create an independent CSA; one of these farms also had its own CSA, while the other farm only sold through the CSA run by the organizer — these two farms and the organizer count as two CSAs. In the other case, the CSA organizer brought many farmers, none of whom had their own CSA, together to form one CSA. Forty-eight of the 54 CSA farmers interviewed completed the survey; the others did not, despite repeated reminders. We did not request survey responses from the CSA organizers. We used the qualitative data from the farmers and the two CSA organizers who did not complete the questionnaire, but we were unable to include their information in most quantitative analyses. We analyzed the quantitative data by creating summary statistics of various characteristics, with some bivariate statistical analysis.

In the interviews, we asked CSA farmers about the prices for their CSA shares, how their CSA delivery systems worked, whether they bought supplemental produce from other farms, and the extent to which they used volunteers on the farm; in the survey, we asked about the types of food and other products in their shares, minimum payment periods, and events hosted at the CSAs. As a result, CSA types emerged that differed from our original conception of a CSA — that members shared risk with the farm and paid for a full season up front. None of the CSAs had a formal core member group deciding what to produce, none had mandatory member workdays, and many did not require long minimum payment periods or share production risks with members.

The membership/share model requires customers to make an upfront membership or share payment. It is rare; only four of the 48 CSAs operated this way. Two of the four CSAs used only the membership model; the other two combined it with the box models by offering member discounts. The membership payment is made prior to actually picking up the produce. Members give the farmer some amount of money, which becomes credit for use at the farm's U-pick, farm stand or farmers market stall. Members do not pick up a set amount of produce but are able to pick and choose, and receive a discount by paying in advance. With share payments, members can sign a contract to own a share of a farm animal, and the share payment covers the animal's feed. The member then purchases that animal's products. The member does not get any discount for their share but is able to gain access to locally raised and processed animal products, which are not widely available in the region. He or she is also sharing the risks associated with raising livestock with the rancher or farmer. These differently arranged enterprises, all called CSAs by their operators, demonstrate a central finding: Much innovation is occurring in how farmers and consumer members connect through a CSA.

Farmers have adapted the CSA model to their ambitions for their farm, to innovative products and to regional conditions. CSA farmers have different preferences for their operations. Some want to remain small, while others want to grow; these goals require different strategies. Farmers have added new products, especially meat and dairy, into their CSAs, although the processing of those products does not fit easily with handling practices developed for fruits and vegetables. Other innovations include changing CSA payment and delivery systems so that they are more attractive and accessible to people who are not familiar with the concept and to consumers who cannot afford a large upfront cost, both of which are important realities in the Central Valley. For example, 20% of CSAs in the study had no minimum payment period, allowing week-by-week payments, which extends membership to a broader population, including those hesitant or unable to commit to extended payments. Requiring no long-term commitment was also a common practice among meat CSAs in our sample, which often do not know exactly which products will be available and when, including both individual cuts and types of meat. This uncertainty stems from maturation, slaughtering and butchering processes. Few slaughter and butcher facilities serve small-scale producers. Consequently, CSA meat producers compete with large-scale operations for limited processing capacity, and there is greater variability in their animals' maturation because the animals are raised primarily on pasture. Scheduling difficulties can result; for example, during the summer, CSA ranchers may need to schedule slaughtering months in advance, but their animals may not be ready by the scheduled date. Meat CSAs rely on committed customers who agree, typically on a monthly basis, to buy some amount of a variety of meat.

To understand their economic viability, we asked CSA farmers about gross annual sales and net profits in 2009, the CSA's contribution to the total economic activity of the farm, other marketing channels used and how the farmers valued their labor.

In the survey, we asked about whether partners held off-farm jobs and about the CSA's general profitability. We found that the CSA was a crucial direct-to-consumer marketing channel for the small- and medium-scale farmers in our study. On average, the farmers obtained 58% of gross sales from their CSA. In general, small-scale farmers were more dependent on their CSA than larger-scale CSA farmers. Most farmers also sell into other channels, including wholesale and direct-marketing venues, especially farmers markets. Some farm-linked aggregator box CSAs act as wholesale outlets for small farms with their own CSAs. Farmers in our study commonly chose the CSA as a marketing outlet to diversify their income channels. Some had little access to organic wholesale markets, while others wanted to increase sales beyond farmers markets and other direct sales. Some newer farmers started with a CSA to help raise needed capital. As motivations for choosing a CSA, most respondents mentioned the advantages of knowing sales volumes in advance and being paid up front, before the growing season begins. Assessing the economic viability of CSA operations is difficult because it involves both the baseline profitability of the business and the need to generate sufficient income for retirement, health insurance, college for children, land purchases and so on. In addition, farmers conceptualize profit differently. Some consider their salaries as profit, while others set aside a salary for farm partners and consider profit to exclude this salary. Not all farmers amortize their accounting, and many reinvest surpluses in the farm to make it more productive or to reduce taxes. Consequently, we asked a variety of questions about farm economics.

When we asked farmers in the interviews why they wanted to do a CSA and about the general philosophy behind their farm and CSA operation, most were not interested solely in maximizing sales, profit or their salaries. When asked about their motivations and farming philosophy, CSA farmers said they loved farming, felt satisfaction in providing fresh food to their communities and educating people about food and agriculture, and wanted to make positive change. As one farmer noted, “The world’s messed up, and we’re fixing it — one family at a time, one farm at a time”. Although that sentiment was common, CSA farmers’ political commitments ranged from libertarianism to socialism to evangelical Christianity to feminism. We also found a diversity of views on the CSA as a business: Many saw their CSA as promoting their deeply held values, independent of maximizing profit. For example, one newer CSA farmer said: “I really want to empower other women to work in sustainable agriculture . . . Almost all our applications for internships are from women, probably 75%, but there aren’t that many women farmers”. CSA farmers frequently mentioned receiving nonmonetary forms of compensation: tangible benefits such as living and/or raising children on a farm, benefiting from improvements to the property, eating well and living healthfully; and intangible ones such as the lifestyle and deeply rewarding hard work. One farmer noted: “We don’t keep track of hours ‘cause that would be depressing from a pay standpoint. But we just love it. We probably should [do time tracking], but on the other hand, it’s part of the lifestyle. It isn’t jobby at all.

We have what we need to get by, but we don’t pay ourselves an official wage”. Some farmers in our study ran their CSAs to make money, although all did so within the context of broader social and environmental commitments. As an example, farmer 39A and farmer 39B, a husband and wife team, respectively said their philosophy for the CSA was to “make money to send children to college,” and “capitalism — you have to be greedy, grubbing capitalists.” However, they went on to illustrate their underlying environmental and social commitments. When farmer 39A said, “We always try to be the top of the market in terms of quality and price,” farmer 39B added that they value growing the “most nutrient-dense food [and] finding a supportive community to reward us for doing it.” Driving home the point that their profit orientation is securely underpinned by a broader ethos, farmer 39A added, “We are also committed to offering our employees year-round employment in a toxic-free environment.”

We asked many questions about the CSA farms, including survey questions about start year, farm size, area in various land uses, number and kinds of crops and farm animals, general practices in relation to the federal organic standard, electricity generation, farm inputs, water use and land tenure. In the interview, we asked open-ended questions, including “How did you get access to the land you’re currently using for your CSA?” and “What practices do you do that you think are most beneficial to the environment?” We found that most CSAs in our study were relatively new, in existence for 5.7 years on average. CSA farms shared certain core features, especially a commitment to environmental conservation, agroecology and agrobiodiversity. The farms were diverse across a range of characteristics, including farm size, land ownership, organic certification and membership numbers. CSA farmers in our study cultivated a tremendous amount of agrobiodiversity, growing 44 crops and raising three types of livestock on average. Most CSAs studied focused on vegetables, although some were exclusively focused on fruit, one on grain, and a handful on meat and other animal products. About half of the CSAs studied had livestock in 2009. The most common animals were layer chickens, followed by hogs and pigs, goats and kids, and broilers, sheep and lambs, and beef cattle. Many CSA farms also had some land devoted to conservation plantings, such as hedgerows where birds and beneficial insects can live. As one farmer noted, “I have a very strong view that agriculture doesn’t need to and shouldn’t decrease the vitality, the biodiversity of the environment . . . [agriculture] can actually enhance it”. In the Central Valley, the CSA farmers’ commitment to agrobiodiversity contrasts with the monocultures that dominate the landscape. Agrobiodiversity is supported by the unique nature of CSAs. Many farmers noted that providing diversity in the box is a key strategy for maintaining CSA members, and that this had translated directly into diversity in crops and varieties on the farm. Regarding her CSA’s first member survey, one farmer noted that members wanted “more fruit and more diversity. We immediately planted fruit trees and told our members, ‘We are planting these fruit trees for you; wait 4 years for some peaches’”.

The financial value of publicly traded firms has also been shown to suffer from food scares.

The risk of contamination, thus, typically includes the cost of withdrawing product from the market and lost revenue from a portion of the harvest. Larger operations, therefore, have greater risk from contamination, though the risk from forgone revenues and the associated costs of product recalls are presumed to increase in proportion to sales. This component of risk is likely to be scale-neutral, suggesting that the optimal private investment in food safety is independent of whether a product is produced in a concentrated industry or by many small firms. But large firms are more likely than small operations to have brand capital, which is threatened by food scares traced to their products. The loss of a firm's good reputation can occur as the consequence of a food scare independent of the magnitude of the scare and the firm's market share. With the loss of reputation, a brand-name product would lose its price premium. Sales and margins for products produced by the firm unrelated to the food scare are also likely to suffer. Large firms with good reputations, therefore, stand to incur losses from lapses in food safety that are disproportionately higher than those of small firms. Losses to a firm from an outbreak of food-borne illness often also include product liability for related illness and death. Judgments can easily reach into the tens of millions of dollars. If food contamination occurs in the field, then the magnitude of the outbreak may be independent of the market share of the responsible producer. However, if contamination were to occur in a processing facility, then the greater quantities handled in the facility and distributed through a wider network could cause illnesses and fatalities to be greater for outbreaks caused by larger firms. Liability is limited to the assets of the firm; thus, it increases as production increases because the assets of larger producers are greater.

Limited liability explains, in part, why firms that operate in hazardous industries with latent damages tend to be smaller than firms in other industries. Divestiture is recognized as a mechanism to limit liability. Small growers, then, face an upper bound on the risk of food contamination that lowers their incentives to invest in food safety. For instance, the Colorado cantaloupe grower whose melons were implicated in 32 deaths filed for bankruptcy in May 2012, listing its net worth at -$400,000. The farm's owners, therefore, avoid potentially tens of millions of dollars in liability. Because large firms generally have more assets, they face greater exposure from product liability and, therefore, demand greater protection against food contamination. Large firms and small firms alike can insure against product liability, but premiums are positively correlated with coverage limits and with past lapses in food safety, so large firms demand greater ex ante prevention of food contamination even if they are insured. Large firms face more-than-proportionally greater risk from selling tainted food and, consequently, have greater demand for food safety. They also face lower average variable costs in supplying food safety. Therefore, the optimal level of investment, which equates the marginal benefit of an incremental increase in food safety with the marginal cost of achieving it, is increasing at an increasing rate in firm size. Holding the market supply of food constant, then, the provision of food safety declines as the number of producers grows. Empirical evidence from the meat- and poultry-processing sectors indicates that firm size is an important determinant of firm-level investment in food safety. For instance, research suggests that firm size has more impact on the adoption of safety and quality assurance practices than any other firm characteristic. Large firms are also more likely to have adopted a range of food safety technologies. The losses stemming from an outbreak of food-borne illness often are not limited to the firms implicated in food contamination. Indeed, food safety regulators issue broad warnings about food products irrespective of where they originated if the origin of contamination cannot be immediately identified.
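The scale argument above can be summarized as a first-order condition; the notation and functional forms below are our own illustrative assumptions, not a model taken from the source.

```latex
% Illustrative notation: a firm of size q chooses food-safety effort s;
% B(s,q) is the expected loss avoided (recalls, liability, brand damage)
% and C(s,q) is the cost of providing effort s.
\[
  s^{*}(q) \;=\; \arg\max_{s}\;\bigl[\,B(s,q) - C(s,q)\,\bigr]
  \quad\Longrightarrow\quad
  \left.\frac{\partial B}{\partial s}\right|_{s^{*}}
  \;=\;
  \left.\frac{\partial C}{\partial s}\right|_{s^{*}} .
\]
% "Increasing at an increasing rate in firm size" then corresponds to
\[
  \frac{d s^{*}}{d q} > 0
  \qquad\text{and}\qquad
  \frac{d^{2} s^{*}}{d q^{2}} > 0 ,
\]
% driven by brand capital and liability exposure that grow faster than sales.
```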

The 2006 E. coli outbreak linked to consumption of bagged fresh spinach illustrates how broad such warnings can be: the FDA issued a blanket warning to consumers to avoid the product altogether. Fifteen days later, the alert was scaled back to a warning against consumption only of specific brands of spinach packaged in California on specific days, but the industry had already experienced a dramatic decline in sales. The outbreak, blamed for 199 illnesses and three deaths, cost spinach producers $202 million in sales over a 68-week period, a 20% decline. By 2008, demand for California spinach remained below pre-outbreak levels. Because a food scare can induce a negative demand shock, there exists a reputational externality associated with food safety provision, which, consequently, exhibits public good characteristics. The benefits of an individual firm's food safety investments accrue, in part, to competing firms. Because the firm does not capture the full benefits of its food safety provision, it will underinvest in food safety relative to the efficient level. Industry-wide private provision may be much too low. A firm's share of the benefit from an investment in food safety, however, is increasing in its market share. The more concentrated the industry, therefore, the closer the equilibrium level of food safety provision is to the efficient level. A monopolist would internalize the full benefit of his investments and, therefore, invest in the efficient level of food safety. As the number of small firms increases, however, the equilibrium market-wide provision of food safety falls increasingly short of the efficient level.

Because agronomic and climate conditions affect the optimal handling and processing of crops, they afford some firms and regions a comparative advantage in producing safe food. For instance, the hot and dry conditions during the cantaloupe-growing season in California reduce the crop's exposure to contaminants that can be transferred to melons in wet fields. Moreover, because the dry conditions keep California cantaloupe relatively clean, most are packed directly in the field, requiring less handling and avoiding exposure to food pathogens in shed packing operations that rinse and dry the produce.
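The reputational-externality argument can likewise be written compactly; again, the notation below is an illustrative assumption rather than a model from the source.

```latex
% Illustrative notation: firm i holds market share theta_i and chooses
% safety effort x_i; B(.) is the industry-wide benefit of safety (avoided
% demand shocks) and C_i(.) is firm i's cost of effort.
\[
  \text{private optimum:}\quad
  \theta_i\,\frac{\partial B}{\partial x_i} \;=\; \frac{\partial C_i}{\partial x_i},
  \qquad\qquad
  \text{efficient level:}\quad
  \frac{\partial B}{\partial x_i} \;=\; \frac{\partial C_i}{\partial x_i}.
\]
% Since theta_i < 1, each firm underinvests; as theta_i -> 1 (a monopolist)
% the two conditions coincide, and as the number of small firms grows
% (theta_i -> 0) market-wide provision falls further below the efficient level.
```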

As retailers seek to market local produce, however, comparative advantage in the provision of food safety is forsaken. The Listeria outbreak last summer was linked to unsanitary conditions at the Jensen Farms packing shed. And an FDA investigator identified unsanitary practices at the Chamberlain Farms packing shed that has been associated with this summer's Salmonella outbreak. Concentration of production in regions with comparative advantage creates agglomeration advantages for the mutual provision and certification of food-safety practices. Because of the potential losses from food scares and the market-wide externalities from food safety investments, grower organizations have adopted voluntary process standards to mitigate risk and avoid shirking among their members. Some growers have also created marketing orders to enforce handling practices and require audits of all operations covered by the agreement. The California Leafy Green Product Handler Marketing Agreement was implemented in 2007, following the 2006 E. coli outbreak linked to spinach from California's Salinas Valley. The California Cantaloupe Advisory Board responded to last summer's Listeria outbreak by imposing mandatory certification by state auditors of all growers in the state. Such industry-wide cooperation and self-policing is likely to be lost when production is fragmented and spread across wide geographical areas in the quest for local production.

Irrigated agriculture exerts strong controls on global food production yields and the water cycle while accounting for 85-90 percent of human water consumption. However, little is known about the spatial distribution of agricultural fields, their crop types, or their methods of irrigation. Spatially explicit knowledge of these field attributes is necessary in order to implement more water-efficient agricultural practices and to plan for more sustainable economies. However, most remote sensing based mapping efforts cover limited political boundaries, on the order of U.S. states or smaller, and usually only cover a snapshot in time. Agriculture maps in developing countries are even more lacking in semantic detail, coverage, and resolution, which is a particularly acute problem given that agricultural expansion in these regions tends to be decentralized and without a guiding management plan for water sustainability. While a fine-scale and up-to-date census of global agriculture does not yet exist, it is feasible to map a subset of fields that are spectrally and visually distinct from surrounding land cover and numerous enough to train a machine learning model capable of mapping a substantial subset of global agriculture. Center pivot irrigation fits these criteria: center pivot fields are relatively uniform in shape, have a narrow range of sizes within particular geographies, and, in drylands, can be strongly contrasted with the surrounding lack of vegetation.

As in all remote sensing applications for agriculture detection, image obstruction by clouds, fallow periods and the growth of non-agricultural natural vegetation in the landscape all pose challenges for models that detect center pivots. In more humid environments, center pivots have distinctive growing-season patterns and larger-amplitude changes in vegetation "greenness" compared to other vegetation types, making them readily identifiable with time series of multi-spectral images. However, it is difficult to assemble a comprehensive time series of imagery over many parts of the world, primarily due to cloud occlusion and scene availability. Scene availability is particularly low for dates prior to the launch of Sentinel-2, so a method to map center pivots using single-date imagery in many parts of the world is desirable. Such a method is particularly valuable given that center pivots are one of the most ubiquitous irrigation sprinkler systems employed in large-scale commercial agriculture and make up a considerable fraction of the unplanned agricultural expansion in developing countries. There are a variety of methods for mapping objects using remotely sensed imagery, but object-based classification has been shown to outperform per-pixel classifiers in cases where the object of interest contains enough pixels to distinguish objects by textural or shape properties. Because center pivots are large relative to the resolution of public sensors like Landsat, they are amenable to being mapped using object-based classifiers. The traditional approach to object-based classification in remote sensing has been to use manually tuned algorithms to delineate edges, or engineered features that capture texture and shape properties combined with per-pixel machine learning algorithms. However, traditional object-based classifiers in remote sensing have a tendency to overfit and must be manually tuned or supplemented with region-specific post-processing to arrive at a suitable result. Another class of methods, which make use of convolutional neural networks (CNNs), have achieved great success on complex image recognition problems in true color photography. These have not been thoroughly evaluated for mapping agricultural fields as instances across a large climate gradient using Landsat imagery. The goals of this research are twofold. First, I evaluate the performance of CNN-based instance segmentation models on Landsat imagery. This experiment determines whether CNN-based models can make use of Landsat's 30-meter resolution to provide reliable predictions of the locations and extents of center pivot agriculture in various states of development, including cleared, growing, and fallow. I test this approach using the currently most popular and near-state-of-the-art Mask R-CNN model, an approach based on a lineage of regional CNNs that jointly minimize prediction loss on region proposals, object class, refined object bounding boxes, and an object's instance mask. This model is tested on the 2005 CALMIT Nebraska Center Pivot Dataset, which is divided into geographically independent samples that were partitioned into training, validation and testing sets. Multiple model runs with varying hyperparameters and preprocessing steps were conducted to arrive at the most accurate result on the validation set, and the final, most accurate model was applied to the test set to produce the final reported accuracy.
The model is also evaluated on the full training dataset and on 50% of this dataset in order to examine the effect of reduced training data on model accuracy over a large region. Second, I compare these results to the Fully Convolutional Instance-Aware Segmentation model, which was the state of the art in instance segmentation on true color photography prior to Mask R-CNN.
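The geographically independent partitioning described above can be sketched as follows. The blocking scheme, split fractions, and function name are illustrative assumptions rather than the dataset's actual protocol; the point is simply that whole spatial blocks, not individual tiles, are assigned to the training, validation, or test set so that spatially correlated neighbors never straddle splits.

```python
import numpy as np

def geographic_split(tile_ids, tile_xy, n_blocks=4, seed=0):
    """Assign image tiles to train/val/test by coarse spatial block so that
    nearby tiles always land in the same split (illustrative sketch)."""
    rng = np.random.RandomState(seed)
    xy = np.asarray(tile_xy, dtype=float)
    # assign each tile to a coarse grid cell (spatial block)
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    cell = np.floor((xy - mins) / (maxs - mins + 1e-9) * n_blocks).astype(int)
    block = cell[:, 0] * n_blocks + cell[:, 1]
    blocks = rng.permutation(np.unique(block))
    n_test = max(1, len(blocks) // 5)   # roughly 20% of blocks held out
    n_val = max(1, len(blocks) // 5)
    test_b = set(blocks[:n_test])
    val_b = set(blocks[n_test:n_test + n_val])
    splits = {"train": [], "val": [], "test": []}
    for tid, b in zip(tile_ids, block):
        key = "test" if b in test_b else "val" if b in val_b else "train"
        splits[key].append(tid)
    return splits
```

Training on the full "train" list versus a 50% subsample of it, while keeping the validation and test blocks fixed, mirrors the reduced-training-data comparison described above.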

Crop yield and quality are measured annually at harvest.

Kerala faces an economic situation that encourages farmers to adopt practices that exacerbate climate change and biodiversity loss, erode coffee quality, and undermine farmers' livelihoods. In the biodiversity hotspot of Wayanad, Kerala, weather is getting hotter and drier, particularly during months when precipitation is vital to the life cycle of the coffee plant. These trends compound the existing threats faced by farmers and biodiversity in the region. The personal experience of farmers in Wayanad was corroborated by quantitative analysis. Researchers studying agricultural systems should engage with local constituents to guide research efforts, particularly in regions where tribes have yet to be disenfranchised and ancient knowledge is still intact. Reforestation, reinstatement of traditional intercropping methods, and regeneration of healthy soils are likely to be the most effective strategies for both climate and economic resilience. Price-stabilizing mechanisms such as a guaranteed floor price should be reinstated, as they were in the past, to protect farmers against multinational vested interests. Policies that incentivize carbon sequestration and reforestation should be implemented in order to mitigate and adapt to climate change, and to provide better livelihoods for the people who grow crop commodities. Consumers should be engaged and educated on these issues in order to shift market forces towards business practices that support these efforts. Readers of all affiliations should consider the global impacts of their daily choices as consumers, strive towards lifestyles that eliminate frivolous use of resources and promote ethical economies, and participate in the world in a responsible way for the sake of all life on Earth, including present and future generations.

Agricultural productivity in the United States has increased dramatically over the last few decades, but in the face of climate change, current management practices might not sustain current levels of production. Some practices that achieve high crop yields and profit — for example, minimal use of crop rotations, high rates of fertilizer and pesticide inputs, minimal carbon inputs and soil disturbance — also result in degradation of the ecosystem processes on which agricultural systems rely. Such degradation can reduce resilience, making these systems more vulnerable to high temperatures and uncertainty in water supply and resulting in lower productivity in times of extreme weather conditions, such as prolonged drought. Climate-smart agriculture means increasing resiliency to extreme and unpredictable weather patterns induced by climate change by following three principles: developing agricultural cropping systems that are productively resilient in the face of climate change; reducing greenhouse gas emissions attributable to agriculture to further reduce contributions to global warming; and proactively and adaptively managing farms in a way that buffers farm productivity and profitability against the negative effects of climate change. To make informed, evidence-based management decisions under new climate change regimes, data are needed from long-term agricultural experiments, few of which exist. As weather and climate patterns change, repeated measurements over decades can reveal what may be slow but incremental changes in crop yield and quality, as well as in soil quality and biodiversity. A long-term agricultural experiment, known as the Century Experiment, is underway at the Russell Ranch Sustainable Agriculture Facility (RRSAF), a unit of the Agricultural Sustainability Institute at UC Davis. RRSAF is a 285-acre research facility and working farm where, under realistic commercial-scale conditions, controlled long-term experiments are testing a variety of crop systems and management practices related to fertility and nutrient management, irrigation and water use, energy use, greenhouse gases and soil health.

The Century Experiment was designed as a 100-year replicated experiment. It was initiated in 1992, when environmental and soil conditions were monitored as a baseline prior to the installation in 1993 of 10 cropping systems across 72 one-acre plots; since then, one additional cropping system and restored native grassland reference plots have been introduced. Soil and plant samples are collected regularly and analyzed, and sub-sampled for archiving and future analysis. Energy use, inputs and outputs are monitored for all equipment and groundwater pumping throughout the year. The interior of each 1-acre plot in the Century Experiment is maintained consistently for collection of the long-term dataset. Microplots and strips within each plot are available for additional experimental investigations, which have included the impacts of different fertilizers or crop varieties, pest management practices, tillage practices and soil amendments. RRSAF research is also conducted in additional plots that are not part of the Century Experiment to focus on questions that explore practices that may ultimately be adopted within the main experiment. This research includes targeted investigations of soil amendments, irrigation frequency and type, and new crop varieties, and it permits side-by-side comparisons of the effect of management history on the effectiveness of different practices. UC and UC Agriculture and Natural Resources researchers and the RRSAF team collaborate regularly with local growers, as well as with researchers from other institutions throughout the United States and around the world, so that the research addresses local issues and also has broader relevance for agriculture in Mediterranean climates worldwide. Maintaining healthy soils is key to climate-smart agriculture. Properties such as porosity, water retention, drainage capacity, carbon sequestration, organic matter content and biodiversity all help to confer resilience to new pest and disease pressures and to extremes in temperature and water availability.

The California Department of Food and Agriculture's Healthy Soils Initiative, launched in 2016, reflects the state's commitment to improving the quality of managed soils. Encouraging best practices for maintaining healthy soils will increase biodiversity as well as beneficial physical and chemical properties of soil. Improving these properties will, in turn, confer resilience of agro-ecosystems to uncertainties in climate, including unpredictable rainfall patterns, new extremes in temperature and unexpected shifts in the distribution of pests and diseases. Intensive soil sampling is a key part of the Century Experiment. Plots are sampled at least once every 10 years to as deep as 3 meters in eight depth intervals, and a number of chemical and physical properties are measured. After 20 years, cropping systems, with few exceptions, either maintained or increased total soil carbon content to a depth of 2 meters. Soil carbon increased significantly more in the organic tomato-corn system than it did in any other crop and management system. Soil infiltration rates and aggregate stability were also greater in the organic than in the conventional tomato-corn system. This research also identified specific soil fractions where early changes in carbon sequestration can be detected, to help predict which practices promote increases or decreases in soil carbon. Changes in soil biology were evident as well: microbial biomass was 40% higher in soils in organic than in conventional tomato-corn rotations, and microbial community composition under organic and conventional management was distinctly different. More in-depth analyses of the soil biota, including sequencing of soil microbial communities and measuring the abundance of mycorrhizal fungi, are underway. Use of agricultural and food wastes, and cover crops, can reduce dependency on synthetic fertilizers that rely on fossil fuels and generate greenhouse gases in their synthesis. Also, use of soil organic amendments helps organic and conventional growers to "close the loop" by reducing the energy and environmental costs of waste disposal and recycling valuable nutrients back into the soil. At RRSAF, composted poultry manure and winter cover crops provide sufficient nitrogen and other nutrients to the organic tomato-corn rotation. Organic tomato yields for 20 years under furrow irrigation were not significantly different from conventional tomato yields. Soil amendments and winter cover crops have led to increased soil carbon sequestration, higher infiltration rates and greater aggregate stability in the organic system compared to the conventional systems; however, these benefits may be of limited interest to growers if yields are substantially reduced. A challenge is how to combine the use of organic inputs with subsurface drip irrigation (SSDI) for organic systems. Organic production relies on solid sources of fertility, for example, cover crops and compost, that cannot be delivered in the drip line and that rely on microbial activity to convert them into plant-available forms. In SSDI systems, only a limited area of the bed is wetted and microbial activity may be reduced. Researchers at RRSAF are investigating the feasibility of using different combinations and forms of solid and liquid organic amendments in organic tomato-corn rotations. This is particularly timely as interest in organic farming and products increases. In 2012, a long-term experiment was initiated with the soil amendment biochar, a form of charcoal made from pyrolysis of organic waste materials.
Application of biochar to tomato-corn rotations at 10 tons per hectare resulted in corn yields increasing in year 2 by approximately 8%, but no other yield effects were observed over 4 years.

Biochar had no impact, however, on soil water retention. These results underscore the importance of being able to draw conclusions based on long-term research, and the experiment continues to be monitored. Water quantity and quality are critical concerns for climate-smart agriculture in chronically drought-afflicted California. SSDI may increase crop yields, reduce weed pressure and improve water management in conventionally managed systems, but the trade-offs associated with other impacts of SSDI, such as changes to soil moisture patterns, reduced microbial activity, altered accumulation of salts and reduced groundwater recharge, have received little attention. At RRSAF, researchers are comparing the effects of furrow versus drip irrigation on crop yields, root growth, microbial communities and soil structure. Many changes, such as soil aggregate structure, are not evident immediately and require long-term experiments to understand and resolve. Irrigation scheduling is another focus of water management at RRSAF. Different methods and associated technologies have been compared for estimating irrigation needs, including methods based on evapotranspiration (ET), soil moisture sensors, plant water status and remote-sensing data. In tomatoes, an ET-based method was found to better predict crop water needs than soil sensor-based methods. Research projects at RRSAF have also addressed other aspects of climate-smart agriculture. These include development of farm equipment that reduces soil disturbance and energy consumption; application of sensor technology, in collaboration with NASA's Jet Propulsion Laboratory, to support data-driven management choices in response to climate variation; and comparison of the efficacy of smart water meters in groundwater wells and irrigation systems. Other investigations have measured the feasibility of using dairy and food waste bio-digestate to help offset consumption of fossil fuel-based fertilizers, tracked changes in wheat cellulose via isotopic methods to monitor plant responses to climate change, and measured lower greenhouse gas emissions under SSDI than under furrow irrigation. New varieties of climate-smart crops, such as perennial wheat, are being evaluated for their yield and resilience in California's Mediterranean climate. In its 20 years, the Century Experiment has demonstrated a unique value in generating climate-smart data — for example, which practices enhance carbon sequestration in California row crop soils, how irrigation can be managed to reduce greenhouse gas emissions, and which sensors help most in reducing water consumption. Future research will address how soil biodiversity, such as symbiotic mycorrhizal fungi, can be harnessed to reduce water and nutrient inputs and increase crop resilience. Researchers exploring the mechanisms driving short- and long-term responses to global change can guide the development of decision support models that incorporate economic, agronomic, ecological and social trade-offs and provide support for decision-makers — growers, policymakers, researchers — to make management decisions in the face of increasing climate uncertainty.

The American Agricultural Economics Association is composed of various groups, ranging from industry to government to academia, with widely divergent values and interests. This has led to controversy, sometimes healthy and other times destructive, on the appropriate mode for graduate training and methodologies of research.
These differences affect the direction and vitality of the profession and imply both benefits and costs in pursuing the solutions to various problems and issues. Pressures for day-to-day decision making in industry have led to reliance on methodologies that are often characterized as unacceptable for journal publication. Similarly, the timeliness of analyses in governmental policy-making processes sometimes does not lend itself well to publication in professional journals. In contrast, the research sophistication that has emerged in academic circles has reputedly widened the divergences among various groups within the AAEA. In this setting, a number of personalized views have been expressed.

Higher-value crops, including produce and cash crops, may also be more sensitive to weather.

We review the evidence on credit constraints to technology adoption, pulling from a range of rigorous non-experimental work and some theoretical work to characterize the constraint facing farmers. We then summarize findings from recent randomized evaluations in an effort to distill policy-relevant insights. Agricultural income streams are characterized by large cash inflows once or twice a year that do not align well with the specific times when farmers need access to capital to either make agricultural investments or, for example, pay school fees.

If there is limited access to credit in an area, farmers may not have cash on hand to make agricultural productivity investments unless they are able to save or can afford the potentially high interest rates of informal lending. However, saving can be difficult for farmers given their limited resources, the variety of demands on their money, and the seasonal cycle of production and prices of their agricultural production. Credit and saving products could help farmers make investments in inputs and other technologies by making cash available when needed. Yet many developing countries, and particularly rural areas, have limited access to formal financial services that could provide this liquidity. Credit constraints are reflected in farmers' self-reports and are associated with less use of productive inputs like high-yielding varieties. On the supply side, formal financial service providers are often unwilling or unable to serve smallholders. Few products suitable to agricultural livelihoods are available, and despite the wide proliferation of microfinance institutions, most are limited to non-agricultural activities, given the substantial challenges inherent in long-cycle agricultural lending. Lenders in these contexts charge high interest rates to help offset their assessment of the risk that loans will not be repaid. These higher interest rates can, perversely, have the effect of attracting only borrowers with no intention of repaying, thus driving interest rates even higher as lenders seek to offset increased risk, further reducing access to credit for the small-scale farmer. Group-liability microfinance models, though popular in urban markets for reaching low-income borrowers through social guarantees, may be ill-suited to serve smallholders in contexts where the dominant risks driving default, such as weather and price shocks, are common among members of the localized group. Group members will be unable to insure other members who cannot pay off a loan if, for example, everyone's harvest is devastated by the same local flood or pest. On the demand side, demand from farmers for formal credit products is low.

Even where formal financial products are available, farmers may opt to borrow money from within their social networks or from informal lenders. Preliminary findings from the rollout of Kshetriya Grameen Financial Services, a microfinance portfolio in rural Tamil Nadu, show that 72% of farmers' loans at the beginning of the season are from formal sources, but only 35% are from formal sources by the end of the season. Farmers seem to shift to informal borrowing because quick loan approvals and more flexible loan terms are available. This use of informal borrowing is particularly prevalent among marginalized farmers: 82% of the agricultural loans taken out by marginal farmers were from informal sources, compared to 46% among medium-landholding farmers. Even where formal financial services are available, they are often highly disadvantageous to smallholder farmers. Farmers' credit needs differ from those of the urban micro-credit customers for whom common micro-credit products, with their weekly repayments and group liability, are designed. Most loan offers and repayment schedules are poorly timed to fit seasonal production cycles and price fluctuations. Uncertainty or risk aversion can also make farmers hesitant to take on debt. Profits in farming are uncertain, and are often low without complementary investments. Options for collateral to back a loan are limited in these environments, and assets like land may be too fundamental to basic livelihood to risk in order to access a line of credit, or unacceptable for backing a loan in insecure contracting environments. Accessing and using financial products can also be even more difficult for farmers without high levels of financial literacy.

These credit market inefficiencies result in limited access to liquid capital from formal financial services. There is policy appetite to leverage new technologies and approaches to expand formal credit and savings mechanisms to rural households, particularly given the proliferation of micro-credit in urban markets. But even where micro-credit has expanded widely among low-income urban clientele, evidence from randomized impact evaluations shows limited ability for micro-credit to transform the average entrepreneur's business productivity and revenues; instead it provides value through increased flexibility in how households "make money, consume, and invest". In the smallholder context, we focus specifically on whether expanding access to formal credit on the margin of what is already available shows potential to unlock productive, profitable investments that improve rural livelihoods. Where expanding access to credit shows potential, studies investigate product designs aiming to increase credit access and their benefits specifically for smallholder farmers.

Although the experimental evidence suggests that an injection of credit alone is unlikely to be sufficient to transform smallholders' livelihoods, there is some encouraging evidence from approaches with careful product design. Financial service design innovation, particularly to encourage storage or savings, can generate more supportive services for farmers that help them make investments or manage their volatile livelihoods. There is policy appetite to identify whether digital financial services will be able to connect rural borrowers to lending institutions and encourage financial behavior conducive to agricultural investment.
More research is needed on these digital financial service channels and product designs to understand their potential to support farmers' financial portfolios in a manner that protects farmers while encouraging profitable investments. Research is also needed to develop and test credit product designs and delivery channels that fit smallholders' needs with respect to the timing of offers, repayment structures, and collateral agreements.

Smallholder farmers have limited buffer stocks to cope with volatile food prices and climate uncertainty, and typically have few formal financial services to protect them from risk. The systemic risks of agricultural production jeopardize smallholder farmers' ability to recoup their investments at harvest. Risk exposure therefore plays an important role in farmers' agricultural investment decisions, including the use of productive inputs like fertilizer.

Rural communities have developed many informal mechanisms to cope with risk. For example, households may buy or sell assets in response to fluctuations in income, and communities may temporarily assist households experiencing a negative shock, like an unexpected medical expense, with the expectation that the household will do the same for others in the future. While these strategies are useful, in many cases they are insufficient. Farmers face many sources of uncertainty beyond weather and environmental factors, including natural disasters, pests, and disease. Price risk and relationships with output markets can jeopardize farmers' ability to recoup their investments at harvest, and such risks can depress productive input use. In addition to the risks inherent in the agricultural production status quo, new technologies often bear specific risks, such as uncertainty about how to use the technology correctly and how to market the output. The classic economic view of poor farmers is that their lack of savings and other resources to fall back on causes them to prefer agricultural approaches with more reliable, but lower, average returns. Households often diversify their sources of income to spread risk. Farmers may see the adoption of new technologies as risky, especially early in the adoption process when proper use and average yields are not well understood. Technologies that carry even a small risk of a loss may not be worth large expected gains if risks cannot be offset. So, while investments exist that could increase profitability, these may also increase the risks of farming. Behavioral biases also come into play around risky decisions. Risk-averse farmers may prefer a more certain, but possibly lower, expected payoff over an uncertain payoff from unfamiliar technologies. Ambiguity aversion can lead farmers to stick to their status quo, preferring known risks with a familiar probability of gains and losses over unknown risks, even in cases where the unfamiliar option may actually be less risky. Both risk and ambiguity aversion are important considerations when looking to encourage take-up of novel risk-mitigating financial products or technologies. Evidence exists that rural households are able to mitigate idiosyncratic risk, but rural residents are relatively unprotected against aggregate risks, such as weather and crop price shocks, that are common to smallholder rain-fed agriculture in poorly integrated markets. Given that extreme weather events can destroy a large portion of harvest across a region, and that such events are increasingly likely given global trends including climate change, there is a need for effective risk-mitigation strategies to protect farmers from these aggregate risks.

“Linking credit with insurance has mixed results, suffering from the same demand problems that have beset standalone index insurance. The offering of indemnified loans that interlink an insurance product with credit appears promising, but demand for such loans has been shown to be surprisingly low in the few trials that have tested this mechanism”. Linking credit with insurance has even been shown to drive down credit demand. Recent research has found that companies that engage in contract farming can be well-positioned to adjust the timing of insurance and payment arrangements to increase take-up.
Casaburi and Willis find that when a large private company engaged in contract farming in Kenya offered to provide insurance to sugar cane producers by deducting premiums from farmer revenues at harvest time, take-up rates at actuarially fair prices were 71.6%, 67 percentage points higher than for an equivalent contract with standard payment timing.

Juvenile salmon did not exhibit a habitat preference among these three choices

When cages were used, salmon were PIT-tagged to track individual fish growth rates within a specific habitat. We consistently found that growth rates for salmon in cages placed in flooded rice fields were higher than growth rates for juvenile Chinook Salmon of comparable life stage in any of the adjacent riverine habitats and in other regions. Growth rates were also comparatively high when free-swimming salmon were introduced into larger-scale, 0.8-ha flooded agricultural fields. These studies were more representative than those using cages of how migrating salmon might use these habitats under natural flow events. For the multiple years in which free-swimming salmon were used, they averaged a mean daily growth rate of 0.98 mm/d. Throughout all study years, caged salmon and free-swimming salmon showed very similar growth rates within the same experimental study units, despite the fact that they likely experienced different micro-habitat conditions. This observation suggests that our salmon growth results were not influenced by cage effects, a well-known issue in enclosure studies. To better understand managed floodplain processes across the region, in 2015, salmon were introduced in fields at a variety of locations in the Central Valley with various vegetative substrates: the Sutter Bypass, three locations on the Yolo Bypass, and Dos Rios Ranch at the confluence of the Tuolumne and San Joaquin rivers. At all of the locations, juvenile Chinook Salmon grew at rates similar to those observed in experiments conducted at Knaggs Ranch in the Yolo Bypass during previous study years.
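To make the growth-rate arithmetic concrete, the minimal sketch below computes a mean apparent daily growth rate from individual PIT-tag records; the tag IDs, lengths, and durations are hypothetical placeholders rather than data from these studies.

```python
# Sketch: mean apparent daily growth rate (mm/d) from hypothetical PIT-tag records.
# Each record holds fork length (mm) at tagging and at recapture, plus days elapsed.
records = [
    {"tag_id": "3D9.001", "len_start_mm": 38.0, "len_end_mm": 68.5, "days": 31},
    {"tag_id": "3D9.002", "len_start_mm": 41.2, "len_end_mm": 70.1, "days": 29},
    {"tag_id": "3D9.003", "len_start_mm": 36.5, "len_end_mm": 66.0, "days": 30},
]

rates = [(r["len_end_mm"] - r["len_start_mm"]) / r["days"] for r in records]
mean_rate = sum(rates) / len(rates)
print(f"Mean apparent daily growth rate: {mean_rate:.2f} mm/d")
```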

These results suggest that multiple geographical regions and substrate types can support high growth rates of juvenile Chinook Salmon. A key objective of our work in flooded fields was to determine whether substrate type has a measurable influence on growth and survival of juvenile Chinook Salmon. Substrate and vegetation can be an important micro-habitat feature for young Chinook Salmon, so we posited that there could potentially be some difference in salmon performance across treatments. In 2013, we examined this question across different substrate types in two ways: telemetry studies using PIT tags, and replicated fields. Both approaches indicated that juvenile salmon did not have a clear preference for different substrates, and grew at similar rates across substrates.

We monitored the movements and use of PIT-tagged, hatchery-origin juvenile Chinook Salmon for approximately 1 month over fallowed, disced, and rice stubble substrates in two circular enclosures to determine if there was any preferential use. One enclosure included all three substrates, and one contained only disced substrate. Although growth rates were slightly higher in the enclosure that contained all three substrate types, juvenile salmon grew at very high rates, averaging 1.1 mm/d regardless of enclosure. These growth rates were higher than other published growth rates for juvenile Chinook Salmon in the Yolo Bypass, and the region generally.

Throughout the 2012–2016 study period, we consistently observed that juvenile Chinook Salmon were attracted to sources of inflow, and that this sometimes became the dominant factor in the distribution of salmon on experimental fields or in enclosures.

In the previously described PIT-tag observations in 2013, salmon in both enclosures positioned themselves nearest the inflow, regardless of surrounding habitat structure. This result is not surprising, given that juvenile stream salmonids commonly adopt and defend flow-oriented positions in stream environments for acquisition of drifting food resources. On flooded agricultural fields, this orientation toward flow may not only be related to feeding behavior but may also serve to keep juvenile salmon in habitat areas that are hydrologically connected and have higher velocities. In fact, analyses of the environmental factors that predict movement of large groups of tagged juvenile Chinook Salmon in the Yolo Bypass found that drainage of flooded areas was a reliable predictor of fish emigration at downstream trapping stations. Although juvenile Chinook Salmon growth rates were consistently high across substrates and study years, we observed highly variable survival of salmon, and available evidence from the studies suggests that this was related, at least in part, to differences among years in drainage rates of the study fields and in habitat availability on the floodplain at large. For example, survival in 2013 ranged from 0.0% to 29.3% in the replicated fields containing different agricultural substrates. This variability was likely unrelated to substrate type; instead, these low survival rates were most likely a result of very dry conditions across the Yolo Bypass and the Central Valley generally, when record drought conditions prevailed during 2012–2015, affecting water quantity and quality. In 2013, our replicated field study likely presented one of the only wetted floodplain areas for miles around, and thus presented a prime feeding opportunity for avian predators such as cormorants, herons, and egrets. However, when the same set of fields was used in 2016, survival was much higher. This was generally a wetter period, avian predation pressure was reduced, and we more efficiently opened the flash boards to facilitate faster drainage and fish emigration.

Note, however, that there were some differences in methodology among years, which may have contributed to survival variability. Taken together, these observations of free-swimming salmon survival suggest that field drainage rate, and overall floodplain habitat availability, are important factors for improving survival in managed agricultural floodplain habitats. Our observations of juvenile salmon orientation to flow, and of the importance of efficient drainage for survival, reinforce observations from natural floodplains that connectivity between perennial channel habitat and seasonal floodplain habitat is an essential attribute of river-floodplain systems. Connectivity of managed floodplain habitats to unmanaged habitats in the river and floodplain is therefore a foundational condition needed to allow volitional migration of juvenile salmon. Further research is needed to identify how to provide sufficient connectivity to maximize rearing and migration opportunities for wild Chinook Salmon.

Natural and managed floodplain habitat is subject to a variety of flow and environmental conditions. Variation in flow and temperature dictates when and where managed agricultural habitats may be accessible and suitable for rearing salmonids, with challenges during both wet and dry years, as well as during warm periods. As noted previously, survival in the replicated fields was variable but generally low. We associate these results with the effects of the extreme drought conditions that prevailed during the core of our study from 2012 through 2015. Although our field studies were conducted during a time of year when wild salmon have historically used the Yolo Bypass floodplain, the extreme drought made for warm water temperatures and resulted in our study site being one of the few inundated wetland locations in the region. As such, avian predators were attracted to the experimental fields, exacerbating salmon mortality during drainage. We observed high concentrations of cormorants, herons, and egrets on the experimental fields, and this concentration increased over the study period. As many as 51 wading birds and 23 cormorants were noted during a single survey. The small scale of our project could have further exacerbated predation issues. This situation highlights the importance of the weather-dependent, regional context of environmental conditions, which governs how and when managed floodplains can be beneficial rearing habitats for juvenile salmon. Under certain circumstances, flooded fields can generate high salmon growth, but in other scenarios these habitats can provide poor environmental conditions for salmonids and/or become predation hot spots. Even during wetter conditions, we found that management of agricultural floodplain habitat was challenging. For example, we had hoped to test the idea of using rice field infrastructure to extend the duration of Yolo Bypass inundation events in an attempt to approximate the longer-duration events of more natural floodplains; that is, through flood extension. As noted by Takata et al., use of the Yolo Bypass by wild Chinook Salmon is strongly tied to hydrology, and salmon quickly leave river-inundated floodplains once drainage begins. We therefore reasoned that flooded rice fields might provide an opportunity to extend the duration of flooding beyond the typical Yolo Bypass hydrograph. In 2015, a flood extension study was planned but not conducted because drought conditions precluded Sacramento River inflow via Fremont Weir.
To test the flood extension concept in 2016, we needed substantial landowner cooperation and assistance to install draining structures that allowed maintenance of local flooding after high flow events. Even then, we found it difficult to maintain water levels and field integrity during the tests. In our case, we were fortunate to have the cooperation of willing landowners. Partnership with landowners was key, and would be critical with any future efforts to test the concept of flood extension. We also planned a similar test in 2017, but high and long-duration flood flows prevented the study from occurring.

Over the 6 years of study, except perhaps for 2013 when we focused on other study priorities, we never experienced ideal conditions to adequately test the flood extension concept. We were either in a severe drought, during which the Yolo Bypass did not flood from the river, or we experienced severe and sustained flooding, which made it impossible to contain flood waters within study fields. Based on these experiences, studying the concept of flood extension appears to depend on the occurrence of moderate flood events at the right time of year, provided fields are appropriately designed to hold water and allow efficient immigration and emigration of potentially large numbers of juvenile salmon. However, significant outreach and communication are necessary with landowners to maintain floodwaters on their fields during the natural drainage period. Because these events cannot be predicted well ahead of time, these communications—and the availability of robust infrastructure—need to be constantly maintained even outside the flood extension period. As suggested in the previous section, such potential actions would need to be taken in a way that maintains hydrologic connectivity and salmon access, so that salmon can successfully locate potential managed habitats, use them for rearing, and then successfully emigrate from them at the appropriate time. Timing of such potential manipulations is critical because previous sampling has shown that salmon quickly emigrate from the floodplain during large-scale drainage events, leaving relatively low densities of salmon in remaining ponded areas to potentially benefit from flood extension. Although our use of hatchery salmon gave us more experimental options during drought conditions, the use of these fish resulted in additional challenges. Our approach relied on a non-traditional use of hatchery salmon, which required a suite of permits and approvals to execute the project. As noted above, the project coincided with a major drought, so access to hatchery salmon was limited as a result of low salmon population levels. In addition, use of hatchery salmon affected the time period when we could conduct experimental work. We were unable to test salmon response to early-season flooding, because the hatchery salmon were too small to receive coded-wire tags (CWT) as required under our permit conditions. Similarly, the timing of our work was affected by the availability of holding tanks at our partner hatchery, and by the availability of transport staff and vehicles to move salmon to our study site. While we were able to assess many important biological metrics in our study, direct measurement of the population-level effect of floodplain rearing on agricultural habitats proved elusive. A traditional approach to addressing this question involves inserting CWTs into very large numbers of experimental salmon and estimating the population response from expanded CWT recaptures in the ocean fisheries. Recoveries of CWTs in adult salmon from experimental releases made in the Yolo Bypass have generally been very low, making it difficult to get a high level of resolution with which to reliably compare survival rates, including with values in the literature. Although CWT recoveries could potentially be improved by increasing the number of tagged salmon, the effort required even to collect a single data point would be substantial and is limited by the availability of surplus hatchery salmon.
A related issue is that it is difficult to design a survival experiment that provides a useful comparison to other management strategies or migration corridors. For example, it is challenging to assess the incremental survival value of flooded agricultural habitat versus adjacent perennial channels. Telemetry can partially address this issue, but current acoustic tagging technology does not allow estimates of survival once smolts emigrate from the estuary, and it is also limited by the size of salmon that can be tagged.

The estimated impact of warming remains robust across all possible combinations

A decline in some commodity markets and a shift in federal crop subsidy programs in the mid-1980s affected different growing regions in different ways. Under these circumstances it would not be surprising if the coefficients on the climate variables varied somewhat over time. In fact, however, they are very robust. Pair-wise Chow tests between the pooled model and the four individual census years in Table 3 reveal that the five climatic variables are not significantly different at the 10 percent level in any of the ten tests. Although we have excluded western counties because their agriculture is dependent on irrigation, what about irrigated areas east of the 100th meridian? To test whether these are affecting our results, we repeat the estimation excluding counties where more than 5% of farmland area is irrigated, and where more than 15% of the harvested cropland is irrigated. We also examine further the influence of population, excluding counties with a population density above 400 people per square mile or a population total above 200,000. The exclusion of the three sets of counties leaves the coefficient estimates virtually unchanged, and the lowest of the three p-values for the test of whether the five climate variables have the same coefficients is 0.85. It is not surprising that excluding irrigated counties east of the 100th meridian has little effect on our regression results, since very few are highly irrigated, and all receive a substantial amount of natural rainfall. Under these circumstances, irrigation is a much smaller supplement to local precipitation, small enough to have little effect on regression results.
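As a minimal sketch of the coefficient-stability testing described above, the function below implements a textbook Chow test of whether two subsamples (for example, two census years) share a single coefficient vector. The actual tests in the paper restrict only the five climate coefficients and account for spatial correlation, which this simplified version does not.

```python
import numpy as np
from scipy import stats

def rss(X, y):
    """Residual sum of squares from an OLS fit (X should already include an intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def chow_test(X1, y1, X2, y2):
    """Chow test of whether two subsamples share one coefficient vector."""
    k = X1.shape[1]
    n1, n2 = len(y1), len(y2)
    rss_pooled = rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    rss_split = rss(X1, y1) + rss(X2, y2)
    F = ((rss_pooled - rss_split) / k) / (rss_split / (n1 + n2 - 2 * k))
    p_value = stats.f.sf(F, k, n1 + n2 - 2 * k)
    return F, p_value

# Usage sketch: chow_test(X_year1, y_year1, X_year2, y_year2), with log farmland
# values as y and the hedonic regressors (including an intercept column) as X.
```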

By contrast, the p-value for the test of whether the five climate coefficients are the same in counties west of the 100th meridian is 10⁻¹¹. Including western counties that depend crucially on large-scale irrigation significantly alters the equation. To test whether the time period over which the climate variables are calculated makes any difference, we replicate the analysis using 10- and 50-year averages as alternatives to the 30-year histories on which the estimates reported in Table 3 are based. Neither of the alternatives yields climate coefficients significantly different from the pooled regression results based on the 30-year histories. These tests suggest that our model is stable for various census years, data subsets, and climate histories. Nevertheless, one might wonder whether there could be problems with outliers or an incorrect parametrization. We briefly address these concerns. In a test of the robustness of our results to outliers, the analysis is replicated using median regression, where the sum of absolute errors is minimized both in the first-stage derivation of the parameter of spatial correlation and in the second-stage estimation of the coefficients. Again, the climatic variables remain robust and are not significantly different. To test the influence of our covariates on the results we follow the idea of Leamer's extreme bound analysis and take permutations of our model by including or excluding each of 14 variables, for a total of 16,384 regressions. No sign switches are observed in any of the five climatic variables, again suggesting that our results are very stable. Further, the peak level of degree days is limited to a relatively narrow range. We check sensitivity to the assumed length of the growing season by allowing the season to begin in either March, April, or May and end in either August, September, or October. Finally, in order to examine whether the quadratic specification for degree days in our model is unduly restrictive, we estimate a penalized regression spline for degree days 8°C–32°C and find that the quadratic approximation is consistent with the data.
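The permutation exercise in the spirit of Leamer's extreme bound analysis can be sketched as follows: refit the model for every subset of the control covariates and check whether any climate coefficient ever switches sign. The data here are synthetic stand-ins, and the demonstration uses three controls (8 regressions) rather than the 14 controls (16,384 regressions) and spatial-correlation correction used in the actual analysis.

```python
from itertools import combinations
import numpy as np

def climate_signs_stable(X_climate, X_controls, y):
    """Refit OLS for every subset of control covariates and report whether the
    signs of the climate coefficients ever switch (extreme-bound-style check)."""
    n, k_clim = X_climate.shape
    n_ctrl = X_controls.shape[1]
    baseline_signs = None
    for r in range(n_ctrl + 1):
        for subset in combinations(range(n_ctrl), r):
            X = np.column_stack([np.ones(n), X_climate, X_controls[:, list(subset)]])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            signs = np.sign(beta[1:1 + k_clim])
            if baseline_signs is None:
                baseline_signs = signs
            elif np.any(signs != baseline_signs):
                return False
    return True

# Tiny synthetic demonstration with 3 controls (2^3 = 8 regressions).
rng = np.random.default_rng(0)
Xc = rng.normal(size=(300, 5))    # stand-ins for the five climate variables
Xz = rng.normal(size=(300, 3))    # stand-ins for control covariates
y = Xc @ np.array([0.5, -0.3, 0.4, 0.6, -0.3]) + rng.normal(size=300)
print(climate_signs_stable(Xc, Xz, y))   # True when no sign switches occur across the fits
```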

Before turning to the determination of the potential impacts of global warming on the agricultural sector of the U.S. economy, as measured by predicted changes in farmland values, we briefly consider whether farmers' expectations have changed over the period covered by our study, and whether this may affect our estimates. In the previous section we regressed farmland values on past climate averages, even though farmland values are determined using forward-looking expectations about future climate. The weather in the U.S. over the past century was viewed as a random drawing from what until recently was thought to be a stationary climate distribution. Our own data are consistent with this: the correlation coefficients between the 30-year average in 1968-1997 and the two previous 30-year averages of the century, i.e., for 1908-1937 and 1938-1967, are 0.998 and 0.996 for degree days (8°C–32°C), 0.91 and 0.88 for degree days (above 34°C), and 0.93 and 0.93 for the precipitation variable. Accordingly, when we use the error terms from our regression and regress them on past values of the three climate averages, none of the coefficients is statistically significant. The same result holds if we move to the shorter 10-year climate averages. This suggests that past climate variables are not a predictor of farmland values once we condition on current climate. As pointed out above, consecutive census years give comparable estimates of the climate coefficients in our hedonic equation, and none of them are significantly different. Similarly, we check whether the aggregate climate impacts for the four emission scenarios in Table 5 change if we use the 1982 census instead of the pooled model. Even though the standard deviations are fairly narrow, t-tests reveal that none of the eight mean impact estimates are significantly different. We conclude that our results are not affected by any significant change in expectations over the study period.
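A minimal sketch of the expectations check described above is to regress the hedonic residuals on lagged climate averages and inspect the coefficient p-values; the arrays below are random placeholders standing in for the actual residuals and prior 30-year (or 10-year) climate averages.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
residuals = rng.normal(size=n)          # placeholder for hedonic-regression residuals
past_climate = rng.normal(size=(n, 3))  # placeholder lagged degree-day and precipitation averages

check = sm.OLS(residuals, sm.add_constant(past_climate)).fit()
print(check.pvalues)  # insignificant slopes are consistent with stable climate expectations
```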

In the calculations which follow we use the regression coefficients from the semi-log model, which we have shown to be both plausible and robust, along with predictions from a general circulation model, to evaluate the impacts of climate change. The climate model we use for this analysis is the most recent version of the UK Met Office Hadley Model, HadCM3, recently prepared for use in the next IPCC Assessment Report. Specifically, we use the model's predicted changes in minimum and maximum average monthly temperatures and precipitation for four standard emissions scenarios identified in the IPCC Special Report on Emissions Scenarios. The chosen scenarios span the range from the slowest increase in greenhouse gas concentrations, which would imply a little less than a doubling of the pre-industrial level by the end of the century, to the fastest, associated with between a tripling and a quadrupling, and include two intermediate scenarios. We use the 1960-1989 climate history as the baseline and calculate average predicted degree days and precipitation for the years 2020-2049 and 2070-2099. The former captures impacts in the near to medium term, while the latter predicts impacts over the longer term, all the way to the end of the century, the usual benchmark in recent analyses of the nature and impacts of climate change.

Predicted changes in the climatic variables are given in Table 4. Impacts of these changes on farmland values are presented in Table 5 for both the 2020-2049 and 2070-2099 climate averages under all four emissions scenarios. Not surprisingly, results for the near-term 2020-2049 climate averages are similar under all four scenarios. The relative impact ranges from a 10% to 25% decline in farmland value, which translates into an area-weighted aggregate impact of -$3.1 billion to -$7.2 billion on an annual basis. Although the aggregate impact is perhaps not dramatic, there are large regional differences. Northern counties, which currently experience cold climates, benefit by as much as 34% from the predicted warming, while others in the hotter southern states face declines in farmland value as high as 69%. Similarly, average relative impacts are comparable across scenarios for the individual variables degree days (8°C–32°C) and degree days (above 34°C), but again there are large regional differences. The effect of an increase in the latter variable is always negative, because increases in temperature above 34°C are always harmful, while the effect of the former variable depends on whether a county currently experiences growing conditions above or below the optimal number of degree days in the 8°C–32°C range. The impact estimates for the longer-term 2070-2099 climate average become much more uncertain as the range of predicted greenhouse gas emission scenarios widens. Predicted emissions over the course of the century are largely driven by assumptions about technological change, population growth, and economic development, and compounding over time leads to increasingly divergent predictions. The distribution of impacts now ranges from an average decline of 27% under the B1 scenario to 69% under the A1FI scenario. At the same time, the sharp regional differences observed already in the near to medium term persist, and indeed increase: northern counties generally benefit, while southern counties generally suffer.
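Because the hedonic model is semi-logarithmic, a county's predicted relative impact follows from exponentiating the change in its climate terms. The sketch below illustrates that calculation with invented coefficients and climate values; they are placeholders, not the estimates from Table 3 or the HadCM3 scenario changes.

```python
import numpy as np

def relative_impact(beta, x_baseline, x_future):
    """Semi-log hedonic model, ln(V) = beta . x + other terms, so the predicted
    relative change in farmland value is exp(beta . (x_future - x_baseline)) - 1."""
    dx = np.asarray(x_future, dtype=float) - np.asarray(x_baseline, dtype=float)
    return float(np.exp(np.asarray(beta, dtype=float) @ dx) - 1.0)

# Placeholder climate vector: [degree days 8-32C, its square, degree days >34C,
# precipitation, precipitation squared]; coefficients are illustrative only.
beta = np.array([8e-4, -2e-7, -5e-3, 0.15, -0.004])
baseline = np.array([2200.0, 2200.0**2, 10.0, 18.0, 18.0**2])
future = np.array([2600.0, 2600.0**2, 35.0, 17.0, 17.0**2])
print(f"Predicted change in farmland value: {relative_impact(beta, baseline, future):+.1%}")
```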

An exception is found in Appalachia, characterized by a colder climate than other counties at a similar latitude. Regional differences widen as counties with a very cold climate can benefit from continued warming: the maximum positive relative impact now ranges from 29% to 52%. However, the total number of counties with significant gains decreases in most scenarios. For the 2020-2049 time span, 446, 126, 269, and 167 counties, respectively, show statistically significant gains at the 95% level for the scenarios given in Table 4. These numbers change to 244, 202, 4, and 26 for the 2070-2099 time span. By the same token, the number of counties with statistically significant losses increases from 1291, 1748, 1762, and 1873 for the 2020-2049 time span to 1805, 1803, 2234, and 2236 for the 2070-2099 time span. The regional distribution of impacts is shown in Figure 1 for counties with significant gains and losses under the intermediate B2 scenario. We also compare the predicted changes with those of another general circulation model, the DOE/NCAR Parallel Climate Model (PCM), which we use as an alternative because it is considered a low-sensitivity model, as opposed to the mid-sensitivity HadCM3; for a given CO2 scenario the temperature changes are lower under the PCM than under the Hadley model. We replicated the impact analysis using the PCM climate forecasts in the appendix, available on request. Not surprisingly, the predicted area-weighted aggregate damages are lower. However, the regional pattern remains the same: of the 73% of counties that have statistically significant declines in farmland values under all four Hadley scenarios by the end of the century, 73% still have significant losses under the PCM A1FI model and 0.7% switch to having significant gains. The magnitude of temperature changes simply shifts the border between gainers and losers. Some of the predicted potential losses, in particular for the high-emissions scenario in the later period toward the end of the century, are quite large. However, average temperature increases of 7°C would lead to the desertification of large parts of the South. A way of interpreting the results that places them in the context of other studies, and also highlights the role for policy, is that if emissions are fairly stringently controlled over the course of the coming century, as in B1, such that atmospheric concentrations of greenhouse gases remain a little below double the pre-industrial level, predicted losses to agriculture, though not trivial, are within the range of the historically wide cyclical variations in this sector. If on the other hand concentrations climb beyond three times the pre-industrial level, as in A1FI, losses go well beyond this range. This suggests a meaningful role for policy involving energy sources and technologies, since choices among feasible options can make a major difference. A complete impact analysis of climate change on U.S. agriculture would require a separate analysis for counties west of the 100th meridian. Based on the information presently available, we do not believe the impact will be favorable. A recently published study downscales the HadCM3 and PCM predictions to California and finds that, by the end of the century, average winter temperatures in California are projected to rise statewide by about 2.2°C under the B1 scenario and 3.0–4.0°C under the A1FI scenario. Summer temperatures are projected to rise even more sharply, by about 2.2–4.6°C under the B1 scenario and 4.1–8.3°C under the A1FI scenario.
Winter precipitation, which accounts for most of California's water supply, either stays about the same or decreases by 15–30% before the end of the century.

Planned land use projections are categorized in terms of planned land use designation

Areas showing “no benefit” or that were not included in statewide calculations were not included in mapping analyses. These areas of no benefit are likely due to a combination of factors, including soil properties such as high-clay or high-sand soils and organic-rich soils. To provide a more accurate representation of agricultural lands in San Diego's unincorporated county, this study combines two agricultural data sources: the Farmland Mapping and Monitoring Program (FMMP) and agriculture listed in SANDAG's current land use data. FMMP aims to show the relationship between the quality of soils for agricultural production and the land's use for agricultural, urban, and other purposes. Agricultural land is ranked as “unique”, “prime”, “grazing lands”, and “important” locally and/or statewide, using soil quality as a metric for quality and irrigation status as a metric for status of use. While FMMP helps identify the quality and location of the region's designated farmland, it is important to consider that FMMP may underrepresent the total agricultural land that exists in San Diego. Furthermore, the lands that are not represented and/or classified by FMMP are important features of the region, and are thus important to include in the analysis. Current land use maps from the county's data portal use “extensive” and “intensive” categories to illustrate current agricultural land. Extensive and intensive lands are combined with FMMP lands to provide a more accurate and complete picture of existing agriculture. Combined, these lands represent the agricultural lands study area used throughout this report. Several conservation programs exist in efforts to preserve San Diego's agricultural lands.

Easements and formal protection of land listed in the California Conservation Easement Database (CCED) and the California Protected Areas Database (CPAD) are included in analyses to determine which parts of the agricultural lands study area are currently protected. Planned land use layers from the SANDAG Regional GIS Data Warehouse are created for the Regional Growth Forecast, outlining projected growth for the San Diego region to suitable areas. Non-agricultural land use types are separated into “urban”, including commercial, industrial, and/or urban designations, and non-urban, including water and open space/parks. Land use data planned for urban use by 2050 are overlaid with existing agricultural lands to identify agricultural lands threatened by future urban development.

The county-averaged difference from the baseline scenario is -0.25 inches per year (in/yr) for CWD and 0.25 in/yr for AET. This indicates that soil management adding 3% SOM can yield a -0.25 in/yr change in CWD from the baseline average of 15 in/yr and a 0.25 in/yr change in AET from the baseline average of 10 in/yr. Additionally, soil moisture shows an average change of 1.69 from the baseline average of 9.1. The entire San Diego region shows a total hydrologic benefit area of 590,582 acres, with 14% of the benefit area within the incorporated county and 86% within the unincorporated county. These results indicate that many areas in San Diego have the potential to experience increased forage production, reduced landscape stress and irrigation demand, and increased hydrologic resilience to climate change. Analyses show that CWD and AET have the most significant changes under a +3% SOM management scenario, and thus the hydrologic benefit index is heavily reflective of these two variables. Of the total agricultural lands study area, 238,457 acres of agricultural lands fall within an area of hydrologic benefit.
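A minimal sketch of how a cell-by-cell hydrologic benefit flag of this kind can be derived from gridded model output, assuming benefit is defined as a decrease in CWD together with an increase in AET under the +3% SOM scenario; the arrays, cell size, and decision rule are placeholders rather than the study's actual BCM data or index definition.

```python
import numpy as np

ACRES_PER_CELL = 18.0   # placeholder: roughly the area of a 270 m grid cell

# Hypothetical baseline and +3% SOM scenario grids (in/yr).
rng = np.random.default_rng(3)
cwd_base = rng.uniform(10.0, 20.0, size=(500, 500))
aet_base = rng.uniform(5.0, 15.0, size=(500, 500))
cwd_som = cwd_base - rng.normal(0.25, 0.10, size=(500, 500))
aet_som = aet_base + rng.normal(0.25, 0.10, size=(500, 500))

# Flag cells where CWD decreases and AET increases relative to baseline.
benefit = (cwd_som < cwd_base) & (aet_som > aet_base)
print(f"Hydrologic benefit area: {benefit.sum() * ACRES_PER_CELL:,.0f} acres "
      f"({100 * benefit.mean():.1f}% of cells)")
```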

A total of 223,383 acres of FMMP lands coincide with areas of hydrologic benefit from increased SOM, representing 66% of total FMMP lands. Notably, the FMMP “Farmland of Statewide Importance” and “Farmland of Local Importance” land classes show a total benefit area of 102,549 acres. Planned city land use projections show further increases in urban extents by 2050, with 45% of non-agricultural lands planned for commercial, industrial, or other urban land uses. As these urban areas expand, agricultural lands are increasingly at risk of conversion. Figure 17 illustrates the total area of current agricultural land that could be lost by 2050 based on planned urban development. These losses can be quantified in terms of the potential hydrologic benefit estimated for soil management on these agricultural lands, which will be lost if they are converted to urban use. The lost potential hydrologic benefit associated with soil management on current agricultural lands spans a total area of 144,804 acres, representing a 65% loss of the total potential hydrologic benefit on current agricultural lands. Within the total area of lands at risk of urban development, 13% are listed as protected under CCED and/or CPAD.

San Diego's agricultural community is especially sensitive to the impacts of a changing hydroclimate, making water resources a main area of focus for climate resiliency in the region. The region's agricultural lands, and the multifaceted benefits they provide, are utilized across society. Thus, as the county continues to expand its efforts in climate mitigation, partnering with agricultural stakeholders presents a key opportunity to ensure a resilient region. With model agreement on increased precipitation variability, and resulting changes in water availability, quality, and quantity over time, it is critical that a spectrum of strategies be implemented that can buffer the region's changing water resources. In a state faced with distinct water-resource challenges, there is an increased need for planning and management decisions made at the local and regional level.
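The overlay of planned 2050 urban land use with the agricultural lands study area can be sketched in geopandas roughly as follows; the file paths, the 'category' column, and the equal-area projection (EPSG:5070) are assumptions for illustration, not the project's actual data schema.

```python
import geopandas as gpd

SQM_PER_ACRE = 4046.86

# Hypothetical layer paths; both layers are reprojected to an equal-area CRS
# so that area calculations in acres are meaningful.
ag = gpd.read_file("agricultural_lands_study_area.shp").to_crs(epsg=5070)
planned = gpd.read_file("sandag_planned_landuse_2050.shp").to_crs(epsg=5070)

urban = planned[planned["category"] == "urban"]
at_risk = gpd.overlay(ag, urban, how="intersection")

acres_at_risk = at_risk.geometry.area.sum() / SQM_PER_ACRE
print(f"Agricultural land planned for urban use by 2050: {acres_at_risk:,.0f} acres")
```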

While coarse-spatial-resolution model projections of temperature and precipitation trends provide much of the available information for land and resource managers and climate assessments, recent modeling developments, such as the BCM, greatly enhance the available data. Data on the response of these hydrologic variables provide highly valuable information for quantifying recharge, runoff, irrigation need, landscape stress, and the spatial distribution of hydrologic processes throughout a watershed. These modeling capabilities can now resolve the spatial distribution of hydrologic processes throughout a watershed at fine scales, producing much-needed high-resolution data and more confident estimates than previous modeling allowed. There is a much-needed opportunity to use these highly detailed and spatially explicit model projections for local resource management decisions and policy development.

While the BCM's advanced modeling capabilities improve the state's understanding of hydrologic processes, soil management, and sequestration potential across the state's terrestrial landscapes, the model can also be used to inform local, regional, and watershed-specific assessments. The grid-based regional water balance model can provide valuable insight into the role of precipitation in San Diego's terrestrial ecosystems. Modeling the dynamic relationship between the pathways of precipitated water and landscape features can allow for more precise projections in both historical and future climate-hydrology assessments. Application of the BCM for San Diego provides a quantification of the benefit of carbon farming practices in both the 1981-2011 assessments and future projections. This analysis shows that San Diego's agricultural lands have the potential to improve hydrologic conditions with strategic management. Increases in WHC can allow more water to stay in the watershed, maintaining base flow in low-flow periods and supporting groundwater infiltration and recharge, while also minimizing the impacts of peak runoff during extreme precipitation events. For the unincorporated county, which contains a significant portion of the potential hydrologic benefit, soil management practices could significantly reduce water-related challenges. Given that 65% of the unincorporated area is considered a groundwater-dependent area and is subject to localized groundwater availability problems, practices that enhance hydrologic processes and contribute to overall water-use efficiency could greatly benefit this region. Most notably, San Diego could experience significant decreases in CWD in addition to increases in AET. As a result, the farming community could see improvements in soil moisture, irrigation costs, landscape stress, and net primary productivity. These potential improvements ultimately enhance resilience to droughts and extreme events. Results further illustrate that, even in scenarios with projected climate change impacts, there are many areas throughout San Diego with potential increases in AET and decreases in CWD if soil management practices are implemented. Projections highlight the ability of these practices to buffer the impacts of future drought conditions. Combining hydrologic benefit estimates with knowledge of existing regional agricultural lands can inform strategic, on-the-ground implementation efforts and direct carbon farming projects.

Existing agricultural lands constitute a large portion of the total area of estimated hydrologic benefit with increased SOM. Results can provide informed prioritization of feasible lands and management practices in addition to natural resource allocation across the region. Areas that intersect both hydrologic benefit and existing agricultural land can be used to identify where carbon farming efforts could be most attainable and readily employed. Translating potential hydrologic changes to their associated economic and productivity benefits provides a critical link between scientific research and practical on-farm application. Carbon farming practices aim not only to build SOC levels, but also to ensure that these pools remain in the soil for many years. Thus, it is important to consider the agricultural areas vulnerable to land conversion when implementing carbon farming projects, as these lands may not be able to sequester carbon for the long term if converted. Areas at the intersection of current agricultural land and future urban development can be used to identify places where demonstration sites and farming programs may be short-lived. Analyses identify the areas in which implementation of carbon farming sites and programs may not be able to yield benefits over time if they are not designated for production in years to come. With conversion of farmland in recent years and continued plans for urban development, there is a need to invest in programs that sustain the existing value of these lands while also supporting additional growth. Analyses indicate that only 13% of the total threatened agricultural lands are protected under CCED and/or CPAD conservation plans. Threatened areas showing the highest benefit values can inform future preservation strategies that target these priority lands. As the county faces demand for development, how we reconcile these pressures with the importance of agricultural lands is a critical piece of San Diego's ultimate climate resilience. In light of these trade-offs, results such as these can help tell the story of these agricultural lands and make the case for their preservation. Identifying these opportunities through scientifically based analyses helps portray the potential of carbon farming in the region and articulate the value agricultural lands hold for their sequestration potential and co-benefits.

On many fronts, California has adopted the role of a global leader in climate action, implementing an array of proactive technical instruments and political strategies ranging from the local to the federal level. As agriculture is a critical backbone of the state's booming economy, it is necessary that California put agriculture at the forefront of climate planning. Recognizing the benefits that well-managed soils provide, carbon farming has recently gained attention throughout the state as a promising form of climate adaptation and mitigation. However, for carbon-sequestering practices to be effective, feasible, and widespread, collaboration among interdisciplinary stakeholders is required. It is necessary that policy makers, environmental advocates, scientists, farmers, and economists join forces to spearhead these opportunities. Given the great diversity within the state's 58 counties, appropriate soil management practices look different for each region. Additionally, regional climate impacts and specific areas of vulnerability differ between regions, and this may translate to unique goals.
Thus, the tools and practices needed to address specific regional contexts will vary. Home to the greatest number of small farms and certified organic farms of any county in the U.S., San Diego's agricultural setting presents unique opportunities and strengths for addressing climate challenges through widespread implementation of sustainable agriculture. Regional application of scientific tools, such as the BCM, can be used as a basis for advising interdisciplinary efforts to address specific county needs, such as water resources. While economic programs and supportive partnerships are essential for promoting the adoption of carbon farming practices, the BCM is a critical component for maximizing these opportunities. With advanced science and modeling capabilities, a supportive and proactive network of entities, and the political will and economic incentives in place, opportunities for increasing carbon sequestration in California are more pertinent now than ever. The alignment of these factors makes this the opportune time for San Diego to embrace and advance powerful farming strategies.

Angular sensors can also be used in some cases to measure linear velocity

In the marketplace, people generally care more about the sensed quantity and how well the sensor performs for their specific application, while academic researchers and sensor designers are also interested in how the sensor measures the quantity. This section is concerned with the latter. The means by which a sensor makes a measurement is called the transduction mechanism. Transduction is the conversion of one source of energy to another, and all sensors utilize some form of energy transformation to make and communicate their measurements. It should be noted that this is not an exhaustive list of transduction mechanisms. This list only covers a small fraction of the many universal laws describing the conversion of one energy form to another. Rather, this list focuses on transduction principles that describe converting one energy type to electrical energy. This is because all electrical sensors must take advantage of at least one of these mechanisms, and often more. What this list does not cover is transduction from any energy type to another type other than electrical. For example, the thermal expansion principle that governs the liquid-in-glass thermometer example at the beginning of this chapter is not described, because that sensor operates on the principle of converting thermal energy to gravitational energy. This list also does not include modes of biological or nuclear signal transduction, for the sake of brevity.

A potentiometric sensor measures the open-circuit potential across a two-electrode device, such as the one shown in Figure 1.3C. As in amperometric sensors, the reference electrode (RE) provides 'electrochemical ground'.

The second electrode is the ion-selective electrode (ISE), which is sensitive to the analyte of interest. The ISE is connected to a voltage sensor alongside the RE. The voltage sensor must be very sensitive and have a high input impedance, allowing only a very small current to pass. There are four possible mechanisms by which ionophores can interact with ions: dissociated ion exchange, charged carrier exchange, neutral carrier exchange, and reactive carrier exchange. Dissociated ion-exchange ionophores operate by classical ion exchange over a phase boundary, in which hydrophilic counter-ions are completely dissociated from the ionophore's lipophilic sites, preserving electroneutrality while leaving sites to which the ions in solution can bind. Charged-carrier ionophores bond with oppositely charged ions to make a neutrally charged molecule, and the ions with which they bond are determined by thermodynamics and the Hofmeister principle. Neutral-carrier ionophores are typically macrocyclic, in which many organic units are chained together to form a large ring-like shape whose gap is close to the radius of the primary ion. Finally, reactive-carrier ionophores are mechanistically similar to neutral-carrier ISEs, the only difference being that reactive carriers are based on ion-ionophore covalent bond formation while neutral carriers are based on reversible ion-ionophore electrostatic interaction. Neutral-carrier and reactive-carrier ion exchange both depend on the mobility, partition coefficients, and equilibrium constants of the ions and carriers in the membrane phase. Some examples of the chemical structures of ionophores are shown in Figure 1.4.
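For an ideal ISE, the measured open-circuit potential follows the Nernst equation, which gives roughly 59 mV per ten-fold change in activity for a monovalent ion at 25 °C. The short sketch below shows that calculation; the standard potential and activities are arbitrary example values.

```python
import math

def ise_potential_mV(activity, z=1, E0_mV=0.0, temp_C=25.0):
    """Ideal Nernstian response of an ion-selective electrode:
    E = E0 + (R*T / (z*F)) * ln(a), returned in millivolts."""
    R, F = 8.314, 96485.0            # J/(mol*K), C/mol
    T = temp_C + 273.15
    return E0_mV + 1000.0 * (R * T / (z * F)) * math.log(activity)

# A ten-fold activity change for a monovalent ion gives ~59.2 mV at 25 C.
print(ise_potential_mV(1e-3) - ise_potential_mV(1e-4))
```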

Positional sensors are some of the most common in the world, and there are likely several within reach of you as you read this. Smartphones and wearable health devices utilize various sensors to track how many steps you take in a day, the intensity of your workouts, and what route to take home from work. Displacement, velocity, and acceleration can sometimes all be found with a single device, as each quantity is the time-derivative of the prior. In practice, however, it is common to use separate devices for these three measurements, because the cost of these sensors is relatively low and it is easy to introduce systematic errors if the timing mechanism is off. The measurements for displacement, velocity, and acceleration must be made with respect to some frame of reference. For example, consider a group of people playing a game of billiards in a moving train car. Observers on the train platform would assign different velocity vectors to the balls during play than observers on the train. Displacement and angle sensors commonly use potentiometers when the value is expected to be suitably small. A potentiometer transduces linear or angular displacement to a change in electrical resistance. For a displacement sensor, a conductive wire is wrapped around a non-conductive rod, and a sliding contact is attached to the object whose displacement is being measured. A known voltage is supplied across the wound wire, and as the object moves, the sliding contact makes contact with the wound wire, shorting that part of the circuit. The output voltage across the wire is then measured; it is proportional to the amount of the wire shorted by the sliding contact, which in turn is proportional to the object's displacement. The same principles are applied to measure angle for a potentiometer operating in angular displacement mode. There are other methods for measuring displacement, but these methods can also be used to measure velocity, as described in the following section.

Velocity measurements utilize a variety of approaches, including radar, laser, and sonic sensor systems. These sensors send a sound or light wave in a given direction and measure the time it takes to bounce off a surface, return to the sensor, and activate a sensing element that is sensitive to that modulating signal. Using this lag time, the device can calculate the distance between the sensor and the reflecting object by multiplying the lag time by the wave speed and dividing by two to account for the round trip. Then, because these devices often operate at a high measurement frequency, the measurement can be made again, and the change in distance divided by the time between measurements yields a linear velocity. In a car, for example, the speedometer is a linear velocity sensor, but it makes its measurement using an angular velocity sensor on the drive shaft and calculates the linear velocity from the assumed tire size.
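A small sketch of the time-of-flight calculation described above, using the speed of sound as for a sonic ranger; the echo times and update interval are made-up example values.

```python
def distance_m(echo_time_s, wave_speed_m_s=343.0):
    """Distance to the reflecting object from a round-trip echo time: d = v * t / 2.
    The default wave speed is roughly the speed of sound in air at 20 C."""
    return wave_speed_m_s * echo_time_s / 2.0

def radial_velocity_m_s(echo_time_1_s, echo_time_2_s, dt_s, wave_speed_m_s=343.0):
    """Approximate velocity toward or away from the sensor from two successive echoes dt apart."""
    d1 = distance_m(echo_time_1_s, wave_speed_m_s)
    d2 = distance_m(echo_time_2_s, wave_speed_m_s)
    return (d2 - d1) / dt_s

# Hypothetical sonic-ranging example: echoes at 5.83 ms and 5.54 ms, taken 0.1 s apart.
print(distance_m(5.83e-3))                          # ~1.00 m
print(radial_velocity_m_s(5.83e-3, 5.54e-3, 0.1))   # ~ -0.50 m/s (object approaching)
```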

Acceleration measurements are most commonly made with accelerometers. Accelerometers are most commonly MEMS devices that are extraordinarily cheap, have a low power requirement, and utilize the capacitance transduction mechanism. The charged electrode of an interdigitated parallel-plate capacitor structure is vibrated at a high mechanical frequency. When acceleration occurs perpendicular to the gap between the two capacitor plates, the force from the acceleration causes the moving electrode of the parallel-plate capacitor to deflect towards the other plate, changing the gap between the two and thereby changing the measured capacitance.

The operating principle of most pressure sensors is based on pressure exerted on a pressure-sensitive element with a defined surface area. In response, the element is displaced or deformed. Thus, a pressure measurement may be reduced to a measurement of a displacement, or of a force that results from a displacement. Because of this, many pressure sensors are designed using either the capacitive or the piezoresistive transduction mechanism. In each, a deformable membrane is suspended over an opening, such that the pressure on one side of the membrane is controlled while the pressure on the other side is the subject of the measurement. As the pressure on the measurement side changes, the membrane deforms proportionally to the difference in pressure. For a piezoresistive transducer, the membrane is designed to maximize stress at the edges, which modulates the resistance proportionally to the deformation. For a capacitive transducer, the membrane is made of or modified with a conductive material, while a surface on the pressure-controlled side of the membrane is also conductive, and the pair act as a parallel-plate capacitor. The membrane is then designed to maximize deflection at its center, thereby changing the electrode gap and capacitance.
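The capacitive transduction behind both the accelerometer and the pressure sensor reduces to the parallel-plate relation C = εr ε0 A / d, so a change in plate gap maps directly to a change in capacitance. The geometry below is a made-up MEMS-scale example.

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Ideal parallel-plate capacitance: C = eps_r * eps0 * A / d."""
    return eps_r * EPS0 * area_m2 / gap_m

# Hypothetical geometry: 1 mm^2 plate area, 2.0 um nominal gap.
area = 1.0e-6
c_rest = parallel_plate_capacitance(area, 2.0e-6)
c_deflected = parallel_plate_capacitance(area, 1.8e-6)   # gap closes by 0.2 um
print(f"Nominal capacitance: {c_rest * 1e12:.2f} pF")
print(f"Capacitance change under deflection: {(c_deflected - c_rest) * 1e15:.0f} fF")
```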

In some form or another, all sensor circuits require power to operate. The components of a sensor circuit that generate, attenuate, or store energy to power the other circuit components are called power electronics. These may include batteries, energy harvesters, and various power-conditioning devices. A sensor circuit can be made passive, with no energy storage within the circuit. The concept is similar to the passive sensing elements described in section 1.2: passive sensor circuits use naturally available energy to operate. This is possible if the quantity being measured can also be harnessed to power the device, such as light powering a photovoltaic sensing element. If there is no passive power generation, power electronics are vital to a sensing circuit’s function. This could be as simple as a coin-cell battery connected to the micro-controller’s power I/O pins or as complex as a circuit with multiple energy-harvesting and energy-storage modalities.

A sensor is not a sensor if it does not communicate its measured signal to another person or device. Communication electronics fulfill this function and can be wired or wireless. When communicating data to a person, wired communication electronics can be displays or speakers that convey the data through images or audio. When communicating data to another computer, wired communication electronics take the form of a ‘bus’, a catch-all term for the hardware, wires, software, and communication protocols used between devices. At the time of this writing, wireless communication must be between the sensor circuit and another electronic device, though perhaps in future years technology will develop a way for people to interface directly with wireless data transfer. In the meantime, wireless communication generally relies on an antenna that radiates an electrical signal as an RF transmission following one of many wireless communication protocols such as WiFi, Bluetooth, or RFID.

In science and engineering, ‘error’ does not mean a mistake or blunder. Rather, it is a quantitative measure of the inevitable uncertainty that accompanies all measurements. This means errors are not mistakes; they cannot be eliminated merely by being careful. All sensors have some inherent error in their measurement. The best one can hope for is to minimize errors where possible and to have a reasonable estimate of the magnitude of the error that remains. One of the best ways to assess the reliability of a measurement is to perform it several times and consider the different values obtained. Experience has shown that no measurement, no matter how carefully it is made, will return exactly the same value every time it is repeated. Error analysis is the study and evaluation of uncertainty in a measurement. Uncertainties can be classified into two groups: random errors and systematic errors. Figure 1.8 highlights these two types of error using a dartboard example. Systematic errors always push the measured results in a single direction, while random errors are equally likely to push the results in any direction. Consider trying to time an event with a stopwatch: one source of error is the reaction time of the user starting and stopping the watch. The user may delay more in starting the stopwatch, thereby underestimating the duration of the event, but they are equally likely to delay more in stopping the stopwatch, resulting in an overestimate of the duration. This is an example of random uncertainty.
Consider instead a stopwatch that consistently runs slow: in this case, the duration of every event will be underestimated. This is an example of systematic uncertainty. Systematic errors are hard to evaluate and sometimes even difficult to detect. The use of statistics, however, gives a reliable estimate of random error.
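As a brief illustration of how statistics quantify random error, the sketch below (with invented stopwatch readings) reports the mean of repeated measurements together with the standard error of the mean. A systematic error, such as a consistently slow stopwatch, would not show up in this estimate.

```python
# Minimal sketch of estimating random error from repeated measurements.
# The timing values are invented for illustration.
import statistics

timings_s = [2.31, 2.28, 2.35, 2.30, 2.27, 2.33]  # repeated stopwatch readings

mean = statistics.mean(timings_s)
stdev = statistics.stdev(timings_s)            # sample standard deviation
std_error = stdev / len(timings_s) ** 0.5      # standard error of the mean

print(f"{mean:.3f} +/- {std_error:.3f} s")     # ~2.307 +/- 0.012 s
# Note: this characterizes only the random scatter; a stopwatch that
# consistently runs slow (a systematic error) would not appear here.
```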

An additional way to indicate the relative independence of California agriculture from direct government payments is to look at the share of net farm income made up of direct government payments. Over the period 1990–2000, direct government payments to U.S. producers were 28.3 percent of net farm income. The comparable average for California was 7.4 percent of net farm income. Figure 20 shows annual ratios over the period 1960–2000.15 Direct government payments constituted 49 percent of U.S. net farm income in 2000 and 12 percent of California net farm income. Direct government payments increase the fixed cost of agricultural production without any corresponding increase in productivity.16 In the U.S. heartland, direct government payments account for nearly a quarter of the value of farmland. A recent study of soybean production in Argentina and Brazil concluded that production costs there were 20 to 25 percent lower than in the U.S. heartland even though variable input costs per acre were lower in the U.S. Annual land costs were as much as $80 per acre higher in the U.S. Thus, higher capitalized asset values affect competitiveness. California agriculture is more flexible and more responsive to changes in market conditions, with the managerial ability to meet market-driven domestic and worldwide consumer demands. Part of that flexibility and responsiveness comes from less reliance on direct government payments. Bottom Line: California agriculture is growing more rapidly than U.S. agriculture, is more flexible in selecting production alternatives, is more responsive to market-driven demand signals, and is significantly less vulnerable to federal budget cuts. Every one of these attributes is a plus.

In the 21st Century, the three most important markets for California agriculture will be California, the United States, and higher-income developing countries. All will continue to experience significant population growth. While projected growth in California to 2040 will not be as rapid as in the last 40 years, it will still be substantial: an increase of more than 24 million customers, compared to a smaller increase in the preceding 40-year period. For the U.S. market, projected growth is slightly higher in the next 40 years. Most important, U.S. growth represents an increase of an additional 105 million customers, a larger growth increment than for the preceding 40-year period. As noted earlier, global population will increase by around 2.8 billion people, with the majority residing in developing countries. A further plus is that their incomes should also be growing rapidly. Bottom Line: California agriculture is well positioned to take advantage of continued growth in state, national, and global population with parallel growth in incomes.

California agriculture has always been vulnerable to its external environment precisely because it is demand-driven. Given that it produces predominantly income-sensitive products, growth, recession, depression, and global economic events all potentially cause significant changes in prices. This fact, coupled with a rising share of California output being perennial crops and livestock, means that the potential for boom-or-bust cycles is probably rising. Thus, the operative question is whether the external environment is becoming more volatile with increased global interdependence along with the rising dependence of all nations on trade. Leaving aside war and massive natural disasters, lowered trade barriers and freely functioning financial markets should increase international market stability compared to a world of protection and controlled financial flows. On the other hand, it is less and less possible for nations to isolate themselves from international economic events.

Bottom Line: While there is no strong evidence that global markets are becoming less stable, it is possible that, as individual countries liberalize, domestic price instability could increase, presenting additional challenges to farmers, growers, and ranchers.

California agriculture grew very rapidly over the past half-century. Real value of production increased 70-fold. Agricultural production is now widely diversified across more than 350 commercial plant and animal products, exhibiting a constantly shifting composition and changes in the location of production, all abetted by growing demand for its products and rapid science-based technological change. California agriculture is strongly buffeted by growing urban pressure on the availability of key natural resources: reliable water supplies and productive land. Relentless pressure from environmental and other non-agricultural interests remains, with water quality, chemical contamination, air pollution, wildlife and aquatic habitats, and worker safety at the forefront. Agricultural prices clearly became more volatile after the global instability of the early 1970s. As agriculture became more complex internally, both technically and economically, it also became more interdependent with the rest of the economy and the world. It now purchases virtually all of its variable inputs from the non-agricultural economy and has a massive need for credit: short-term, long-term, and, increasingly, intermediate credit. It has probably become more export dependent despite the enormous growth of the California consumer market. In sum, it is more dynamic, more complex, more unstable, and more diverse, making California agriculture more vulnerable to external events. At many critical points in California history, California agriculture has been written off, but these periods of difficulty have been interspersed with more numerous periods of explosive growth. The share of perennials, or multiyear-production-cycle products, increased as California agriculture moved away from production of annual field crops and canning vegetables and shifted toward tree nuts, fresh fruits, and wine grapes. The frequency and amplitude of product price cycles seemed to increase. For example, an overabundance of average-quality wine grapes is occurring as recent plantings have come to harvest maturity.

There have been cycles in other products, such as prunes, clingstone peaches, and raisin grapes. The first years of the 21st Century are only the second time in history that low prices have occurred across the entire product spectrum; the first was during the long-lasting Great Depression. But already in 2003 and at the beginning of 2004 there are signs of improvement in some prices, promising an improved economy.

The idea of creating a new generation of agricultural system data, models, and knowledge products is motivated by the convergence of several powerful forces. First, there is an emerging consensus that a sustainable and more productive agriculture is needed, one that can meet the local, regional, and global food security challenges of the 21st century. This consensus implies there would be value in new and improved tools that can be used to assess the sustainability of current and prospective systems, design more sustainable systems, and manage systems sustainably. These distinct but inter-related challenges in turn create a demand for advances in analytical capabilities and data. Second, there is a large and growing foundation of knowledge about the processes driving agricultural systems on which to build a new generation of models. Third, rapid advances in data acquisition and management, modeling, computational power, and information technology provide the opportunity to harness this knowledge in new and powerful ways to achieve more productive and sustainable agricultural systems. Our vision for the new generation of agricultural systems models is to accelerate progress towards the goal of meeting global food security challenges sustainably. But to be a useful part of this process of agricultural innovation, our assessment is that the community of agricultural system modelers cannot continue with business as usual. In this paper and the companion paper on information technology and data systems by Janssen et al., we employ the Use Cases presented in Antle et al., and our collective experiences with agricultural systems, data, and modeling, to describe the features that we think the new generation of models, data, and knowledge products needs to fulfill this vision. A key innovation of the new generation of models that we foresee is their linkage to a suite of knowledge products – which could take the form of new, user-friendly analytical tools and mobile technology “apps” – that would enable the use of the models and their outputs by a much more diverse set of stakeholders than is now possible. Because this new generation of agricultural models would represent a major departure from the current generation of models, we call these new models and knowledge products “second generation” or NextGen. We organize this paper as follows. First, we discuss new approaches that could be used to advance model development beyond the ways that first-generation models were developed, in particular the idea of creating a more collaborative “pre-competitive space” for model development and improvement, as well as a “competitive space” for knowledge product development. Then we describe some of the potential advances that we envisage for the components of NextGen models and their integration. We also discuss possible advances in model evaluation and strategies for model improvement, an important part of the approach.
Finally, we discuss how these ideas can be moved from concept to implementation.

A first step towards realizing the potential for agricultural systems models is to recognize that most work has been carried out by scientists in research or academic institutions, and thus motivated by research and academic considerations more than user needs.

A major challenge for the development of a new generation of models designed to address user needs, therefore, is to turn the model development process “on its head” by starting with user needs and working back to the models and data needed to quantify relevant model outputs. The NextGen Use Cases presented in Antle et al. show that most users need whole-farm models, and particularly for smallholder farms in the developing world, models are needed that take into account interactions among multiple crops and often livestock. Yet many agricultural systems models represent only single crops and have limited capability to simulate inter-cropping or crop-livestock interactions. Why? One explanation is that many models were developed in the more industrialized parts of the world where major commodity crops are produced. Another is that models of single crops are easier to create, require fewer computational resources, and are driven by a smaller set of data than models of crop rotations, inter-crops, or crop-livestock systems. Additionally, researchers respond to the incentives of scientific institutions that reward advances in science, and to funding sources that are more likely to support disciplinary science. Component processes within single crops, or single economic outcomes, are more easily studied in a laboratory or institutional setting and may result in more publishable findings. Producing useful decision tools for farmers or policy decision-makers is at best a secondary consideration in many academic settings.

The need for more integrated farming-system models has been recognized by many researchers for several decades, for instance to carry out analysis of the trade-offs encountered in attempts to improve the sustainability of agricultural systems. For example, Antle and Capalbo and Stoorvogel et al. proposed methods for linking econometrically estimated economic simulation models with biophysical crop simulation and environmental process models. Giller et al. describe a complex bio-physical farming system modeling approach, and van Wijk et al. review the large number of studies that have coupled bio-physical and economic models of various types for farm-level or landscape-scale analysis. More recent work by AgMIP has developed software tools to enable landscape-scale implementation of crop and livestock simulation models so that they can be linked to farm survey data and economic models. While these examples show that progress has been made toward more comprehensive, integrative approaches to agricultural system modeling, these modeling approaches are more complex and have high data demands, raising further challenges for both model developers and potential users. As we discuss below, methods such as modularization may make it possible to increase model complexity while keeping models relatively easy to understand and use (see the brief code sketch below). Other methods, such as matching the degree of model complexity to temporal and spatial scales, can also be used. Section 3.8 further discusses issues of model complexity and scale.

While it is clear that model development needs to be better linked to user needs, it is also important to recognize that science informs stakeholders about what may be important and possible. Who imagined even a few years ago that agricultural decision support tools would use data collected by unmanned aerial vehicles linked to agricultural systems simulation models?
So while model and data development need to be driven by user-defined needs, they must also be forward-looking, using the best science and the imaginations of creative scientists. As Jones et al. describe in their paper on the historical development of agricultural systems models, existing models evolved from academic agronomic research.
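Returning to the modularization idea raised above, the following purely hypothetical Python sketch shows how farm-system components might expose a small common interface so that crop, livestock, or economic modules can be composed into a whole-farm model. None of the class or method names correspond to an existing modeling framework, and the internal calculations are deliberately toy.

```python
# Hypothetical illustration of modularization: each component implements a
# small common interface so a whole-farm model can be assembled from parts.
from typing import Dict, Protocol


class FarmComponent(Protocol):
    def step(self, state: Dict[str, float]) -> Dict[str, float]:
        """Advance one season and return the updated shared state."""
        ...


class SimpleCropModule:
    def step(self, state: Dict[str, float]) -> Dict[str, float]:
        # Toy yield response to rainfall; a real crop model would be far richer.
        state["grain_t_ha"] = 0.004 * state.get("rain_mm", 0.0)
        return state


class SimpleEconModule:
    def step(self, state: Dict[str, float]) -> Dict[str, float]:
        # Toy gross revenue from yield at an assumed price of 180 USD per tonne.
        state["revenue_usd_ha"] = state.get("grain_t_ha", 0.0) * 180.0
        return state


def run_farm(components, state: Dict[str, float]) -> Dict[str, float]:
    # Run each module in turn, passing the shared state along the chain.
    for component in components:
        state = component.step(state)
    return state


print(run_farm([SimpleCropModule(), SimpleEconModule()], {"rain_mm": 450.0}))
# e.g. {'rain_mm': 450.0, 'grain_t_ha': ~1.8, 'revenue_usd_ha': ~324.0}
```

The point of such an interface is that a more detailed crop, livestock, or economics module could replace the toy ones without changing the composition logic, which is one way a more complex model can remain relatively easy to understand and use.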