Our results have numerous practical applications for commercial cotton growers

Cotton species was modeled as a fixed effect, since there are only two possible categories – not enough to meaningfully estimate a random effects distribution. We also included 15 real-valued fixed effect predictor variables that indicate the number of fields, out of the 8 surrounding fields, planted with each of the 15 crops we analyzed. The goal was to control for effects of the surrounding landscape, and thereby avoid spurious correlations between rotational history and yield. Our Bayesian modeling approach required the specification of priors for all parameters whose posteriors were estimated using MCMC. Non-informative priors were used for all fixed effects. The random effects for both field and year were assumed to follow a normal distribution with mean 0 and variance hyperparameters estimated from the data. Since the support of variance parameters is constrained to positive real numbers, non-informative inverse gamma distributions with shape and scale parameters set to 0.001 were used as the prior for the variance parameter of the top-level stochastic node, and as the priors for the variance hyperparameters of the field and year random effects distributions. Model 2. To help us understand whether any effects of the crop grown in the field the previous year on cotton yield could be due to effects on L. hesperus, we fit the same model as Model 1, but with average June L. hesperus abundance as the response variable. Model 3. Next, to formally assess whether there was an association between the effects of crop rotation on yield and the effects of crop rotation on L. hesperus abundance, we performed a linear regression of the estimated effects on yield against the estimated effects on L. hesperus abundance.
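To make this specification concrete, a minimal PyMC sketch of Model 1 is given below. The column names, the coding of the previous-year crop as a 14-level categorical predictor, and the scale of the vague normal priors are assumptions made for illustration; they are not taken from the study's actual implementation. Models 2 and 3 reuse the same structure, first swapping the response variable and then regressing the two sets of estimated effects against each other.

import pandas as pd
import pymc as pm

df = pd.read_csv("cotton_records.csv")                  # hypothetical input file
field_idx, _ = pd.factorize(df["field_id"])             # hypothetical column names
year_idx, _ = pd.factorize(df["year"])
X_land = df.filter(like="n_adjacent_").to_numpy()       # 15 surrounding-landscape counts
prev_crop = df["prev_crop_code"].to_numpy()             # previous-year crop, coded 0..13

with pm.Model() as model1:
    # Vague ("non-informative") normal priors for all fixed effects
    intercept = pm.Normal("intercept", mu=0, sigma=1000)
    b_species = pm.Normal("b_species", mu=0, sigma=1000)                 # Pima vs. upland
    b_prev = pm.Normal("b_prev_crop", mu=0, sigma=1000, shape=14)
    b_land = pm.Normal("b_landscape", mu=0, sigma=1000, shape=X_land.shape[1])

    # Inverse-gamma(0.001, 0.001) priors on all variance components
    s2_field = pm.InverseGamma("s2_field", alpha=0.001, beta=0.001)
    s2_year = pm.InverseGamma("s2_year", alpha=0.001, beta=0.001)
    s2_resid = pm.InverseGamma("s2_resid", alpha=0.001, beta=0.001)

    # Normal(0, variance) random effects for field and year
    u_field = pm.Normal("u_field", mu=0, sigma=pm.math.sqrt(s2_field), shape=field_idx.max() + 1)
    u_year = pm.Normal("u_year", mu=0, sigma=pm.math.sqrt(s2_year), shape=year_idx.max() + 1)

    mu = (intercept
          + b_species * df["is_pima"].to_numpy()
          + b_prev[prev_crop]
          + pm.math.dot(X_land, b_land)
          + u_field[field_idx]
          + u_year[year_idx])

    pm.Normal("yield_obs", mu=mu, sigma=pm.math.sqrt(s2_resid), observed=df["yield_lint"].to_numpy())
    trace = pm.sample(2000, tune=2000)                   # MCMC sampling of the posteriors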

Noninformative normal priors were used for the mean and intercept, and a noninformative inverse gamma distribution with shape and scale parameters set to 0.001 was used as the prior for the variance. Model 4. A great deal of experimental evidence has demonstrated that crop rotation leads to increased yield compared to successive plantings of a single crop; therefore, we explored whether or not a yield loss was incurred by cotton crops grown in fields where cotton was grown in previous years. For the 782 fields that had complete crop rotational records for the previous 4 years, we calculated the number of consecutive cotton plantings in the 4 years preceding the focal cotton crop. We then fit a model, with yield as the response variable, using the number of consecutive prior cotton plantings as a predictor. Field, year, and cotton type were included as they were in Models 1 and 2. Since the number of prior consecutive cotton plantings could be correlated with the number of cotton fields in the surrounding landscape during the focal year, we avoided a possible spurious correlation between consecutive cotton plantings and yield by also including a fixed effect for the number of cotton fields in the 8 fields adjacent to the focal field. We chose not to explore rotational histories of specific crops for longer than one previous year, since the number of possible rotational histories becomes very large and the number of records for each possible history becomes too small to allow for robust statistical analysis. Model 5. To see if the number of consecutive years of cotton cultivation preceding the focal year was associated with June L. hesperus densities, we fit the same model as Model 4, but with June L. hesperus density as the response variable.
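For clarity, the predictor used in Models 4 and 5 can be computed as a simple backward scan over each field's rotation record. The sketch below uses hypothetical crop labels and is meant only to make the definition precise, not to reproduce the study's code.

def consecutive_prior_cotton(history, cotton_labels=("upland cotton", "pima cotton")):
    """Count consecutive cotton plantings immediately preceding the focal year.

    `history` lists the crops grown in the focal field in the 4 preceding years,
    ordered from the most recent (year t-1) back to year t-4 (hypothetical labels).
    """
    count = 0
    for crop in history:
        if crop in cotton_labels:
            count += 1          # the streak continues
        else:
            break               # the streak ends at the first non-cotton year
    return count

# Example: cotton in t-1 and t-2, tomato in t-3 -> 2 consecutive prior plantings
print(consecutive_prior_cotton(["pima cotton", "upland cotton", "tomato", "alfalfa"]))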

Capitalizing on a large existing set of crop records from commercial cotton fields in California, we employed an ecoinformatics approach to explore the effects of crop rotational histories on cotton yield. Our hierarchical Bayesian analyses revealed evidence that several crops, when grown in the same field the year before the focal cotton planting, were associated with either decreased or increased cotton yield, and either increased or decreased early-season densities of the pest L. hesperus. Furthermore, crops associated with decreased yield were generally also associated with increased L. hesperus densities, while those associated with increased yield were also associated with decreased L. hesperus densities. These results suggest a possible mechanism for the observed yield effects of these rotational histories. Since L. hesperus preferentially attacks certain crops, a field cultivated with a crop that is heavily attacked by L. hesperus may, if L. hesperus disperse from the focal field, increase the abundance of L. hesperus in nearby fields. These populations may subsequently attack the crop planted in the focal field the following year, explaining the increase in early-season L. hesperus densities that we detected following certain crops. In turn, these increased L. hesperus populations may exert strong herbivorous pressure on focal cotton crops, possibly explaining the corresponding decrease in yield. We believe that the effect of rotational history on early-season L. hesperus likely operates at a landscape scale that is larger than the within-field scale. If cotton was grown in a field the previous year, then farmers in the San Joaquin Valley are required to maintain a 90-day plant-free period prior to 10 March of the following year. This prevents L. hesperus, which overwinter as adults on live host plants, from overwintering in a focal field where cotton was grown the year before.

If a crop other than cotton was grown the previous year, then it could be possible for L. hesperus to overwinter in the focal field on residual plant or weed populations; however, since fields are completely plowed prior to planting cotton in the spring, L. hesperus adults would still need to temporarily leave the focal field. Therefore, we believe that the preferred host crops for L. hesperus increase L. hesperus populations at a landscape scale. Then, when cotton, another target of L. hesperus, is planted in the same field the following year, the cotton field is attacked by this regional population. If regional populations are already large due to lingering effects from crops grown the previous year, L. hesperus populations may move into cotton early in the growing season; this could be particularly damaging given research suggesting that cotton yield is particularly sensitive to L. hesperus densities early in the growing season. Using our data, we were not able to determine at exactly what scale the effects of rotation on L. hesperus likely operate. We do not believe a within-field scale is plausible, but determining a more precise spatial scale for these effects could be an interesting topic for future research. Our findings match expectations of crop yield effects based on previous research on L. hesperus host crop preferences, lending support to our hypothesis that yield effects of crop rotational histories are, at least partially, mediated by effects on L. hesperus. Alfalfa and sugarbeets, both crops for which we found negative effects on yield and positive effects on L. hesperus when grown in a field the previous growing season, are both considered preferred hosts for L. hesperus, and have been shown to also increase L. hesperus populations in nearby cotton fields during an individual growing season. Presumably, this effect is due to these crops supporting large L. hesperus populations. Large L. hesperus populations are known to build up in alfalfa, and their dispersal following alfalfa harvesting can threaten nearby cotton crops. L. hesperus is also known to emigrate to nearby cotton fields when safflower begins to dry in mid-summer. While the potential for nearby alfalfa and safflower fields to increase L. hesperus populations in cotton fields in a given year has been recognized, our results are the first indication that these landscape effects may extend temporally, affecting L. hesperus populations, and yield, in the next growing season. Tomatoes, associated with increased yield and decreased pest abundance in our data, have likewise been shown to decrease L. hesperus abundances in nearby cotton fields within a given year. While previous experimental work has examined the effects of crop rotations on cotton yield, our work expands on these studies in several ways. First, we explore a much wider diversity of possible crop rotational histories, providing quantitative estimates of the cotton yield effects of cultivating 14 different crops the previous year. Second, since we analyze records from commercial cotton fields, our data have the potential to capture yield effects that could only be detected at this realistic spatial scale. Third, since we have collected data on pest abundances, not only yield, we have also been able to use our data to generate and build evidence for a hypothesized mechanistic explanation of the yield effects we identify.
We also found that farmers incurred a decline in cotton yield of about 2.4% for every additional year cotton was grown consecutively in a field preceding the focal season. This is consistent with previous research suggesting that continuous cultivation of cotton in the same location can reduce yield compared to interspersing cotton with other crops.

We also found some evidence that the number of years cotton was grown consecutively in a field was associated with higher June L. hesperus densities: the posterior probability of there being a positive association was about 95%. Identifying the actual mechanism underlying this yield effect is beyond the scope of this study, but would be an interesting avenue for future research. It is possible that the yield decline is not caused by changes in L. hesperus densities, and instead results from the buildup of soil pathogens, especially in light of previous research showing that continuous cotton cultivation increases the densities of fungal pathogens in the soil. When interpreting our results, it is important to remain cognizant of the challenges of drawing causal inferences from observational data. The key assumption required to make causal inferences from regression coefficients is that all variables that affect both the treatment assignment and the response variable are included in the model; this ensures that, conditional on the predictor variables included in the model, the treatment assignment is independent of the response variable. In experimental studies, the treatment assignment is typically controlled by the experimenter, so one can be confident that the only difference between treatment and control groups is in fact the treatment. However, in observational studies, it is impossible to prove definitively that there was no other factor that affected both the treatment assignment and the response variable. As such, we want to be very clear that our hypothesis that the effects of rotation on yield are mediated by effects on L. hesperus densities is exactly that – a hypothesis. While our data do support a negative association between effects on L. hesperus and effects on yield, we cannot prove with observational data that the varying effects on yield are caused by the varying effects on L. hesperus. This could be a fruitful topic for future experimental work. Although causality is impossible to prove using observational data, ecoinformatics paves the way for implementing data-driven agricultural strategies and allows us to mine large datasets to explore important questions that are difficult to address experimentally. While by no means a replacement for experimentation, ecoinformatics can be a cost-effective and realistic complementary approach. In particular, our result identifying the effects of crop rotation on L. hesperus density would have been extremely difficult to reach experimentally. Since L. hesperus readily disperse across spatial scales of more than 1000 meters, an experimental study would have required massive plots comparable to the size of commercial fields in order to adequately capture their spatial dynamics. Growers with knowledge of the crop rotations associated with depressed cotton yield could make more informed decisions, selecting the sequence of crop cultivations that maximizes yield. When feasible, cotton plantings could be avoided following crops that decrease cotton yield, and instead limited to fields where crops that increase cotton yield were previously planted. In some cases, market conditions may lead a grower to plant cotton following a yield-depressing crop, even given the knowledge of likely yield loss.
In those situations, our results may still be helpful: an early warning sign of a potential pest problem in a particular field could allow the grower and PCA to focus pest detection efforts on that field and provide time to eliminate the problem before severe yield loss is incurred.

Several other major surface water projects serve California’s cities and agricultural regions

Although there are extensive water resources in the state, most of the urban population resides in the water-scarce coastal and southern regions, and most agricultural activities are in semi-arid lands. To accommodate the growth in population, California and the federal government built a complex and expansive network of dams, aqueducts, and pumping facilities to harness California’s water supplies and deliver them to its cities and agricultural areas. Today, California’s water resources support over 38.3 million people, a $2.2 trillion economy, and the largest agricultural sector in the country. California’s rivers, streams, lakes, and estuaries are also home to a vast array of aquatic species and habitats, and support substantial aquatic recreation. The state’s water system has a total storage capacity of about 43 million acre-feet and includes hundreds of miles of aqueducts to deliver supplies to places of need and hundreds of thousands of wells to tap the state’s vast groundwater resources. The system comprises federal, state, and local projects and is operated by federal, state, regional, and local organizations, as shown in Figure 2-1. The Central Valley Project (CVP) was authorized in 1935 by the federal government to increase the Central Valley’s resilience to drought and protect it from flooding. Shasta Dam was the first dam built as part of the CVP, with construction initiated in 1938. In 1979, the last dam, New Melones, was completed. The CVP system includes 18 other dams and reservoirs, 11 power plants, and 500 miles of conveyance and related facilities.

The CVP has facilities on the Trinity, Sacramento, American, Stanislaus, and San Joaquin Rivers, and it serves over 250 long-term water contractors in the Central Valley, Santa Clara Valley, and the San Francisco Bay Area. The total annual contract volume exceeds 9 MAF. Historically, 90% of CVP deliveries serve agricultural users. In 2000, the CVP and other smaller federal projects delivered about 7.5 MAF to users. About 35% went to the Sacramento River region, 31% went to the Tulare Lake region, and 24% went to the San Joaquin region. Smaller shares went to the North Coast, San Francisco, and Central Coast regions. Agricultural users served by the CVP will likely experience additional price increases. CVP contractors are currently behind on repaying the project costs. Under the original contracts, which were negotiated and signed in the late 1940s, the project was to be paid off 50 years after its construction. By 2002, however, irrigators had repaid only 11 percent of the project cost. Based on an analysis of 120 CVP irrigation contracts and a review of full cost rates, which include cost of service and interest on unpaid capital costs since 1982, water contractors will need to pay on average an additional 196 percent to be brought up to full cost rates. Combining the estimated price increases for CVP contractors with rising cost of service rates for the remainder of agricultural water users, Gleick et al. (2005) projected that overall agricultural water prices will increase by 68 percent statewide between 2000 and 2030. The State Water Project (SWP) was the first stage of an ambitious strategy outlined in the 1957 State Water Plan to improve the reliability and capacity of water delivery throughout California. The SWP captures large amounts of water behind 28 different dams in the western Sierra Nevada. The Oroville Dam, the largest in the system with a capacity of 3.5 MAF, began construction in 1961 and was completed in 1967.

The dams control the flow of water through the Sacramento River system in order to maximize the amount of fresh water that can be pumped out of the Bay-Delta into the California Aqueduct. The California Aqueduct then transports the supply south through the San Joaquin Valley to Southern California and the Central Coast. The transport of water is facilitated by 26 pumping and generating plants and about 660 miles of aqueducts. The last major component of the system, the Coastal Branch, which delivers supply to Santa Barbara and San Luis Obispo counties, was completed in 1997. Prior to the commencement of construction of the SWP, contracts were signed between the Department of Water Resources (DWR), which manages the SWP, and urban and agricultural water districts. Since the signing of the contracts in the 1960s, the capabilities of the system have never been fully developed, and the SWP regularly does not meet all of its obligations. In 1998, existing long-term SWP water supply contracts totaled about 4.1 MAF, and these contracts are scheduled to increase to about 4.2 MAF by 2020. In the year 2000, however, the SWP delivered only 2.9 MAF of Table A water. DWR’s State Water Project Delivery Reliability Report confirms that, without additional facilities, the SWP will consistently be unable to meet its obligations to Table A contractors. DWR administers long-term water supply contracts with 29 local water agencies for water service from the State Water Project. These water supply contracts are central to SWP construction and operation. In return for State financing, construction, operation, and maintenance of Project facilities, the agencies contractually agree to repay all associated SWP capital and operating costs. To provide a convenient reference, the SWP Analysis Office has prepared consolidated contracts for several contracting agencies.

These contracts contain the amendments integrated into the language of the original contract. Listed below, under the names of the contracting agencies, are the consolidated contracts and original contracts. DWR plans to add more consolidated long-term water supply contracts as they are completed. The 29 State Water Project contractors are shown in Table 2-1. The Bay-Delta ecosystem is a major hub of the state’s water re-distribution system. In order for the large freshwater flows of the Sacramento River and its tributaries to be made available to users in the southern half of the state, they must flow south through the Sacramento-San Joaquin Delta and then be pumped out of the southern Delta into the aqueducts of the State Water Project. An extensive system of levees has also developed over the years to protect agricultural and urban land holdings within the Delta from water intrusion and flooding. Together, the pumping of fresh water from the southern Delta and the artificial support of the Delta’s numerous islands have dramatically altered the natural hydrology and ecosystem function of the Bay-Delta system. In response to dramatic declines in Delta ecosystem quality during the 1987-1992 drought, a Federal and State partnership was established in 1994. The purpose of the multibillion-dollar restoration and management effort, now managed by the California Bay-Delta Authority, is to restore ecosystems within the Delta, improve the quality and reliability of water supplies from the Delta, and stabilize the Delta’s levee systems. The challenge of this mandate is immense, particularly when considered along with potential climate change. The incongruent nature of the program’s objectives has arguably hampered its effectiveness to date, yet the effort continues and will remain a significant consideration in future California water management and planning. Prior to extensive human development, the San Francisco Bay-Delta was largely marsh, river channels, and islands, bounded in the west by the Golden Gate Strait and Pacific Ocean and in the east by the confluence of the Sacramento and San Joaquin Rivers, which drain the Sierra Nevada Mountains to the Pacific Ocean. The Bay-Delta in its natural state was an enormous estuary and supported extensive habitat for fish, birds, and other terrestrial animals. Water flowing through the Delta is the main source of supply for two major California water delivery projects, the SWP and the federal CVP. Through these projects, a majority of Californians rely on water flowing through the Delta for all or part of their drinking water. In addition, approximately one third of the state’s cropland uses water flowing through the Delta. Figure 2-2 shows the Bay-Delta water distribution during a typical hydrological year. The Colorado River supplies Southern California with more than 4 MAF a year of water via the Colorado River Aqueduct and the Coachella and All-American Canals.

The Colorado River Compact, signed by six states bordering the Colorado River in 1922, established California’s base water entitlement at 4.4 MAF a year. In recent years, however, California has relied upon the unused allocation of upstream states, importing more than 0.8 MAF a year of additional supply in some years. Due to growing water use by other states, California was forced to reach an agreement to gradually eliminate its use of surplus water. The Colorado River Quantification Settlement Agreement resolves much of the uncertainty over Colorado River allocations, but an ongoing drought in the Colorado River basin still threatens future Colorado River water availability to California. The iconic Colorado River supplies water to millions of people in fast-growing cities in the Colorado River’s watershed, such as Las Vegas, Mexicali, Phoenix, and St. George, Utah. Tens of millions of people outside the watershed, from Denver to Albuquerque and from Salt Lake City to Los Angeles, San Diego, and Tijuana, also receive water exported from the basin to meet at least some of their residential and commercial water needs. More than half of the people receiving water from the basin live in Southern California. Figure 2-3 shows historical water supply and usage for the Colorado River Basin from 1914 to 2007. Local cities in California have also taken the initiative to develop water supplies. The cities of Los Angeles, San Francisco, and several in the East Bay region have all financed and constructed infrastructure to capture, store, and transport water from sources far from the municipalities. Specifically, the Los Angeles Aqueduct transports water over 200 miles from the Owens Valley to the Los Angeles area; the O’Shaughnessy Dam captures and stores water in the Hetch Hetchy Valley for delivery to San Francisco and surrounding cities; and the Pardee Reservoir and Mokelumne Aqueducts supply the East Bay Municipal Utility District service area with supplies from the western slopes of the Sierra Nevada. Groundwater is a major source of water for California’s agricultural industry and municipalities. During an average year, a third of the state’s water supply comes from groundwater. Some regions are entirely dependent on groundwater, and 40-50% of Californians use some amount of groundwater. Much of the state’s groundwater resources have been developed locally by individual landowners or municipalities. Such decentralized management has led to unsustainable groundwater use in California. Estimates by DWR in 1980 suggest that use of groundwater exceeds recharge by between 1 and 2 MAF per year. Such overuse has led and will continue to lead to many serious problems, including land subsidence, sea water intrusion, and degradation of groundwater quality. Groundwater is currently managed through local water agencies, local groundwater management ordinances, and court adjudication. Importantly, state and regional planning agencies have little influence or control over the management of groundwater, making it difficult to implement integrated surface and groundwater management plans. The total groundwater storage in California is estimated to be about 1.3 billion acre-feet, and about 140 MAF of precipitation percolates into the state’s aquifers annually. These estimates, however, do not characterize the potential water supply for the region – many other factors limit the development potential of an aquifer.
Most of the state’s groundwater is located in the aquifers beneath the Central Valley, although Southern California also has a considerable amount of groundwater. Groundwater is a major contributor to the state’s water supply, and even more so in dry years. As shown in Figure 2-4, groundwater supplies on average 30 percent of California’s overall demand and up to 40 percent in dry years. In some areas where surface water supplies are not accessible or economically feasible, groundwater provides 100 percent of a community’s public water. During years when surface water deliveries are not available, groundwater may also provide up to 100 percent of irrigation water for certain areas. About 43 percent of Californians obtain at least some of their drinking water from groundwater sources. Local municipalities and regional water agencies are increasingly turning to alternative sources of water supply. Treated urban wastewater is becoming an important source of water for agriculture, industry, landscaping, and some non-potable uses in commercial and institutional buildings. In many regions it is discharged into rivers and streams and thus used by downstream users. In some regions it is also blended with conventional sources and is injected or allowed to percolate into groundwater basins.

Net importers of a good tended to protect domestic producers with increased tariffs

Despite the emergence of tariffs throughout the world before World War I, the degree of agricultural protection in European countries was in the modest range of 20 percent to 30 percent. These tariff duties did not prevent the expansion of agricultural trade. At the end of World War I, a substantial international division of labor continued in the production of agricultural goods. The second wave of expansion of government intervention in agriculture took place during the economic crises of the 1920s and 1930s. The pattern of a particular nation’s response to the crises followed lines associated with its net position in international trade. For example, in France, Germany, and Italy, rates of protection on foodstuffs more than doubled between 1927 and 1931. Even Britain converted to protectionism in 1931, although the free entry of produce from its empire meant that tariff protection was of limited importance to domestic agriculture. In more recent U.S. history, the recession of the early 1980s, the associated high real rates of interest, high exchange values of the dollar, and slow world economic growth put enormous pressure on agriculture. The macroeconomic environment combined with intervention designed in the 1981 Farm Bill to create embarrassing surpluses and unacceptable levels of resource misallocation. The 1981 U.S. Farm Bill set support and target prices at levels designed for a strong export and price performance in the grain sectors. Due to macroeconomic conditions, however, this scenario failed to materialize. More importantly, the 1981 Bill did not allow for flexibility and, as a result, programs sustained high production, which led to accumulations of government-owned stocks and agricultural expenditures of crisis proportions.

This mess can be referred to as a “policy disequilibrium.” The response to this specific policy disequilibrium was the Payment-In-Kind (PIK) Program of 1983. PIK led to even greater expenditures and failed to alleviate the serious problem of surplus stocks. The path followed by agricultural commodity markets over much of the last two decades closely resembles other markets for freely traded commodities such as gold, silver, platinum, copper, and lumber. Stocks also accumulated for these commodities during the 1970s and early 1980s, suggesting that sectoral conditions and government policies are only a part of the explanation for the behavior of agricultural commodity markets. The search for a complete explanation leads to a multi-market perspective and an investigation of external linkages with other markets. Since 1972, conventional wisdom has placed increasingly less emphasis on the inherent instability in commodity markets and more emphasis on external linkages with other markets. Deregulation of credit and banking has resulted in greater exposure of agriculture to conditions in the domestic money markets. Also, the shift from fixed exchange rates to flexible rates in much of the Western world exposed commodity markets to international money and real trade flows. The emergence of well-integrated, international capital markets meant that agriculture, through domestic money and exchange rate markets, became more dependent on capital flows among countries. The linkages between commodity and money markets are indeed pervasive. In the United States, farming is extremely capital intensive, and debt-to-asset ratios have risen dramatically over the last 10 years. As a result, movements in real interest rates have a significant effect on the cost structure facing agricultural production. Storage and breeding stocks especially are sensitive to interest rates. On the other hand, the influence of interest rates on the value of the dollar affects the demand side for farm goods. The close connection between agriculture’s health and interest rates suggests that this sector is vulnerable to monetary and fiscal policy changes.

It has been argued, with much justification particularly since 1980, that instability in monetary and fiscal policy has contributed greatly to the instability of agricultural markets. Unstable macroeconomic policies are thought to impose sizable shocks on commodity markets. This would be especially true if agricultural commodity markets have flexible prices while other markets have stickier prices. And, indeed, without governmental price supports, agricultural prices are generally more flexible than non-agricultural prices. This is true in part because contracts for agricultural commodities tend to be written for shorter durations and because biological lags tend to cause agricultural supply to be unresponsive to price changes in the short run. This fixed/flex price model of markets is a necessary, but not sufficient, condition for money non-neutrality to imply overshooting agricultural prices. Overshooting in this context is defined as a price path which exceeds the new eventual price equilibrium. Flex-price commodity markets and fixed-price non-agricultural output markets, combined with “small” output responses, mean that overshooting in agricultural sector markets will occur even if expectations are formed rationally. Such overshooting results from the spillover effects of monetary and fiscal policy on commodity markets. Given a world of fixed- and flex-price markets, the driving force behind overshooting is the real rate of interest and the ability to arbitrage across markets. When in the short run real interest rates rise above long-run equilibrium rates, immediate pressure arises to drive flexible commodity prices downward. In much of the 1970s, real interest rates were below their long-run equilibrium levels and, for some periods in the 1980s, real interest rates were above them. Overshooting combined with “myopic” expectations means that “macro externalities” will be imposed upon the agricultural sector.
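One way to see this mechanism is through a stylized arbitrage condition; the notation below is illustrative and is not drawn from the cited studies. For a storable, flex-price commodity, holding it is only worthwhile if its expected real appreciation covers the real interest rate (storage costs and convenience yield are omitted here):

\[
  \mathbb{E}_t\!\left[\frac{\dot{p}_c}{p_c}\right] = r_t,
  \qquad\text{so that, roughly,}\qquad
  p_c(t) - \bar{p}_c \;\approx\; -\,\theta\,(r_t - \bar{r}), \quad \theta > 0 .
\]

If sticky prices elsewhere push the real rate \(r_t\) above its long-run level \(\bar{r}\) while the long-run commodity price \(\bar{p}_c\) is unchanged, the spot price must fall below \(\bar{p}_c\) so that its expected climb back to equilibrium matches the higher interest rate; the flexible commodity price therefore overshoots its eventual equilibrium.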

In the case of interest rates facing U.S. agriculture, the interest rate disequilibrium was even more pronounced, due primarily to the relative importance of the Farm Credit System. The System’s organizational structure amplifies the disequilibrium and generates more overshooting than would otherwise result. Within the Farm Credit System, borrowers are, in fact, owners, and no dividends are paid to stockholders. As a result, during favorable economic periods, the only way owners might extract benefits generated by the system is by increasing borrowing levels at interest rates below those for the rest of the economy. Indeed, through much of the 1970s, interest rates to farmers were dramatically below general market rates while, during the 1980s, the opposite was true. Empirical evidence supports the view that agricultural output responses are not sufficiently flexible to counter the tendency for prices to overshoot, and that expectations are, at best, only “myopically” rational. Bordo has shown empirically that prices of raw goods respond more quickly to changes in money supply than do prices of manufactured goods. Andrews and Rausser have shown that, during the large cyclical downturns of the early 1930s and the early 1980s, prices fell more and quantities less in the agricultural sector than in any of nine other sectors of the U.S. economy. In the case of interest rates, numerous studies have shown that real rates vary significantly across countries, refuting the old view that they remain constant. These results also suggest that the purchasing power parity assumption does not hold, even approximately. In other words, exchange rate changes do not offset changes in relative price levels across nations. Frankel and Hardouvelis’ study on monetary surprises rejects the flex/flex specification in favor of the fixed/flex specification. Their empirical results show that, when announced money supply turns out to be greater than the public expected, nominal interest rates tend to rise and the prices of basic commodities tend to fall. If the flex/flex specification were correct, then interest rates and commodity prices would either both rise or both fall. The only hypothesis that explains the reactions in both interest rate and commodity markets is that increases in nominal interest rates are also increases in real rates. The public anticipates that the Federal Reserve will reverse any recent fluctuation in the money stock, thus increasing interest rates and depressing the real prices of commodities. The aggregate effects of money supply on raw agricultural product prices, retail prices of food products, and the nonfood Consumer Price Index also support empirically the idea of overshooting. Consistent with money non-neutrality and raw agricultural prices being generated by flex-price markets, Stamoulis et al. found the money supply to be a more important determinant in explaining raw product prices than in explaining the nonfood CPI or the index of retail food prices. The linkages discussed above run from the macroeconomic sector to the agricultural food sector. These causal influences may be defined as forward linkages. The most important forward linkages include those observed in the cost structure of production, in general economic conditions and food demand, in inventory behavior and the demand for storage, and in animal breeding stocks.

The macroeconomic variables included in these linkages are interest rates, personal income, and nonfood and general inflation rates. There are also effects that run from agriculture to the general economy. These linkages may be defined as backward linkages. There are three main influences on the macroeconomy reflected backward from agriculture: on the general inflation rate, on governmental deficits or surpluses, and on the balance of trade. These three components can, in turn, have dramatic effects on employment, real interest rates, investment, economic growth, and so on. Food prices are a major component of any general price index, and this linkage is important everywhere that the general rate of inflation influences macroeconomic conditions. This is true not only in the demand for money balances and the willingness of individuals to hold productive and speculative assets, but also in the determination of real wages, real income, and the demand for exports. The linkage through the government deficit arises because the outcomes for prices, production, private storage, and other variables endogenous to agriculture determine in part the level of federal spending. As government deficits and expenditures rise, there is a positive effect on consumption and investment. Over the short run, there are multiplier effects leading to further increases in economic activity and in tax revenues, which are a positive function of economic growth. Interestingly, the operation of government storage and deficiency payments are examples of expenditures that are endogenously determined. This feature is in contrast to much of the non-farm components of the federal budget, which are fixed in dollar terms. In addition to these more direct forward and backward linkages within the domestic economy, there are important interdependencies between the monetary policies of different countries. These also represent indirect linkages between a domestic macroeconomy and agriculture. Monetary linkages between nations have important implications for exchange rates and worldwide recessions. For example, as U.S. monetary policy changes, responses in the rest of the world affect to some degree foreign economies, exchange rates, and prices, which, in turn, translate into shifts in the export demand facing domestic farmers. Under fixed exchange rate regimes, such as the monetary system set up by the Bretton Woods agreement, central banks are compelled to intervene to maintain a fixed value of their domestic currency vis-à-vis foreign currencies. With flexible rates, no such intervention is necessary. While monetary authorities may still intervene from time to time in foreign exchange markets, such actions have become discretionary. Under fixed exchange rates, expansionary monetary policies in one country cause similar expansionary policies in others as they observe their currencies appreciating. The country beginning the expansionary process is said to have “exported” its inflation. When exchange rates are flexible, no obligation exists to maintain exchange rates by domestic inflation. Only if nations keep rates within a certain range in a “managed float” can inflation be exported. McKinnon and others have emphasized in recent years that the argument for monetary independence between nations under flexible exchange rates involves an untested assumption about the portfolios of money holders. A monetarily independent country must be an “insular” economy, at least as far as money demand is concerned.
Money holders must not substitute foreign currency holdings for domestic ones when the domestic currency becomes less desirable, nor vice versa. If this is not true, currency substitution implies that the effects of domestic monetary policy are exported even under perfectly flexible rates. This exporting of monetary policy, and the resulting loss of independence, can occur in two ways. First, when there is substitution between currencies, money growth rates are conditional on expected money growth abroad. For example, suppose the United States engages in some unanticipated monetary policy, say, expansion. There will be an increase in the demand for the foreign currency if domestic expansionary policies are expected to depreciate the value of the dollar.

Economic welfare is the sum of producer and consumer surplus in the agricultural sector

There are several methodologies developed in the last few years that can provide more accurate estimates of GHG emissions in California. These methods incorporate the impact of diet, accounting for, as an example, the fact that fiber content is positively associated with methane emissions while lipid content is negatively correlated. About half of California’s livestock GHG emissions comes from enteric fermentation and half from manure in concentrated beef cattle and dairy operations. The largest opportunities for changes in livestock practices center on feed and manure management. California offers a uniquely diverse range of crop byproducts for use as dairy cow feeds, and research has improved our understanding of the impacts of different feeds on productivity, economics, and GHG emissions. For example, grape pomace, a byproduct of the wine industry, has been shown to reduce methane emissions when fed to dairy cattle in pelleted form without reducing milk production. A shift towards solid manure management practices may result in reduced GHG emissions by reducing the anaerobic digestion that occurs when water is used to flush manure into storage lagoons. However, Owen and Silver indicated that solid manure management can also produce substantial GHG emissions; thus, minimizing manure storage time is important to mitigating emissions. One caution: there is a risk that focusing on one climate pollutant, such as methane, could lead to practices that have negative trade-offs, such as increased N2O emissions and nutrient loading in soil and water.

A recent report submitted to the California Air Resources Board suggests it may be technically feasible for California to achieve a 50% reduction in methane emissions from dairy manure management by 2030 if supportive policies are created. This would require capturing or avoiding methane generated from manure storage for an estimated 60% of dairy cows in California, particularly the largest dairy operations where cost-benefit considerations are most favorable. If successful, a gallon of California milk may be the least GHG-intensive in the world. The report outlines several alternative manure management practices and technologies. A diversity of practices is needed to reflect the range of dairy sizes and layouts in California. For example, lagoon storage systems, which can emit large amounts of methane, lend themselves to the use of covers or engineered anaerobic digestion systems for bio-methane collection. Potential trade-offs of these practices with respect to air quality, crop management, nutrient use efficiency, and cost, however, require further analysis. Pasture systems are used in coastal areas where farms have less crop land available than in the Central Valley; pasture requires significantly more land and water for feed production compared to current dairy systems that rely on corn silage, grass silage, and alfalfa. Rangelands, comprising more than two-thirds of California’s agricultural acreage, are working lands that provide ecosystem services in addition to supporting livestock production. Grasslands have higher levels of total soil carbon compared to cultivated lands, and similar amounts to California forests. There are numerous options for increasing carbon storage in rangelands. Modeling analyses project that restoration of native oaks could increase carbon storage in wood biomass and litter. In a study of riparian revegetation in Marin, Sonoma, and Napa counties, modeled soil carbon sequestration rates averaged 0.8 tons C per acre per year, while modeled results of restored woody riparian areas demonstrated ecosystem carbon storage potential of 16.4 tons C per acre per year over a 45-year period. Cultivation and re-seeding to restore native perennial grasses also shows promise.

Native grasses may sequester carbon at slightly deeper soil levels due to their perennial root systems. Rangelands with native grasses and oaks have lower soil carbon losses and higher nitrogen cycling rates. Verifying carbon sequestration on rangelands requires a long-term approach. Soil carbon can take decades to build to a measurable level: rangelands rarely receive intensive management, and these systems are much more exposed than irrigated agriculture to annual variations in moisture. On average, California’s grasslands lose carbon, but the net C gain or loss depends on precipitation, with net losses of carbon in years when the timing of precipitation causes a short growing season, and gains when the timing of rains leads to a longer growing season. The use of composted materials in rangelands may reduce N2O emissions in comparison to those materials entering waste streams and being subject to standard manure and green waste management practices. One study on California’s coastal and valley grasslands showed that use of compost above standard application rates could boost net ecosystem carbon by 25% to 70%, sequestering carbon at a rate of 0.2063 tons C to 0.2104 tons C per acre over the 3-year study, or a rate of 0.0688 tons C to 0.0701 tons C per acre per year, largely by decreasing the amount of C being lost from these grasslands. Researchers using the DAYCENT model to examine different compost amendments and project over longer time frames found that the net climate mitigation potential ranges from 0.5261 to 0.6394 tons CO2 equivalent per acre per year in the first 10 years, declining by approximately half of that by year 30. Applying organic materials to rangelands in Southern California demonstrated co-benefits: stabilized soil nitrogen stocks, improved plant community resilience and productivity, and increased soil organic matter after 1 year of application. However, due to the very limited number of studies and the need to demonstrate sustained carbon sequestration, long-term studies that span California rangelands are needed to validate these results and provide long-term policy recommendations. Climatic variation across the state may enhance or diminish observable carbon sequestration benefits.

Further, it will be important to ensure that rangeland compost application practices do not lead to undesired plant species shifts and do not create negative trade-offs for water quality through nutrient run-off or leaching; it will also be important to track emissions associated with fossil fuel use for transportation and distribution of compost across rangeland sites. Additional practices that have shown benefit elsewhere and should be examined in California include planting of legumes, fertilization, irrigation, and grazing management. In particular, grazing management may significantly impact rangeland carbon sequestration. While heavy grazing that leads to erosion can degrade carbon storage, there is conflicting evidence in California and elsewhere on specific grazing practices that can benefit soil carbon. Most studies in California that have assessed the effects of grazing on soil carbon compared only grazed versus ungrazed sites, without assessing the effects of grazing duration, intensity, frequency, and rest periods. The USDA Natural Resources Conservation Service provides cost-share programs for range managers to split the cost of implementing improved management techniques. Currently, only 30% to 40% of California ranchers participate in these programs. The research above points to the magnitude of opportunity from alternative rangeland practices and the need to identify socioeconomic opportunities and barriers to greater participation in range management incentive programs. The most recent assessment of biomass in California details the availability of resources, including agricultural biomass, among others, that could support generation of three to four times the current biomass-based renewable energy being produced, depending on policies and regulations affecting biomass use. Biomass use for energy, however, has declined in recent years, as it is generally more expensive than alternative fuels. In addition, interconnection issues between biomass facilities, such as anaerobic digesters, and utilities complicate and increase the cost of new facilities. Research and policy actions to reduce barriers and incentivize co-benefits from the use of biomass for power and fuel will be required to expand this sector sustainably. Current biomass energy production from agricultural residues in California is largely based on combustion of nut shells and woody biomass from orchards and vineyards. While one grower has installed a successful on-farm small-scale gasification system for nut shells and wood chips, larger scale facilities that convert woody biomass to electricity are typically more than 40 years old, and the power produced is more expensive than other forms of alternative energy. Many plants are now idle or closed, leaving tree and vine producers with fewer and more expensive options for disposal of biomass. Other underutilized agricultural biomass includes rice straw and livestock manures suitable for anaerobic digestion technology. Manure alone is not a high biogas-yielding feedstock.

Supplementing manure with fermentable feedstocks such as crop or food processing residues can improve the energy and economic return from anaerobic digesters, but this practice currently faces regulatory and practical obstacles, like managing an additional source of organic materials and additional nutrients and salts. Nonetheless, there is limited but real potential for some crop-based bio-fuels and bio-energy in California based on locally optimal feedstocks and bio-refineries. Twenty-five years after the publication of the first IPCC Assessment Report, it is instructive to step back and ask what we have learned about the economic impacts of climate change on the agricultural sector, not just from a technical standpoint, but from a conceptual one. California is an ideal focus for such an analysis both because of its strong agricultural sector and because of its proactive climate policy. After passing the 2006 Global Warming Solutions Act, the state has sponsored research to complete three climate change assessments, with the fourth assessment report in progress at the time of submitting this paper. This effort to study adaptation appears to be relatively more prolific than in many other global sub-regions, particularly over the past decade. Assessing adaptation potential — the institutional, technological, and management instruments for adjusting to actual or expected climatic change and its effects — represents an important turning point in the climate impacts literature. The important role of responsive decision-making by farmers and institutions is recognized for the first time as the key ingredient to dampening the effects of climate change. Adaptation was simply mentioned as an optimistic afterthought in earlier studies, which suggested that agriculture would fully or mostly adjust in the long term — although there was sparse detail on how it would do so. When adaptation was directly included in the modeling framework, economists found that the estimated welfare damages from climate change documented in previous studies declined. In colloquial terms, this is a shift from modeling the “dumb” farmer to modeling one with reasonable economic agency. There are four key concepts linked to the idea of adaptation: vulnerability, adaptive capacity, economic welfare, and economic efficiency. In the IPCC literature, adaptation is connected to the foundational concept of vulnerability, defined as the propensity for agricultural systems to be affected by future climatic changes. Vulnerability can also be defined endogenously as the ability of farmers and institutions to respond and adapt to, and recover from, such changes. This latter definition is synonymous with the concept of adaptive capacity, or the ability of a system to moderate potential damages and take advantage of adaptation and mitigation opportunities to reduce the vulnerability of the system to climatic changes. Adaptation dampens welfare losses caused by climate change. The relationship of adaptation with vulnerability is more complex, and is better represented as one of trade-offs. For example, changing the crop mix in favor of high-value crops may reduce vulnerability to water scarcity, but it may increase vulnerability to heat. Finally, the concept of efficient adaptation has been defined as a situation where the costs of effort to reduce climate-induced damages are less than the resulting benefits from adapting.
Given the central role of farmer and institutional responsiveness, how do recent agro-economic assessments suggest that specific adaptations may improve economic welfare and reduce vulnerability? What is economically efficient adaptation in the short and long run? What are the limits to the agricultural sector’s adaptive capacity? This is certainly not the first review of climate impact assessments for California agriculture. Smith and Mendelsohn highlighted the importance of regional climatic impacts to several economic sectors in California, integrating across a range of modeling approaches. The agricultural impacts are calculated by the Statewide Agricultural Production (SWAP) model under wet and dry scenarios. The results echo those of more recent SWAP studies, suggesting that field crop acreage will decline by the end of the century under a dry scenario, though the decline in revenues will be partially offset by increased production of high-value crops.

The TGA arm runs a genetic algorithm over the RBF model to predict the best designs

A subset selection strategy was unable to consistently improve on the regular NNGA-DYCORS performance by focusing the coordinate search on the most sensitive sets of parameters. This may be either because the RBF does not adequately model a given test function, so it does not correctly identify the most important parameters in the database, or because the coordinate search method does not properly exploit the narrowed parameter space. In general, it may be useful to reduce the dimensionality of the parameter space, but the strategy of doing so using model adherence ‘drop-out’ experiments was not uniformly successful. This article demonstrates that the NNGA-DYCORS hybrid learning algorithm outperforms its constituent algorithms on the important criteria of robustness and generalizability to different kinds of problems. Thus, this algorithm can be applied to a wide variety of physical and biological design optimization problems with a degree of assurance that parameter estimates will be optimal while minimizing necessary resources. In addition, as this hybrid is both robust and highly generalizable to many types of design problems, it should be useful for practitioners who are not experts in surrogate optimization methods and who work on a variety of problems of diverse complexity. Optimizing media for biological processes, such as those used in tissue engineering and cultivated meat production, is difficult due to the extensive experimentation required, the number of media components, nonlinear and interactive responses, and the number of conflicting design objectives.

Here we demonstrate the capacity of a nonlinear design-of-experiments method to predict optimal media conditions in fewer experiments than a traditional DOE. The approach is based on a hybridization of a coordinate search for local optimization with dynamically adjusted search spaces and a global search method utilizing a truncated genetic algorithm, with radial basis functions used to store and model prior knowledge. Using this method, we were able to reduce the cost of muscle cell proliferation media while maintaining cell growth 48 h after seeding, using 30 common components of typical commercial growth medium, in fewer experiments than a traditional DOE. While we clearly demonstrated that the experimental optimization algorithm significantly outperforms conventional DOE, due to the choice of a 48 h growth assay weighted by medium cost as an objective function, these findings were limited to performance at a single passage and did not generalize to growth over multiple passages. This underscores the importance of choosing objective functions that align well with process goals. Cell culture media are a critical component of bio-processes such as pharmaceutical manufacturing and the emerging field of cultivated meat products. Optimizing culture media is a difficult task due to the extensive experiments required, the number of media components, nonlinear and interactive responses from each component, and conflicting design objectives. Additionally, for cultured meat products, media need to be less expensive than those currently deployed for other cell culture processes, be food-grade, and account for safety, component stability, and effects on sensory characteristics of final products. Without much in the way of first-principles models for these objectives, especially for the adherent mammalian muscle cells used for cultivated meat production, media optimization must be done experimentally, with constraints on inputs, outputs, and the number of experiments.

Optimizing one factor at a time or with random experiments is still the most common way of exploring design space. This strategy is very inefficient for large systems and is unable to consider interactions among media components. Design-of-Experiments methods are better able to manage large numbers of components in fewer experiments using Factorial, Fractional Factorial, Plackett-Burman, and Central Composite Designs, where linear and polynomial models can correlate first-order and interactive effects of media components. In general, DOE methods are able to optimize < 10 variables and, with the help of screening designs, can handle problems with > 25 variables, though at the expense of ignoring interactions and screened variables, and easily costing > 100 experiments. Experimental optimization of media has also been done using stochastic methods such as genetic algorithms; this approach is generally suited to optimizing systems of dimensionality > 15, where DOE methods can become experimentally cumbersome, but can also require 200 experiments. Because the size of the design space increases exponentially with the number of design variables, a natural advance was to use response surface models to capture information about interactions and nonlinearity. These techniques can then be used to sequentially identify optimal culture conditions while simultaneously improving modeling accuracy. Oftentimes experimenters will employ polynomial models to find optimal culture conditions, but only after extensive DOE to reduce the dimensionality of the problem space to < 5. More advanced modeling techniques include neural networks, decision trees, and Gaussian processes, which are often better at generalizing noisy, nonlinear, and multi-modal data.
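To illustrate why screening designs matter, the short sketch below (purely illustrative, not from the study) counts the runs in a two-level full factorial, which grows as 2^k with the number of factors; fractional factorial and Plackett-Burman designs exist precisely to avoid this explosion.

```python
# Minimal sketch of why the number of experiments explodes with dimensionality:
# a two-level full factorial needs 2**k runs, which is why fractional factorial,
# Plackett-Burman, and other screening designs are used for larger k.
from itertools import product

def full_factorial_two_level(k):
    """All 2**k corner points of a k-factor design, coded as -1 / +1."""
    return list(product([-1, 1], repeat=k))

for k in (3, 5, 10):
    print(f"{k} factors -> {len(full_factorial_two_level(k))} runs")
print("20 factors ->", 2 ** 20, "runs (a full factorial is impractical)")
```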

Zhang and Block demonstrated that, when combined with global optimization methods, these response surface methods can optimize problems with > 20 variables in less than half the number of experiments required by traditional DOE. In the previous chapter, this author further improved the robustness of this algorithm by using a hybrid optimization scheme validated on simulated design problems. Here we employ this novel nonlinear experimental design algorithm to optimize the proliferation of C2C12 cells while simultaneously reducing media cost by modeling the response surface of culture conditions using an RBF with a hybridized global/local optimization scheme. We then compare this approach to a more traditional DOE method. The organization of this article is as follows: Section 3.2 outlines the experimental and computational methods used in media optimization, Section 3.3 presents the results, and Section 3.4 discusses the results and current challenges.

Using the trained RBF model, the two arms of our algorithm, TGA and DYCORS, each suggest five experimental conditions, for a total of 10 experiments per batch, within the design space [×1/2, ×2] of the GM that optimize α. Because the model is based on a small amount of noisy data, the genetic algorithm is stopped before it can converge, to implicitly account for model and experimental uncertainty. The DYCORS arm of the algorithm searches in the region around the best design and picks the best predicted set of designs in that region, which expands and contracts based on the quality of previous experiments. The new experiments are conducted and the resulting data are used to correct and retrain the RBF model. To allow the RBF model to generalize better during early periods of optimization, 30 randomly selected experimental conditions were run initially. The optimization loop was stopped when the α quality of the media stopped improving. The general framework for the HND is shown in Figure 3.1. As a control method, a traditional DOE was used to optimize the same media design problem in three steps.
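Before turning to the DOE control, the sketch below illustrates the batch-proposal step of the loop just described: a truncated GA arm and a DYCORS-style local arm each nominate five candidate media scored on the surrogate. The surrogate here is a stand-in function, and all population sizes, perturbation rates, and bounds are assumptions for illustration, not the settings used in this work.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, LOW, HIGH = 30, 0.5, 2.0          # 30 components, fold-changes within [x1/2, x2] of GM

def surrogate(X):
    # Stand-in for the trained RBF model's prediction of alpha (higher is better).
    return -np.sum((X - 1.1) ** 2, axis=1)

# Prior "experiments" (synthetic) define the current best design for the local arm.
X_data = rng.uniform(LOW, HIGH, size=(30, DIM))
y_data = surrogate(X_data) + rng.normal(0, 0.05, len(X_data))
x_best = X_data[np.argmax(y_data)]

# Global arm: truncated GA, stopped after a few generations so it does not over-trust the model.
pop = rng.uniform(LOW, HIGH, size=(40, DIM))
for _ in range(5):
    parents = pop[np.argsort(surrogate(pop))[-20:]]                  # keep the predicted-best half
    children = parents[rng.integers(0, 20, size=40)] + rng.normal(0, 0.05, (40, DIM))
    pop = np.clip(children, LOW, HIGH)
ga_batch = pop[np.argsort(surrogate(pop))[-5:]]

# Local arm: DYCORS-style search perturbing a random subset of coordinates around the best design.
trials = np.tile(x_best, (200, 1))
mask = rng.random((200, DIM)) < 0.2                                  # perturb ~20% of coordinates
trials[mask] += rng.normal(0, 0.1, size=int(mask.sum()))
trials = np.clip(trials, LOW, HIGH)
local_batch = trials[np.argsort(surrogate(trials))[-5:]]

next_batch = np.vstack([ga_batch, local_batch])                      # 10 media for the next round
print(next_batch.shape)                                              # (10, 30)
```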

A 'Leave-One-Out' experiment was conducted in which media composed of all components at their GM concentrations, but each excluding one individual component, were tested for their proliferation capacity using the %AB metric, similar to what was done in previous work. The lowest-impact components had their concentrations fixed at their respective GM concentrations. Next, a Folded/Un-Folded Plackett-Burman design was implemented with the remaining components at the upper and lower bounds of the design problem. This was done to determine the first-order linear effects of each component on the objective function α. A linear model to predict α was used in conjunction with a LASSO algorithm to rank the most important first-order effects, and all but the highest-impact components were kept at their GM concentrations. Finally, the remaining components were used in a Central Composite Design, in which experiments are spread out across the design space to more thoroughly explore potential optimal designs. The best α design from this DOE method was considered the optimal DOE design.

The DOE-LOO step identified Ferric Nitrate, MgSO4, Glycine, L-Isoleucine, Choline Chloride, Riboflavin, and Thiamine HCl as components that, when left out of GM, had no statistical effect on %AB at 48 hr post-seeding. These components were set to their respective GM concentrations for all subsequent DOE experiments. Next, the DOE-PB with LASSO identified the six most α-important components of the remaining 23 components. To reduce the number of experiments for the DOE-CCD design, L-Cystine and L-Serine were kept constant at ×1/2 normalized units above and below their GM midpoint concentrations, respectively, based on the sign of their coefficients. The remaining four components in the CCD had their upper/lower bounds changed to ×1/2 normalized units above and below their GM midpoints. These components were varied in a CCD design, with the best medium being 200 mg/L KCl, 388 mg/L L-Glutamine, 9000 mg/L Glucose, and 5% FBS, shown in detail in Table 3.1. An 80% increase in α at 48 hr post-seeding over GM was measured using 50% less FBS than GM.

For the HND optimization loop, α was used as the objective function and calculated using %AB measured at 48 hr post-seeding at 96-well plate scale. The RBF was initially trained with 30 randomly selected experiments. Figure 3.2 shows that the average HND designs improved in both α and the %AB metric over time, quickly overcoming standard GM and achieving results similar to the best DOE design within 70 experiments. We have included the proliferation metric in Figure 3.2 for completeness even though it was not used as the objective function α in this work. The HND was stopped at 70 experiments because both %AB and α stopped improving. The best medium found had an α measured to be 56% better than GM during the optimization loop, using 32.5% less FBS than GM. Figure 3.3 shows the differences between the optimal media. For the most part, the HND identified optimal concentrations that were slightly elevated compared to DOE, except for KCl, FBS, and Glucose. It is also notable that both HND and DOE determined that Glucose should be elevated and FBS reduced relative to GM. Figure 3.4 shows the media efficiency metric α plotted against the component concentrations for all experiments, demonstrating the nonlinear, interactive, and ultimately non-trivial nature of this experimental design optimization problem.
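The Plackett-Burman screening step with LASSO described above can be sketched as follows; the coded design matrix, effect sizes, and regularization strength are synthetic assumptions, not the study's data.

```python
# Minimal sketch of the screening step: fit a LASSO-regularized linear model of
# alpha on coded (+1/-1) factor levels and keep the components with the
# largest-magnitude coefficients. All data here are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_runs, n_factors = 28, 23
X = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))    # coded PB-style runs
true_effects = np.zeros(n_factors)
true_effects[[0, 3, 7, 11, 15, 19]] = [0.8, -0.5, 0.6, 0.4, -0.7, 0.3]
alpha_measured = X @ true_effects + rng.normal(0, 0.1, n_runs)

model = Lasso(alpha=0.05).fit(X, alpha_measured)
ranked = np.argsort(np.abs(model.coef_))[::-1]
print("six highest-impact factors:", ranked[:6])
```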
These α-optimal HND and DOE designs were then tested against GM using %AB at 24, 48, and 72 h post-seeding, where the designed media have higher %AB relative to GM but that advantage diminishes over time. As a further check, α was calculated using raw cell number normalized by the volume of FBS in each experiment, where it was found that HND and DOE again outperformed GM in terms of the objective function α due to their lower cost. However, HND and DOE produced 8% and 9% fewer cells than GM, respectively, using 70 and 103 total experiments. It is notable that, despite the 30 components used, the HND was able to design a medium similar to the DOE optimum, with a similar degree of proliferation %AB and α, in fewer experiments. Additionally, this multi-step DOE was itself more efficient than any single DOE would have been, suggesting that the HND is much more efficient and simpler to use than the typical approach to high-dimensional optimization. This is valuable in optimizing media due to the difficulty of collecting large amounts of data with many components.

The reasons for the success of this method are likely the balance between global and local optimization, and the ability of the HND to accumulate information using the RBF, which can regress on nonlinear, noisy, and interaction-heavy problems, reducing the need for the cumbersome dimensionality-reduction experiments used in the traditional DOE. The HND suggested higher concentrations than GM or DOE for most media components, except for KCl, FBS, and Glucose. This is likely because the DOE method utilized dimensionality reduction; that is, factors that demonstrated insignificant effects were fixed at their GM level and no longer included in the optimization. The HND, on the other hand, could vary components throughout the optimization process, including increasing component concentrations when they had even a small positive effect. Inclusion of a per-component cost might dampen this effect. While the RBF can model nonlinear and interactive processes, the effect of each component on α is unclear without further experiments or model validation, a disadvantage of the HND approach. Nonetheless, sensitivity analysis using VARS was conducted and indicates that FBS, Glucose, and MgSO4 likely have a significant effect on α, while other effects are more difficult to determine with the limited data available.
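The exact definition of α is not reproduced here; as a purely illustrative sketch, the snippet below assumes a simple growth-per-unit-cost form to show how a cost-weighted objective can favor a cheaper medium with equal proliferation. All component names, concentrations, and prices are hypothetical.

```python
# Illustrative only: this assumes a growth-per-unit-cost form for the media
# efficiency metric, which is NOT the study's exact definition of alpha.
import numpy as np

def alpha_cost_weighted(percent_ab_48h, component_conc, component_cost):
    """Hypothetical cost-weighted media efficiency metric (assumed form)."""
    medium_cost = float(np.dot(component_conc, component_cost))
    return percent_ab_48h / medium_cost

# Two hypothetical media with the same growth; the second uses half the FBS (the dominant cost).
costs = np.array([0.02, 0.01, 50.0])    # illustrative unit costs: glucose, KCl, FBS
gm    = np.array([4.5, 0.4, 10.0])      # concentrations in a GM-like medium
lean  = np.array([9.0, 0.2, 5.0])       # more glucose, half the FBS
print(alpha_cost_weighted(80.0, gm, costs), alpha_cost_weighted(80.0, lean, costs))
```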

A conventional NIH-supported clinical study was conducted subsequent to first deployment

Although new facility construction or repurposing/re-qualification may not immediately help with the current pandemic, given that only existing and qualified facilities will be used in the near term, it will position the industry for the rapid scale-up of countermeasures that may be applied over the next several years. An example is the April 2020 announcement by the Bill & Melinda Gates Foundation of its intention to fund “at-risk” development of vaccine manufacturing facilities to accommodate pandemic-relevant volumes of vaccines, before knowing which vaccines will succeed in clinical trials. Manufacturing at-risk with existing facilities is also being implemented on a global scale. The Serum Institute of India, the world’s largest vaccine manufacturer, is producing at-risk hundreds of millions of doses of the Oxford University COVID-19 vaccine, while the product is still undergoing clinical studies.12 Operation Warp Speed 13 in the United States is also an at-risk multi-agency program that aims to expand resources to deliver 300 million doses of safe and effective but “yet-to-be-identified” vaccines for COVID-19 by January 2021, as part of a broader strategy to accelerate the development, manufacturing, and distribution of COVID-19 countermeasures, including vaccines, therapeutics, and diagnostics. The program had access to US$10 billion initially and can be readily expanded. As of August 2020, OWS had invested more than US$8 billion in various companies to accelerate manufacturing, clinical evaluation, and enhanced distribution channels for critical products.

For example, over a period of approximately 6 months, OWS helped to accelerate development, clinical evaluation, and at-risk manufacturing of two mRNA-based COVID-19 vaccines, with at least three more vaccines heading into advanced clinical development and large-scale manufacturing by September/October 2020. At the time of writing, no PMP companies had received support from OWS. However, in March 2020, Medicago received CAD$7 million from the Government of Quebec and part of the Government of Canada’s CAD$192 million investment in expansion programs, both of which were applied to PMP vaccine and antibody programs within the company.15

Once manufactured, PMP products must pass quality criteria meeting a defined specification before they reach the clinic. These criteria apply to properties such as identity, uniformity, batch-to-batch consistency, potency, purity, stability, residual DNA, absence of vector, low levels of plant metabolites such as pyridine alkaloids, and other criteria as specified in guidance documents. Host and process-related impurities in PMPs, such as residual HCP, residual vector, pyridine alkaloids from solanaceous hosts, phenolics, heavy metals, and other impurities that could introduce a health risk to consumers, have been successfully managed by upstream process controls and/or state-of-the-art purification methods and have not impeded the development of PMP products. The theoretical risk posed by non-mammalian glycans, once seen as the Achilles heel of PMPs, has not materialized in practice. Plant-derived vaccine antigens carrying plant-type glycans have not induced adverse events in clinical studies, where immune responses were directed primarily to the polypeptide portion of glycoproteins. One solution for products intended for systemic administration, where glycan differences could introduce a pharmacokinetic and/or safety risk, is the engineering of plant hosts to express glycoproteins with mammalian-compatible glycan structures.

For example, ZMapp was manufactured using the transgenic N. benthamiana line ΔXT/FT, expressing RNA interference constructs to knock down the expression of the enzymes XylT and FucT responsible for plant-specific glycans, as a chassis for transient expression of the mAbs. In addition to meeting molecular identity and physicochemical quality attributes, PMP products must also be safe for use at the doses intended and efficacious in model systems in vitro, in vivo, and ex vivo, following the guidance documents listed above. Once proven efficacious and safe in clinical studies, successful biologic candidates can be approved via a BLA in the United States and a new marketing authorization in the EU.

In emergency situations, diagnostic reagents, vaccine antigens, and prophylactic and therapeutic proteins may be deployed prior to normal marketing authorization via fast-track procedures such as the FDA’s emergency use authorization.16 This applies to products approved for marketing in other indications that may be effective in a new emergency indication, and new products that may have preclinical data but little or no clinical safety and efficacy data. Such pathways enable controlled emergency administration of a novel product to patients simultaneously with the traditional regulatory procedures required for subsequent marketing approval. In the United States, the FDA has granted EUAs for several diagnostic devices, personal protective devices, and certain other medical devices, and continuously monitors EUAs for drugs. For example, the EUA for chloroquine and hydroxychloroquine to treat COVID-19 patients was short-lived, whereas remdesivir remains under EUA evaluation for severe COVID-19 cases. The mRNA-based SARS-CoV-2 vaccines currently undergoing Phase III clinical evaluation by Pfizer/BioNTech and Moderna/NIAID, and other vaccines reaching advanced stages of development, are prime candidates for rapid deployment via the EUA process. No PMPs have yet been granted an EUA, but plant-made antibodies and other prophylactic and therapeutic APIs may be evaluated and deployed via this route. One example of such a PMP candidate is griffithsin, a broad-spectrum antiviral lectin that could be administered as a prophylactic and/or therapeutic for viral infections, as discussed later.

The FDA’s EUA is a temporary authorization subject to constant review and can be rescinded or extended at any time based on empirical results and the overall emergency environment. Similarly, the EU has granted conditional marketing authorisation to rapidly deploy drugs such as remdesivir for COVID-19 in parallel with the standard marketing approval process for the new indication.

The regulations commonly known as the animal rule 17 allow for the approval of drugs and licensure of biologic products when human efficacy studies are not ethical and field trials to study the effectiveness of drugs or biologic products are not feasible. The animal rule is intended for drugs and biologics developed to reduce or prevent serious or life-threatening conditions caused by exposure to lethal or permanently disabling toxic chemical, biological, radiological, or nuclear substances. Under the animal rule, efficacy is established based on adequate and well-controlled studies in animal models of the human disease or condition of interest, and safety is evaluated under the pre-existing requirements for drugs and biologic products. As an example, the plant-derived mAb cocktail ZMapp for Ebola virus disease, manufactured by Kentucky Bioprocessing for Mapp Biopharmaceutical 18 and other partners, and deployed during the Ebola outbreak in West Africa in 2014, was evaluated only in primates infected with the Congolese variant of the virus, with no randomized controlled clinical trial before administration to infected patients under a compassionate use protocol. Although the fast-track and streamlined review and authorization procedures described above can reduce time-to-deployment and time-to-approval for new or repurposed products, current clinical studies to demonstrate safety and efficacy generally follow traditional sequential designs. Products are licensed or approved for marketing based on statistically significant performance differences compared to controls, including placebo or standards of care, typically generated in large Phase III pivotal trials. One controversial proposal, described in a draft WHO report, is to accelerate the assessment of safety and efficacy for emergency vaccines by administering the medical intervention and deliberately exposing subjects to the threat agent in a challenge study.

Although the focus of the WHO draft report was on vaccines, the concept could conceivably be extended to non-vaccine prophylactics and therapeutics. Results could be generated quickly, as the proportion of treated and control subjects would be known, as would the times of infection and challenge. Challenge studies in humans, also known as controlled human infection models or controlled human infection studies, are fraught with ethical challenges but have already been used to assess vaccines for cholera, malaria, and typhoid. The dilemma for a pathogen like SARS-CoV-2 is that there is no rescue medication yet available for those who might contract the disease during the challenge, as there was for the other diseases, putting either study participants or emergency staff at risk.

In the EU, the current regulatory environment is a substantial barrier to the rapid expansion of PMP resources to accelerate the approval and deployment of products and reagents at relevant scales in emergency situations. A recent survey of the opinions of key stakeholders in two EU Horizon 2020 programs, discussing the barriers and facilitators of PMPs and new plant breeding techniques in Europe, indicated that the current regulatory environment was seen as one of the main barriers to the further development and scale-up of PMP programs. In contrast, regulations have not presented a major barrier to PMP development in the United States or Canada, other than the lengthy timescales required for regulatory review and product approval in normal times. Recognizing current national and global needs, regulatory agencies in the United States, Canada, the EU, and the United Kingdom have drastically reduced the timelines for product review, conditional approval, and deployment. In turn, the multiple unmet needs for rapidly available medical interventions have created opportunities for PMP companies to address such needs with gene expression tools and manufacturing resources that they already possess. This has enabled the ultra-rapid translation of product concepts to clinical development in record times – weeks to months instead of months to years – in keeping with other high-performance bio-manufacturing platforms. The current pandemic situation, plus the tangible possibility of global recurrences of similar threats, may provide an impetus for new investments in PMPs for the development and deployment of products that are urgently needed.

An effective vaccine is the best long-term solution to COVID-19 and other pandemics. Worldwide, governments are trying to expedite the process of vaccine development by investing in research, testing, production, and distribution programs, and by streamlining regulatory requirements to facilitate product approval and deployment, with highly aggressive timelines. A key question that has societal implications beyond vaccine development is whether the antibody response to SARS-CoV-2 will confer immunity against re-infection and, if so, for how long. Will humans who recover from this infection be protected against a future exposure to the same virus months or years later? Knowing the duration of the antibody response to SARS-CoV-2 vaccines will also help to determine whether, and how often, booster immunizations will be needed if the initial response exceeds the protection threshold. It is clear that some candidate vaccines will have low efficacy, some will have high efficacy, and some responses will decline over time and will require booster doses.
An updated list of the vaccines in development can be found in the WHO draft landscape of COVID-19 candidate vaccines. As of August 2020, among the ~25 COVID-19 vaccines in advanced development, five had entered Phase III clinical studies, led by Moderna/NIAID, Oxford University/AstraZeneca, Pfizer/BioNTech, Sinopharm, and Sinovac Biotech.20 Most of these candidates are intended to induce antibody responses that neutralize SARS-CoV-2, thereby preventing the virus from entering target cells and infecting the host. In some cases, the vaccines may also induce antibody and/or cellular immune responses that eliminate infected cells, thereby limiting the replication of the virus within the infected host. The induction of neutralizing antibodies directed against the SARS-CoV-2 spike glycoprotein is considered a priority. The immunogens used to elicit neutralizing antibodies are various forms of the S protein, including the isolated receptor-binding domain. The S protein variants can be expressed in vivo from DNA or mRNA constructs or from recombinant adenovirus or vaccinia virus vectors, among others. Alternatively, they can be delivered directly as recombinant proteins with or without an adjuvant, or as a constituent of a killed virus vaccine. Many of these approaches are included among the hundreds of vaccine candidates now at the pre-clinical and animal model stages of development. Antibody responses in COVID-19 patients vary greatly. Nearly all infected people develop IgM, IgG, and IgA antibodies against the SARS-CoV-2 nucleocapsid and S proteins 1–2 weeks after symptoms become apparent, and the antibody titers remain elevated for at least several weeks after the virus is no longer detected in the convalescent patient. The nature and longevity of the antibody response to coronaviruses are relevant to the potency and duration of vaccine-induced immunity. By far the most immunogenic vaccine candidates for antibody responses are recombinant proteins.

Agricultural impacts from climate change are rooted in complex pathways

Trenberth et al. indicate that annual precipitation decreased in the southwestern United States over the period 1901–2005. Consistent with scientific theory, empirical research suggests that warmer climates, such as those projected for the Southwest, will lead to more extreme precipitation intensity and frequency, particularly during the winter season. Since annual precipitation is projected to decline, more extreme events do not translate into higher total rainfall for a given year. Instead, it is projected that light precipitation, an important source for soil moisture and groundwater recharge, will concomitantly decline. Between 1901 and 2010, the areal extent of drought increased in the southwestern United States. Some have attributed the increasing expanse of drought, particularly in the previous decade, to warmer temperatures. Others have suggested that it is due to changes in atmospheric circulation. In addition to temperature and precipitation, CO2 fertilization is another climate change pathway affecting agriculture. Increased atmospheric carbon dioxide stimulates photosynthesis, leading to increased plant productivity and decreased water and nutrient use. Benefits from elevated CO2 concentrations depend upon plant type and irrigation level. C3 photosynthetic plants will benefit more than C4 plants, and dryland cropping systems will benefit more than irrigated systems. The extent to which CO2 fertilization mitigates climate-induced water scarcity in the field still lacks scientific consensus, and there is debate on the extent to which simulated CO2 effects actually reproduce the results of free-air carbon dioxide enrichment experiments.

Assessments of crop impacts due to climatic change fall under two broad categories: process-based and statistical models. Process-based models simulate the physiological development, growth, and yield of a crop on the basis of interactions between environmental variables and plant physiological processes. Statistical crop models impute a relationship between historic crop yield and climate variables, often in order to project the impact on yield under future climate scenarios. Process-based models remain the gold standard in crop modeling because one is able to study the relationship between weather and all phases of crop growth across a range of weather possibilities, even those lying outside the historical record. California field crops have been modeled using DAYCENT. Both studies highlight the resilience of alfalfa yield under the A2 scenario by the end of the century, whereas five other crops exhibit a decline. Jackson et al. also find alfalfa yield to be particularly resilient to early and repeated heat waves during May–July. Lee et al. also run climate projections with and without a CO2 fertilization effect on seven field crops in the Central Valley of California. They assume a CO2 increase of 350 ppmv from 1990 levels enhances net primary production by 10% for all crops except alfalfa and maize. They find that CO2 fertilization increases crop yields 2–16% above the model without CO2 effects under the high-emissions scenario by the end of the 21st century. There is a much smaller yield increase under the low-emissions scenario. Lobell and Field use two estimation methods in studying the effects of temperature and precipitation on perennial crop yields. Their model includes 72 potential weather predictor variables for each crop, such as monthly averages of maximum and minimum temperature and their corresponding squares. Of the 20 perennial crops in their analysis, they find that cherries and almonds are harmed by future warming. Crop-level adaptations, such as adjusting planting and harvesting dates and substituting between different crop varieties, have been included to a limited extent in crop models. However, these cannot account for the broad range of decision making at the farm level, under which many of the negative effects of climate change could be partially offset with input and output substitutions, improved information, and effective water institutions. Thus, economic models are necessary to capture a broader range of responsive decision-making as the climate changes.
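As a concrete illustration of the statistical crop-model approach described above, the sketch below regresses synthetic historical yields on growing-season temperature, its square, and precipitation, then evaluates the fitted response under a warmer climate. It is not a reconstruction of any cited study's model.

```python
# Minimal sketch of a statistical crop model: regress yield on temperature,
# temperature squared, and precipitation, then query the fit under warming.
# All data here are synthetic.
import numpy as np

rng = np.random.default_rng(3)
years = 40
temp = rng.normal(22.0, 1.5, years)                   # growing-season mean temperature (deg C)
precip = rng.normal(300.0, 60.0, years)               # seasonal precipitation (mm)
yield_obs = 6.0 + 0.8 * temp - 0.02 * temp**2 + 0.004 * precip + rng.normal(0, 0.2, years)

X = np.column_stack([np.ones(years), temp, temp**2, precip])
coef, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)

def predicted_yield(t, p):
    return coef @ np.array([1.0, t, t**2, p])

print("yield at 22 C:", predicted_yield(22.0, 300.0))
print("yield at 25 C (warmer, same precipitation):", predicted_yield(25.0, 300.0))
```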

Recently, adaptations specific to California agriculture have been studied using three economic programming models: the Statewide Agricultural Production model, the Central Valley Production Model, and the US Agricultural Resources Model. Capturing the decision-making process is an important part of modeling. In programming models, the farmer’s decision is captured by the objective function. The main decision variable in these models is acres of land allocated to a region-specific crop mix. The farmer responds to reductions in water availability and yield by adjusting crop acreage. Exogenous adaptations include institutional, socioeconomic, and technological change. Calibration through positive mathematical programming also captures decision-making by preserving observed crop mix allocation decisions. SWAP employs a PMP cost function to capture the decision of bringing an additional unit of land into production. Both CVPM and USARM have also been calibrated using PMP. CVPM studies have also generated synthetic crop share data from Monte Carlo runs using a base water supply and groundwater depth with random perturbations. Crop adaptation equations are then derived from a multinomial logit regression of this CVPM-generated synthetic crop share data. In order to represent climate-induced changes in water supply, many mathematical programming models are linked to hydrological management models, such as the California Value Integrated Network, Water Evaluation and Planning, CalSim-II, and C2VSim. CALVIN is a generalized network flow-based optimization model that minimizes economic operating and scarcity costs of water supply, subject to water balance, capacity, and environmental constraints for a range of operational and hydrologic conditions. CALVIN has the potential to incorporate several basin-level adaptations to water allocation rules, such as contract changes, markets and exchanges, water rights, pricing, and water scarcity levels.
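The sketch below illustrates, in highly simplified form, the kind of allocation problem that network-flow models such as CALVIN solve: minimize the economic cost of water scarcity subject to a supply limit and per-user bounds. The demands, scarcity costs, and supply figure are invented for illustration and are not taken from any of the cited studies.

```python
# Minimal sketch of a scarcity-cost-minimizing water allocation: deliver water to
# users so as to minimize the cost of unmet demand, subject to a supply limit.
import numpy as np
from scipy.optimize import linprog

demand = np.array([400.0, 250.0, 150.0])          # agricultural, urban, environmental (illustrative units)
scarcity_cost = np.array([120.0, 900.0, 300.0])   # cost per unit of unmet demand
supply = 600.0                                     # available water this year

# Decision variables: water delivered to each user. Minimizing scarcity cost is
# equivalent to maximizing scarcity_cost @ delivered, i.e. minimizing its negative.
c = -scarcity_cost
A_ub = np.ones((1, 3))                             # total deliveries cannot exceed supply
b_ub = [supply]
bounds = [(0.0, d) for d in demand]                # cannot deliver more than demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
delivered = res.x
print("delivered:", delivered)
print("scarcity cost:", float(scarcity_cost @ (demand - delivered)))
```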

However, it has limited ability to represent important physical phenomena, such as stream-aquifer interactions and groundwater flow dynamics under different climate and water management scenarios. WEAP has many of the same water management features as CALVIN and CalSim-II. WEAP includes demand priorities and supply preferences in a linear programming framework to solve the water allocation problem, as an alternative to multi-criteria weighting or rule-based logic. It is different because analysis in the WEAP framework comes directly from the future climate scenarios and not from a perturbation of historical hydrology, as with the other models. Unlike CALVIN and CalSim-II, WEAP has only a simplified representation of the rules guiding the State Water Project and Central Valley Project systems. CalSim-II is also very similar to CALVIN and WEAP. C2VSim is a multi-layer, distributed, integrated hydrologic model that can represent pumping from multiple aquifer layers, effects on groundwater flow dynamics, and stream-aquifer interaction. Recent programming studies focus on how certain adaptations may affect costs under relatively extreme cases of water scarcity. These studies thus assess how these adaptations may offset costs under worst-case scenarios of water supply reductions. Given that the reduction in statewide agricultural water use due to the current drought is estimated at 6%, studies on 40–70% flow reductions should be interpreted with caution. The subsequent studies are organized according to the magnitude of water supply/flow reduction. Studies on 5–6% reductions in water supply reveal heavy fallowing and groundwater use. Howitt et al. find that a 6.6 maf deficit in surface water caused by the current drought is largely offset by 5.1 maf of additional groundwater. This is estimated to cost an additional $454 million in pumping. In addition to over-pumping groundwater, farmers adjust by fallowing crop land. The overwhelming majority of the 428,000 acres estimated fallowed in 2014 are in the Central Valley, where the majority of fallowed acres belong to field crops. However, they project that fallowing will decrease by 43% by 2016, suggesting a trend toward stabilization. Frisvold and Konyar use USARM to examine the effects of a 5% reduction in irrigation water supply from the Colorado River on agricultural production in southern California. In particular, they are able to compare the potential value-added of additional adaptations that include changing the crop mix, deficit irrigation, and input substitution to a “fallowing only” model. They find that these additional adaptations have the potential to reduce the costs of water shortages to producers by 66% compared to the “fallowing only” model.1 Medellin-Azuara et al. examine the extent to which more flexible2 versions of California water markets could reduce water scarcity costs under a 27% statewide reduction in annual stream flow. They compare agricultural water scarcity in the year 2050 under two scenarios: 1. Baseline: population growth and the resulting level of agriculture-to-urban land transfer.

2. Warm-dry: includes population pressure and climatic changes under GFDL CM2.1 A2. Under the warm-dry scenario, even with optimized operations, water scarcity and total operational costs increase by $490 million/year, and statewide agricultural water scarcity increases by 22%. If water markets are restricted to operate only within the four CALVIN sub-regions, statewide water scarcity costs increase by 45% and 70% for the baseline and warm-dry scenarios, respectively. Marginal opportunity costs of environmental flows increase under the warm-dry scenario, with particularly large percentage increases for the Delta Outflow and the American River. Medellin-Azuara et al. conduct a similar analysis, adding a comparison with a warm-only 2050 scenario. Agricultural sector water scarcity costs rise by 3% from the baseline to the warm-only scenario, versus an increase of 302% from the baseline to the warm-dry scenario.3 Indeed, the greater hydrological impact of the warm-dry scenario results in significantly greater scarcity costs than the warm-only scenario. Using the CALVIN model runs from Medellin-Azuara et al., Medellin-Azuara et al. analyze adaptations at the farm level, including adjustments in crop acreage and, to a more limited extent, yield-enhancing technology. Similar to the 2008 paper, the model compares economic losses between a baseline scenario and a warm-dry scenario. Results reveal an anticipated decline in acreage of low-value crops, which is particularly severe due to the large reduction in water availability. For example, pasture acreage is reduced by 90% across 3 out of 4 agricultural regions. The results also suggest that statewide agricultural revenues decline at a proportionately lower level than the reduction in water availability. Their model also captures the complex interplay between crop demand and climate-induced supply reduction. Although demand for high-valued orchard crops increases, production decreases due to the negative impact on yield from temperature increases. The resulting price increase cannot compensate for the decrease in supply, and gross revenue still declines. Two studies examine the impacts of more extreme reductions in water supply. Harou et al. construct a synthetic drought in 2020 based on the paleo-record, rather than GCM projections. Their results regarding agricultural water scarcity and environmental flows are consistent with other CALVIN-SWAP studies. Environmental flows are also extremely restricted. Marginal opportunity costs of environmental flows rise by one or more orders of magnitude under extreme drought as compared to the historic baseline, with the Trinity, Clear Creek, and Sacramento Rivers experiencing the highest increases. Average agricultural water scarcity increases 3900% across the entire state under extreme drought even under well-functioning water markets, which seems somewhat implausible and may result from an overly restrictive model. Although Dale et al. do not calculate scarcity costs, they find that a 60-year drought with a 70% reduction in surface flows only moderately impacts the total amount of irrigated acreage in the Central Valley, which declines from 2.4 million hectares to 2.1 million. This suggests that Central Valley farmers tend to have a relatively inelastic groundwater demand, compensating for the loss in surface water with groundwater rather than fallowing.
Within the Valley, they find that the Tulare Basin shows a greater increase in fallowing than the San Joaquin Basin, since the former is historically more dependent on groundwater. Dale et al. are also able to capture the increase in aquifer subsidence due to increased withdrawals during the prolonged drought, suggesting that the quality of the aquifer will decline over time with excessive pumping. Joyce et al. use WEAP-CVPM to model climatic changes with 6 GCMs under the B1 and A2 scenarios for 2006–2099. Unlike the CALVIN-SWAP studies, they model irrigation efficiency by assuming that vegetable and fruit and nut crops in the Central Valley will be entirely converted to drip irrigation, and that half of field crops will be converted by mid-century. They find that these adaptations tend to offset the increasing water demands caused by increasing temperatures and periods of drought.

The EC played a key role in pushing for the single undertaking condition for the Uruguay Round

Essentially, a direct income payment from the government could weaken, if not break, the relationship between farmers and the JA because farmers would be paid independent of production and would thus be less dependent on JA services. Finally, and perhaps most importantly, the DPJ could not overcome the opposition of the farmers, and the JA more broadly, to the DPJ’s position in support of the Trans-Pacific Partnership, which aimed to reduce trade barriers. Farmers, protected by high tariff barriers, feared that an influx of low-priced agricultural goods would follow the adoption of the TPP. The JA stated an official position of opposition to the TPP and those who supported it, no matter their party affiliation. In the 2012 election, the JA published a list of the 177 candidates it endorsed, 162 of which were from the LDP. Of the 177 officially endorsed, 173 were elected. As these examples demonstrate, my framework for studying agricultural policy making and reform can help provide a fuller understanding of decision making in domains outside of Europe. Japanese farmers have repeatedly shown the ability to defend preferred policies, defeat unwanted reforms, and even silence those who advocate economic liberalization, whether a powerful political party or a major industry. As in Europe, it is difficult if not impossible to take support away from farmers or even to challenge their policy preferences.

This third and final mini case tests the applicability of my argument to cases that involve agricultural interests but are not agricultural policy proper. Additionally, this mini case tests my argument beyond the European/EU context.

Decision making occurs at the supranational level, and, beginning with the 1986 Uruguay Round, agricultural interests are just one set of voices within a much broader set of interests. Essentially, in the case of world trade after 1986, agriculture cannot simply sort out its own situation in isolation, excluding all other interests. Because these negotiations are supranational, like CAP negotiations, farmer organizations and their influence are predominantly mediated through national representatives to the GATT meetings. Essentially, the task of this mini case is to demonstrate that the major claims of my argument still hold under the conditions outlined above. When GATT was created in 1948, agriculture received special treatment. It was thought that agricultural interests were so powerful, and agriculture such a touchy national subject, that its inclusion would render any negotiations dead in the water. So, unlike manufacturing sectors, agriculture was exempted from the prohibition on the use of both quantitative import restrictions and export subsidies. In addition, agriculture was left out of the first three rounds of multilateral trade talks in the GATT in order to assure successful negotiations. As a result of agriculture’s special treatment and its absence from GATT negotiations, domestic agricultural programs were allowed to develop unchecked. The resulting agricultural surpluses were one of the major factors that pushed agriculture to be fully included in GATT multilateral negotiations, despite major concerns over the dilatory effects of powerful farming interests and the objections that would certainly be raised by negotiating parties in defense of their particular agricultural profile. The centerpiece of the GATT Uruguay Round negotiations was the section on agriculture. The GATT UR was launched in 1986 and was supposed to be concluded by 1990. Due to delays on the agricultural section of the negotiations, an agreement was not reached until 1993, almost doubling the length of the GATT UR. The declaration launching the Uruguay Round identified greater liberalization in agricultural trade as the fundamental goal of the round.

Particular attention was to be paid to domestic support, market access, and export subsidies. The specific goals for agriculture were to reduce import barriers in order to improve market access, and to restrict the use of direct and indirect subsidies in order to improve the competitive environment. US Trade Representative Clayton Yeutter insisted on the inclusion of policies relating to domestic support over the strong objection of some EC member states, most notably France. In short, in the GATT UR, reformers wanted to remove protectionist trade barriers and dramatically cut, if not completely eliminate, subsidies for agriculture, including those designed to boost farmer incomes. In a major break from previous GATT negotiations, the UR was to be treated as a “single undertaking”. In other words, the round could not be concluded without an agreement on agriculture. By contrast, the Tokyo Round was described as “GATT à la carte” because contracting parties could decide which agreements they did and did not want to sign. This change was made in an effort to finally force an agreement on agriculture. In all previous rounds, agriculture was either excluded entirely or treated under special, separate rules. For France, which was reluctant to include agriculture in the UR negotiations, the condition was particularly important because it “represented the potential for offsetting gains in other sectors: to rebalance trade with Japan and to ensure the newly industrializing countries, particularly in Asia, met in full their obligations under the GATT”. In practice, the single undertaking condition permitted agriculture to cause extensive delays in the negotiations, repeatedly proving to be the issue that blocked everything. Negotiations at the Uruguay Round took place over seven years. Throughout the talks, the US and EC advanced radically different negotiating positions. An inability to reach an agreement on agriculture resulted in the collapse of the round, and the original deadline for an agreement, 1990, was missed. Talks were revived by GATT Director General Arthur Dunkel and ultimately concluded in 1993 with an agreement that was dramatically watered down from the initial GATT UR objectives and was ultimately quite favorable to farmers. In the end, farmer income payments, which GATT officials sought to eliminate or at least restrict, were entirely preserved, and the dismantling of tariff barriers was delayed or restricted such that most farmers felt little to no effect from these changes.

The Uruguay Round negotiations were driven by the sharply divergent positions of the United States and the European Community, supported by the Cairns Group43 and Japan, respectively. The US saw government support as the root of trouble in farm trade, while the EC blamed the market. Specifically, the US called for dramatic liberalization, primarily by reducing the protection and support afforded to European farmers under the CAP. The EC, however, argued that the aim of negotiations should be to “progressively reduce support to the extent necessary to reestablish balanced markets and a more market oriented agricultural trading system” but not to phase out support and protection. The US plan was highly aggressive and market oriented. Dubbed the “Zero Option”, it called for the complete elimination of farmer programs, described as “all forms of support which distort trade”, within ten years. Export subsidies, which were considered by US negotiators to be the most trade distorting, were to be reduced by 90% in five years. In addition, no commodity or support program would be exempt from reform. However, programs, such as decoupled payments, which were not tied to output and thus were arguably not trade distorting, would remain untouched. In launching the Zero Option, US officials believed that American farmers would accept the subsidy reductions because foreign farmers would be subjected to the same pains at the same time. US farmers, however, preferred to avoid rather than share pain, and feared that any GATT reform would impose costs upon them. GATT negotiations came on the heels of two major failed attempts by the Reagan administration, in 1981 and 1985, to get Congress to reduce the levels of support for US farmers. Reagan administration officials viewed the GATT negotiations as an opportunity to use international negotiations to achieve domestic reform. Farmers had defeated these retrenchment efforts at home, so the Reagan administration sought to attempt to retrench agriculture in the context of international trade, which could potentially undercut the power of the American farmers.

The EC flatly rejected the US proposal, calling the plan “unrealistic”. It was, at its core, a thinly veiled attack on the CAP, a policy with which the US was becoming increasingly frustrated. The EC countered with a more modest proposal in which reductions in support levels would occur only after measures were adopted to stabilize world prices. In the first stage, the most seriously imbalanced markets, cereals, sugars, and dairy products, would be stabilized. In what is likely not a coincidence, these commodities were also those in which the EC had massive surpluses. In the second stage, commodity supports would be reduced gradually by up to 30% over a period of ten years, with reductions calculated using 1986 as the base year. Overall, compared to the US, the EC fundamentally sought to maintain the agricultural status quo. Japan’s position was largely defensive and was grounded in a desire to make as few concessions as possible in negotiations. Of fundamental importance was to prevent or delay tariffication to the extent possible, specifically for rice, which is a hallowed product in Japan. Indeed, Japanese negotiators were willing to permit imports and increased tariffication in all other agricultural commodities so long as rice remained exempt from tariffication and import rules.

Japan’s existing agricultural policy and support system allowed it to support its farmers through high prices made possible by tariffs and isolation from the international market. By resisting tariffication, or gaining exemptions for the most important sectors, namely rice, Japan could avoid the situation that the EU found itself in, where domestic policy had to be reformed to make a final agreement possible. Japan’s objectives were shaped primarily by the special position of rice producers and also by the overall high level of protection of agriculture. In addition to a desire to avoid a ban on agricultural import quotas, the Japanese position emphasized the importance of “non-economic” objectives of food and farm policy, including “food security, rural employment, and environmental protection”. In regard to the two major policy positions, that of the US and that of the EC, the Japanese supported the EC and flatly rejected the US position as impractical. After observing the EC and Japanese negotiators’ vehement rejection of the Zero Option proposal, the US farm lobby, including numerous commodity groups, realized it could use this opposition to its own benefit. Knowing that the EC and Japan would never accept the Zero Option, US farm lobbies began to support the plan in hopes that it would deadlock the negotiations. If the negotiations remained at an impasse, then American farmers would also continue to benefit from the subsidies, tariff barriers, and general support programs that the “Zero Option” plan sought to eliminate. Many US farm lobbies were quite aggressive, insisting that “they could accept nothing less than the Zero Option”, that “half measures would not do – no agreement was better than a bad agreement”, and that the Zero Option was “the only way to guarantee a level playing field against their subsidized foreign competitors”. The US farmers’ manipulation of the Zero Option extended into the GATT Mid-Term Review, which took place in 1988. US farm groups, led by the highly subsidized sugar and dairy sectors, successfully lobbied Secretary of Agriculture Richard Lyng to “force Yeutter to stick to the Zero Option in Montreal”, resulting in a stalemate at the MTR. Yeutter had, in the months leading up to the Uruguay Round’s MTR, expressed a willingness to accept partial reforms. In order to best protect their policy preferences, however, the American farmers pressured the US government to hold the line on a policy they knew would never be accepted by Europe. Like their European counterparts, American farmers rely on the power of their sophisticated organizations to advance and defend their policy preferences.

Negotiations also floundered because the EC and Japan were losing interest in reform. In the case of the EC, the pressure for agricultural policy reform to bring spending under control was relieved when the EC reached an agreement in 1988 on two measures that brought about temporary budget relief: a stabilizer for cereal subsidies and a 25% increase in Community revenue. In Japan, it was electoral politics, not reform, that dampened the desire to negotiate.

The MFF delayed CAP reform while the budget was being negotiated

The finding also holds true for spending reduction, even under conditions of budgetary crisis. Third, this case highlights two tactics commonly employed by reformers seeking to retrench the welfare state: 1) introducing reforms that correct programs functioning inefficiently or unfairly, and 2) corralling support for the reforms by simply buying off member states or by offering special policy exemptions and alternatives to those who are opposed to particular aspects of the reform package. The biggest achievement of the MTR, the introduction of an income support system that was not based on production, was achieved by constructing the reform around correcting an existing program that was inefficient and unfair, rather than trying to get member states to agree to completely abandon the old system and support an entirely new policy. The old direct payment program was inefficient, environmentally destructive, and expensive. It incentivized farmers to produce as much as they possibly could, no matter the cost to the environment, and then stuck the EU with the bill for storage and dumping of surplus product. The old program was also unequal. Because payment was based on output, a small percentage of farmers received the majority of CAP payments. Moreover, those farmers who received the most payments were also those who were already internationally competitive and did not need CAP support. By ceasing to pay farmers based on how much they produced, the new income support system corrected the inefficiencies of the old program, eliminated incentives that encouraged environmentally damaging and wasteful overproduction, and made progress toward a more egalitarian system of support payments.

The MTR saved the CAP budget and strengthened the position of the EU in the Doha Round of WTO negotiations. Full decoupling and a transition to area-based payments prevented the CAP budget from ballooning out of control once the ten new member states from Central and Eastern Europe joined the EU. The reform of the income support system also put the EU in compliance with WTO subsidy rules. By completing these reforms in advance of the Doha Round, the EU avoided a repeat of the GATT Uruguay Round when EU officials had to negotiate a trade deal under the burden of an agricultural support system that violated existing rules. Even with the extraordinary circumstances that opened the political ground for more reform, the costs of marshaling the support necessary to enact change were high. While the SFP was a dramatic change in how farmers were paid and marked a stronger commitment to the greening components of the CAP, this reform had little effect on the total amount of support received by farmers. For example, the proposal to cap the total amount of annual support any individual farmer could receive at €300,000 was defeated. Some version of this initiative has been put forward by agricultural commissioners since at least the 1980s and has been defeated in every round of reform. Even when conditions were ripe for major change, a pet policy of agricultural commissioners failed. Even in the best conditions, it is difficult if not impossible to retrench farmer support. These reforms to the payment system neither significantly affected the total level of support received by farmers nor resulted in much change in the allocation of support between countries and across farmers. Under the Fischler reforms, inefficient farmers benefited from the decoupling of support from production; they would now be paid no matter how much they produced. While large and highly efficient farmers would no longer be able to drive up their support payments through efforts to produce as much as possible, they successfully avoided a limit being placed on their overall annual income payments. This victory was especially important for those farmers with the largest holdings.

Historic yield-based payments meant that these farmers still stood to earn a hefty payment, and with the proposal to limit total CAP support defeated, those payments would not be garnished. Finally, while all farmers earning above the €5,000 threshold lost some money directly through modulation, these funds stayed almost entirely in their own member state’s rural development program, and could be channeled back to the farmers through other programs and initiatives. Ultimately, offering exemptions, exceptions, and monetary incentives was a crucial component in successfully concluding the reform negotiations. In order to wrangle the cooperation of farmers who were leery of the effects of new programs, including the costs and burdens imposed by newly adopted environmental standards, reformers repeatedly relented, offering them monetary compensation for compliance and lowering expectations for meeting environmental standards. The next chapter of my dissertation examines the most recent reform of the Common Agricultural Policy, agreed to in 2013. This reform, which established the framework of the CAP until 2020, is more similar to that of Agenda 2000 than to the Fischler or MacSharry reforms. With no major outside pressures or crises, CAP reform reverted to politics as usual, and little change was made.

As part of the Europe 2020 strategy, the 2013 CAP reform sought to make big changes to the CAP, bringing it in line with the modern, dynamic, and innovative European Union that the Commission envisioned. These reforms endeavored to address long-standing complaints that the CAP was too complex, unfair, and environmentally destructive. To that end, the oft-repeated mantra of CAP reformers was that they desired a CAP that was fairer, simpler, and greener. In reality, the 2013 CAP reform fell well short of these objectives. It made some progress on improving fairness, but also made the CAP far more complex and did little to improve environmental standards.

A major factor in explaining the limited changes that resulted from this CAP reform is that, other than the need to continue adjusting the CAP to operate in the enlarged European Union, there were no exogenous forces pushing for reform. Both MacSharry in 1992 and Fischler in 2003 used pressing concurrent challenges, such as stalled trade negotiations, to achieve major change. The same option to tie CAP reform to crises and/or concurrent problems was largely unavailable to Agricultural Commissioner Dacian Cioloș in 2013. The budget was not in crisis and the EU was not involved in any WTO negotiations. Enlargement was the one geo-political pressure that affected the 2013 reform. While enlargement to Eastern Europe had been concluded, the consequences for the CAP still required some management. The main issue lingering from the most recent enlargement was the imbalance of payments across countries. This issue proved to be the only source of disruptive politics, providing Cioloș an opportunity to call for changes to the direct payment system. Indeed, the only major component of the final reform would be the provision that addressed the fallout from enlargement.

The purpose of this chapter is to account for the content of the 2013 CAP reform and to explain why the reform was so underwhelming. Cioloș had a very limited mandate for reform, given that he was operating largely under politics as usual. In addition, the new CAP reform was being sorted out at the same time as the 2014–2020 Multi-annual Financial Framework. This timing undercut the ability of reformers to call for spending cuts or to use the threat of them to leverage reform, because once the budget had been set, spending cuts were off the table. The result was a watered-down reform, with changes much more circumscribed than initially proposed or abandoned entirely. The final agreement contained two main components. The first and most significant change involved the direct payment system. In order to address the vast inequality in the payments received by Western and Eastern farmers, all member states would transition, over a 7-year period, to using the same system for calculating the amount of direct payments owed to their farmers. The program for this transition was ultimately made much more gradual and included far less redistribution than initially proposed. In addition, a proposal to cap income payments was rejected. Greening was the second component of the agreement. While new rules for permanent pasture, mandatory crop rotation, and other measures intended to protect and improve the environment were adopted, they ultimately had very little applicability, with nearly 88% of farmers exempted. Smaller components of the agreement allowed member states more flexibility in directing money towards rural development and modified the rules on who counted as a farmer, though the definition remained quite permissive. Once again, CAP reform mirrored the process of welfare state retrenchment, with reformers employing a variety of tactics to slip through any reform and hopefully position themselves to achieve more substantial retrenchment in the future.

Nearly every proposal was significantly watered down, and some, like placing limits on the amount of money individual farmers received, were defeated outright. The core reforms, most notably changes to the direct payment system, followed a “vice into virtue” logic. The existing payment system was operating unequally. To address this problem, rather than creating a new program from scratch, reformers worked within the system, amending the method of calculating payments to decrease the gap between new and old member states. Finally, as is typical, the final package included a number of side-payments, concessions, and exemptions in order to facilitate the agreement. For example, greening measures designed to impose stricter environmental standards on a portion of a farmer’s land ended up including so many exemptions that only 12% of European farmers were ultimately subjected to the rules. In the end, this round of CAP reform amounted to little more than tinkering around the edges.

The 2013 CAP Reform illustrates the importance of disruptive politics for achieving meaningful CAP policy change. The only significant alteration to CAP policy in the 2013 reform was directly linked to the sole source of disruptive politics: enlargement. While no new rounds of accession were looming, the CAP confronted lingering problems from expansion to Eastern and Central Europe. Owing to the different available methods of calculation and the rules governing the new member states’ accession to the CAP, the average payment per hectare in the old, Western member states was much higher than in the newer member states in Central and Eastern Europe. In 2013, the average payment per hectare across the EU was €269. Farmers in Latvia, however, received on average €95 per hectare, compared to €458 in the Netherlands. This inequality was politically unsustainable, with Central and Eastern European member states complaining about their second-class status. EU officials, specifically those outside of DGVI, had also taken notice and were increasingly critical of a policy that was badly out of step with the core EU value of equality among members.

The other issues that had generated the disruptive politics in 1992 and 2003, economic crisis and stalled trade negotiations, did not drive reform in 2013. Europe did experience an economic crisis, but the crisis actually weakened the hand of reformers and budget cutters. The period immediately before and during the CAP negotiations was marked by significant economic volatility. In 2008, agricultural prices peaked, then suddenly dropped as a consequence of the global economic crisis, which caused upheaval and uncertainty in government budgets and commodities markets. Concerns about falling prices increased the pressure on CAP policymakers to return to market intervention and regulation to help hard-hit farmers. In addition, the volatility in the markets induced calls to lessen or eliminate greening requirements so as not to overburden farmers, and also to increase support for emergency relief. These proposals would require an increase in spending. Thus, instead of disrupting politics and providing Cioloș with an opening to call for change, the crisis buttressed politics as usual. Stalled trade talks, which were a key pressure for the MacSharry Reform, were not entirely absent from the 2013 CAP Reform. The difference between these two circumstances, however, was vast. At the time of the MacSharry Reforms, the GATT Uruguay Round was struggling to reach a conclusion.
The problem was in the agricultural portion of the negotiations, with the design and operation of CAP programs preventing Europe and the United States from reaching an agreement. At the time of the 2013 CAP negotiations, the Doha Round was stalled. This time, however, the negotiations were failing in multiple sectors, and agriculture, though one of the points of trouble, was not to blame for delaying the successful conclusion of the entire round.

It also maintained the system of direct income payments decoupled from production.

Ultimately, Chirac was able to combine his agricultural expertise with a situation ripe for revision: a CAP agreement that was over budget, with no country willing to increase its contribution to the EU in order to close the gap. Chirac’s personal rewrite of the CAP agreement faced little resistance as he stood firm and the other major member states relented due to other considerations, which France in turn did not oppose. The main changes imposed by Chirac included a smaller cut to grain prices, increased compensation for beef, and the extension of the milk quota regime by a further two years, including an additional delay in price cuts. The UK was concerned with both protecting its rebate and, like Spain and the Southern countries, ensuring access to and increasing its share of cohesion funds. Chirac left Italy’s milk quota increase in the package, thereby ensuring its continued support for CAP reform. The biggest source of opposition should have come from Germany, but it was facing political crises both domestically and in its role in the rotating EU presidency. Schröder’s government was in disarray after finance minister Oskar Lafontaine’s abrupt resignation. At the EU level, in its role in the rotating presidency, Germany was dealing with the crises in the Balkans, which had now escalated to a NATO bombing campaign, and also with the stunning resignation of the entire European Commission, under pressure from the European Parliament. An independent investigation revealed widespread fraud, misconduct, and mismanagement of financial systems, and suggested that “[the Commission] had lost control of the administration” of the EU.

A failure of the European Council to act decisively and to successfully conclude Agenda 2000 would be a major embarrassment for Germany and the entire European Union, which it was struggling to lead. With Germany politically weakened, the other major member states distracted by other priorities, and the EU struggling to appear decisive under the shadow of both an international and an institutional crisis, Chirac was able to amend Agenda 2000 to be more in line with his personal preferences after threatening to leave Berlin without concluding the negotiations. In the end, the overall reform was diluted even further, largely by reducing the size of price cuts and delaying the timing of reform. This point is essential: the CAP budget was brought under control not by slashing farmer compensation, but by delaying cuts and reforms that would result in further market liberalization. Savings were generated not by reducing compensation packages but by delaying and/or reducing market reform. Price cuts were lowered from 20% to 15% for cereals, and the delay of dairy reform was extended a further two years, until 2008. By making the cereal cuts smaller than initially proposed and maintaining the dairy quota system, the EU reduced and/or delayed the amount of money it would have to pay out as compensation in the form of direct payments. Instead, consumers would continue to bear the cost of the price supports. Alternative solutions that would work within the budget, such as reducing the level of compensation or imposing payment ceilings, were rejected. The final agreement formally rejected both alternatives for budget stabilization, though it did permit member states to apply modulation if they so desired. In the end, all the key players left Agenda 2000 with their key interests protected: France avoided co-financing and protected the high level of transfers it received from the CAP; Germany avoided further increasing its contribution to the CAP budget; the Southern countries protected their current level of structural funds; and the UK protected the Thatcher rebate.

Overall, the cuts in the final agreement were smaller and took effect at a later date than in the initial proposal. Reducing the size of a proposed cut and/or delaying the time at which it comes into effect are strategies also used by welfare state reformers. Reducing the size of a proposed cut is one way to buy support for a reform, while delaying the date of implementation allows current policymakers to distance themselves from the negative consequences of the reforms they have adopted. The risk, of course, is that these reforms may never actually take place, as the delay affords future actors the opportunity to amend or entirely eliminate the reform before it takes effect. Also similar to the process of welfare state retrenchment was the Commission’s use of targeted concessions, like increased milk quotas for some member states. Reform is achieved after support is bought from a key actor, in this case Italy, whose support was crucial to breaking the blocking minority on dairy reform. The Commission’s initial proposal was effectively watered down twice, once in order to reach a compromise in the Agricultural Council and a second time by French president Jacques Chirac at the Berlin Summit several weeks later. Chirac’s watering down of the proposal included reducing the price cut for grain from 20% to 15% and increasing the compensation paid to the beef sector. In addition, dairy reform, by way of finally removing the quotas, was delayed even further. The round of price cuts and compensation continued in the vein of the MacSharry Reform. Yet despite the cuts, EU prices for goods, particularly cereals, still remained well above world prices. As a result, the EU had to continue using export subsidies. The compensation specified in this reform was to be added on top of the compensatory payments already received under the MacSharry Reform.

While these cuts would allegedly help reduce the problem of surplus production by lowering price guarantees, the CAP still had yet to achieve the full decoupling of payments from production. Modulation was ultimately left up to the member states. If they desired, they could modulate payments to farmers for specific reasons, such as “below-average employment density, above-average profit level,” or the total level of payments to the farm. The agreement required that any savings collected by the member states through this optional modulation had to be spent on rural development or environmental programs. Cross-compliance was adopted, but only as an optional measure. Member states that chose to impose cross-compliance on their farmers were also allowed to set their own environmental standards, so long as they did not distort competition between the member states. While implementing cross-compliance in an optional, non-binding form with no universal environmental standards was a weak outcome, it did position the Commission well for future reform along the same lines. Moreover, it mirrored patterns observed in welfare state retrenchment, where seemingly small changes implemented early can smooth the way for more fundamental change, or systemic reform, in the future. Indeed, in the 2003 Mid-Term Review of the CAP, Fischler was able to use the inclusion of cross-compliance in Agenda 2000 as a sign of tacit approval of the program and to successfully push for its adoption as a mandatory CAP program.

The process of the Agenda 2000 CAP negotiations, particularly in contrast to the MacSharry and Fischler Reforms which bookended it, illustrates how CAP reform is more limited without the presence of disruptive politics to open space for broader reform. While trade was a driving factor for the MacSharry Reform, it exerted little pressure on the shape or content of the Agenda 2000 CAP agreement. The MacSharry Reform negotiations took place not only while the GATT Uruguay Round was underway, but after agriculture had been identified as a major stumbling block in reaching an agreement. For Agenda 2000, there were no concurrent GATT/WTO talks, and the threat of a new round potentially beginning in the near future was not enough to push the CAP towards further liberalization and away from existing trade-distorting programs. Similarly, enlargement towards Eastern and Central Europe was far enough in the future that it had yet to become a critical issue for the CAP. Moreover, the Commission and the member states were operating under the belief that the new member states would simply be excluded from the direct payments system, meaning that it was not necessary to reform the CAP in preparation for enlargement.

The Agenda 2000 agreement serves as a clear example of what happens when negotiations occur during politics as usual. In such cases, reform outcomes are mostly narrow, limited, and/or non-binding. Emboldened by the sense of urgency lent to CAP reform by trade negotiations, impending enlargement, and/or other crises, both MacSharry and Fischler were able to make, and more importantly deliver on, bold proposals, resulting in a major reworking of the core operations of the CAP. Without these disruptive politics during the Agenda 2000 negotiations, the Commission was unable to achieve such systemic reforms.

The process of reaching an agreement on the Agenda 2000 CAP highlights three tactics commonly employed by reformers seeking to retrench the welfare state: 1) implementing small changes, including even non-binding agreements, that allow for deeper structural change in the future; 2) using delayed implementation of cuts as a way to reach an agreement; and 3) marshalling support for the reform package via a number of tactics, including offering compensation in exchange for cuts or changes to programs, buying off member states, and offering special policy exemptions and alternatives to those who are opposed to particular aspects of the reform package. The first tactic is best illustrated by the greening policies, which were adopted but left to the member states to decide how they would be implemented. In his 2003 reform, Fischler would use the fact that member states had previously agreed to greening standards as grounds for making these standards mandatory. The second tactic, delaying the initiation of cuts and reforms, was seen in the reforms for all three major sectors: cereals, beef, and dairy. While delays were ostensibly undertaken for the purpose of budgetary feasibility, they also helped politicians avoid blame, as they may no longer be in office when the cuts arrive. Finally, the third tactic, attempting to buy countries’ support with side payments, is illustrated most clearly in the dairy sector, with the Commission offering Italy an increase in its milk quota in exchange for supporting the reform and thus eliminating the blocking minority that was preventing an agreement.

While Agenda 2000 did not have a landmark initiative like MacSharry’s partial decoupling of payments from production and introduction of direct income payments, it at least protected MacSharry’s legacy by continuing to cut existing intervention price supports. The compensation for price cuts was added to the existing direct payment scheme, as reformers continued to slowly push the CAP away from a system based on market intervention to one constructed around income supports. Agenda 2000 also continued to develop the CAP’s second pillar, which concerns rural development and greening. Overall, and particularly in comparison to the subsequent Fischler reforms of 2003, Agenda 2000 was unremarkable. It lacked any landmark initiative. Environmental measures, while included, were non-binding, with standards left to member state discretion. Both options to fundamentally reshape how the CAP is funded, degressivity and co-financing, were abandoned. So too were a number of proposed reforms and cuts, in the effort to bring the CAP package in line with the new budget. Many of these reforms were eliminated by Jacques Chirac when, at the Berlin Summit, he re-opened the already watered-down reform agreed to within the Agricultural Council.
With Agenda 2000 being not just a CAP reform, but a broad package of reforms designed to orient the EU towards, and prepare it for, the challenges of the new millennium, most proximately enlargement, the member states had to pick where to place their influence. While France used its political capital to go all in on reshaping the CAP to align with French preferences as much as possible, other countries defended different interests: Germany sought to prevent an increase in its budgetary contributions, Spain and the southern countries fought to protect cohesion funds, and the UK defended its rebate.

The next chapter examines the third major effort to reform the CAP, the 2003 Mid-Term Review, also known as the Fischler Reform. Fischler intended to use this reform to push forward and extend the decoupling of the CAP, first begun by MacSharry in 1992.