Only in Adaptation 3 are substantial marginal benefits observed in total demand over time

The main exception to this general trend is the near term of A2, which showed an unexpectedly lower frequency of no-allocation years. Under the climate-only scenarios, where land use is held constant at 2008 crop proportions, future irrigation demand is projected to increase in the District. In the near and medium term, average demand is expected to increase by 80 to 90 thousand acre-feet, with no notable differences between the B1 and A2 projections. The increase in demand is expected to continue in the latter part of the century, where the warmer and drier A2 climate sequence ultimately prompts higher irrigation demand than B1. Relative to the historical period, this is an increase in irrigation demand of approximately 26 to 32 percent due to climate alone. The increased demand and the greater impact of the GFDL A2 scenario observed in this study are consistent with previous projections for the Sacramento Valley as a whole. Table 3.5 and Figure 3.5 compare the difference in irrigation demand among the three adaptation scenarios relative to the historic period and the climate-only scenarios. Under Adaptation 1, demand varies to a small extent above and below the zero line. This suggests two things. First, it indicates that the A2 and B1 cropping patterns predicted by the econometric model, which are based on historic weather and market drivers, have less impact on irrigation demand than climate change alone. For example, increases in demand from climate alone are on the order of tens of thousands of acre-feet, while the relative impact of Adaptation 1 is only a few thousand acre-feet. Second, since demand in the B1 scenario shows a slight increase with Adaptation 1, the cropping trend projected by the econometric model may be less water efficient than the current cropping pattern.

In short, the econometric model predicts a cropping pattern that is likely to be the most economical or profitable in the short term rather than what might be the most water efficient. Differences between the A2 and B1 climate sequences highlight this possibility. Since the econometric model predicted similar cropping patterns for B1 and A2 prior to 2036, irrigation demand was also similar. However, beginning in 2036, the acreage of alfalfa expands significantly under the B1 climate. Since alfalfa has high water requirements, its expanded acreage leads to a corresponding increase in total irrigation demand for B1 relative to A2 and the historic period. Adaptation 2 also shows increased demand compared to the historical baseline across all periods and emissions scenarios. However, the model indicates that the increase in demand can be minimized to some extent by shifting to a more diverse and water efficient cropping pattern. That said, the marginal savings toward the end of the century are still less than half of the increase in demand due to climate change alone. Adaptation 3 also shows a near-term demand slightly greater than the historical period. However, as the diversified cropping pattern and improvements in irrigation technology are gradually implemented, far-term demand declines to approximately 12 percent less than the historical mean for both the B1 and A2 climate sequences. This illustrates that "game-changing" water savings, that is, savings of the same order of magnitude as climate-induced increases, can occur through a combination of progressive irrigation technology improvement and cropping patterns that are more water efficient and diversified. Because of an overall increase in irrigation demand, groundwater pumping also tends to increase in the far term under both the B1 and A2 climates. Under A2, the groundwater proportion of the District's supply rises from a historical mean of around 49 percent in the near term to as high as 61 percent in the far term.

It should be mentioned that this historic estimate includes years prior to the operation of Indian Valley reservoir, so the present fraction is somewhat lower than 49 percent. Overall, this corresponds to a volume of 118 thousand acre-feet above the historical mean. Relative to the climate-only scenarios, the marginal benefits of Adaptation 1 and Adaptation 2 are somewhat limited in the near and medium term. In short, by integrating cropping pattern changes and improvements in irrigation technology, groundwater pumping was maintained at levels close to the baseline in the near term and yielded reductions of 30 to 50 TAF in the far term. The survey of growers indicates that these are the types of practices that growers foresee as potential adaptation measures in the future. Groundwater pumping, and building more pumps and wells, are adaptation practices that farmers seem likely to adopt in the future, and these are discussed further in Section 5.

With the passage of the Global Warming Solutions Act of 2006, California has shown, in the absence of cohesive federal leadership, that local governments are able to adopt a bottom-up approach to greenhouse gas mitigation. Specific targets set by AB 32 aim to reduce California's GHG emissions to 1990 levels by 2020 and a further 80 percent by 2050. Recognizing the key role that land-use planning will play in achieving these goals, legislators also passed Senate Bill 375 in 2008, which requires regional administrative bodies to develop sustainable land-use plans that are aligned with AB 32. Agriculture currently occupies 25.4 percent of California's total land area and generates approximately 6 percent of the state's total GHG emissions. By contrast, urban areas in California make up only 4.9 percent of the land area but are the primary source of the state's transportation and electricity emissions, estimated at 39 percent and 25 percent, respectively.

Moreover, rapid urbanization in California has contributed to the loss of nearly 3.4 million acres of farmland over the last decade and has increased the emissions associated with urban sprawl. At present, AB 32 does not require agricultural producers to report their emissions or to implement mandatory mitigation measures as it does for California's industrial sector. The state is, however, encouraging farmers to institute voluntary mitigation strategies through various public and private incentive programs. For example, voluntary mitigation projects within California's agriculture and forestry sectors may be permitted to sell offset credits in a carbon market that has been proposed in the scoping plan laid out by the California Air Resources Board. While CARB and other state agencies have taken the lead in defining these policies, much of the responsibility for climate change planning and policy implementation has been delegated to local governments. For instance, AB 32 and SB 375 now require local governments either to address greenhouse gas mitigation in the environmental impact report that accompanies any update to their general plan or to carry out a specific "climate action plan" filed separately. Consequently, conducting an inventory of GHG emissions is now among the first steps taken by local governments as they plan for future development. To help local governments improve the quality and consistency of their emissions inventories, CARB has collaborated with several organizations to develop tools to standardize inventory methods. For example, the International Council on Local Environmental Initiatives has developed a software package known as the Clean Air Climate Protection Model to better align local methods with national and international standards. Such inventory tools are suitable for appraising emissions from government or municipal operations, but they are less useful for "community-wide" assessments. In particular, the emissions from agriculture are often missing from existing inventory tools geared to local planners due to problems of complexity, data availability, boundary effects, and consistency with methods designed for larger spatial scales. Methods to estimate emissions from agriculture within a local inventory framework would be a valuable asset for those developing mitigation and adaptation strategies in rural communities. In this paper, a local inventory of agricultural GHG emissions in 1990 and 2008 is presented for Yolo County, California. Recent mitigation and adaptation initiatives in Yolo County thus provide the policy context for this analysis.

The main objectives of this inventory of agricultural emissions are to: prioritize voluntary mitigation strategies; examine the benefits and trade-offs of local policies and on-farm practices to reduce agricultural emissions; and discuss how involving agricultural stakeholders in the planning process can strengthen mitigation efforts and lay the groundwork for future adaptation.

In this study, an inventory of Yolo County's agricultural GHG emissions was conducted for both the AB 32 base year and the present period. To address the wide range in data availability and analytical capacity that exists across different national or regional scales, the Intergovernmental Panel on Climate Change advocates a three-tiered approach for identifying the appropriate inventory methods used for the agriculture sector. This tiered system refers to the complexity and geographic specificity of the inventory method in question, with Tier 1 methods using a simplified default approach and relatively coarse activity data, while Tier 3 methods involve more sophisticated models and higher-resolution activity data. The Tier 1 methods used here have been adapted for local activity data from three main sources: 1) the CARB Technical Support Document for the 1990–2004 California GHG Emissions Inventory; 2) the U.S. EPA Emissions Inventory Improvement Program Guidelines; and 3) the 2006 IPCC Guidelines for National GHG Inventories. Supplementary materials provide detailed equations, activity data, and emissions factors for each emissions category. While strategies to adapt inventory methods to local data were exchanged with the Yolo County Planning Division during the preparation of their recent climate action plan, the present study is an independent assessment of agricultural GHG emissions.

Direct N2O emissions were calculated using a Tier 1 approach that estimated nitrogen inputs from the following sources: synthetic N fertilizers, crop residues, urine deposited in pasture, and animal manure. In Yolo County, 16 crop categories accounted for approximately 90 percent of irrigated cropland. The harvested area of each crop was taken from the county crop reports for 1990 and 2008. To calculate the total amount of synthetic N applied in Yolo County, the recommended N rate for each crop was multiplied by its cropping area and then summed across all crop categories. For a given inventory year, the recommended N rate for each crop was obtained from archived cost and return studies published by the University of California Cooperative Extension. Nitrogen inputs from crop residues for alfalfa, corn, rice, wheat, and miscellaneous grains were calculated using crop production data taken from the county crop reports. Nitrogen excreted by livestock in the form of urine or manure was calculated for the six main livestock groups assuming year-round production. Emissions from poultry were not calculated, since no large-scale poultry operations exist in the county. Dairy cattle numbers for both inventory years were taken from the National Agricultural Statistics Service database, while all other livestock numbers were obtained from the county records. Dairy cattle and swine manure were assumed to be stored temporarily in anaerobic lagoons and then spread on fields. All other livestock categories were assumed to deposit their urine in pastures.
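As a rough illustration of the Tier 1 arithmetic described above, the sketch below multiplies a per-crop recommended N rate by harvested area, sums across crops, and applies a default emission factor. The crop names, areas, N rates, and the 0.01 kg N2O-N per kg N factor are placeholder assumptions for illustration only; they are not the Yolo County activity data or the exact factors used in this inventory.

```python
# Minimal Tier 1 sketch: direct N2O from synthetic N fertilizer.
# All activity data below are hypothetical placeholders, not county data.

CROPS = {
    # crop: (harvested_area_ha, recommended_N_rate_kg_N_per_ha)
    "processing tomato": (15_000, 180),
    "rice": (10_000, 150),
    "wheat": (12_000, 120),
}

EF_DIRECT = 0.01          # kg N2O-N per kg N applied (commonly cited IPCC Tier 1 default)
N2O_PER_N = 44.0 / 28.0   # convert N2O-N mass to N2O mass

def direct_n2o_from_fertilizer(crops):
    """Sum synthetic N applied across crops, then apply the Tier 1 emission factor."""
    total_n_applied = sum(area * rate for area, rate in crops.values())  # kg N per year
    return total_n_applied * EF_DIRECT * N2O_PER_N                       # kg N2O per year

if __name__ == "__main__":
    print(f"Direct N2O from synthetic N: {direct_n2o_from_fertilizer(CROPS):,.0f} kg per year")
```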
Indirect N2O emissions were estimated based on the total amounts of N added as synthetic N fertilizer, urine, and manure, and were calculated using standard values for the volatilization and leaching rates and default emission factors. A Tier 1 approach was developed to calculate fuel consumption by mobile farm equipment. Each crop's annual harvested area was multiplied by its average diesel fuel use per hectare from archived cost and return studies and then summed across all crop categories to determine the total amount of diesel fuel used each year. The amounts of CO2, N2O, and methane emitted were determined by multiplying the total amount of diesel fuel consumed by mobile farm equipment by emission factors for each gas. The Tier 1 estimate of emissions from mobile farm equipment was then compared with results generated by the Yolo County Planning Division, who used the Tier 3 OFFROAD emissions model. The OFFROAD model estimates end-use fuel consumption based on detailed information collected on equipment population, activity patterns, and emissions factors. A detailed summary of the OFFROAD model framework and activity data specifications is available from CARB.
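The same pattern can be sketched for mobile farm equipment: harvested area times an assumed diesel-use intensity gives total fuel, which is then multiplied by per-gas emission factors. All crop names and numeric values below are hypothetical placeholders, not the cost-and-return-study inputs or CARB factors used in the study.

```python
# Minimal Tier 1 sketch: emissions from diesel burned by mobile farm equipment.
# Areas, diesel intensities, and emission factors are illustrative assumptions.

AREA_HA = {"processing tomato": 15_000, "rice": 10_000, "wheat": 12_000}   # ha
DIESEL_L_PER_HA = {"processing tomato": 170, "rice": 95, "wheat": 60}      # L/ha

EMISSION_FACTORS = {   # kg of gas emitted per litre of diesel (illustrative values)
    "CO2": 2.68,
    "CH4": 0.00014,
    "N2O": 0.00011,
}

def equipment_emissions(area_ha, diesel_per_ha, factors):
    """Total diesel use across crops, converted to per-gas emissions."""
    total_diesel_l = sum(area_ha[crop] * diesel_per_ha[crop] for crop in area_ha)
    return {gas: total_diesel_l * ef for gas, ef in factors.items()}  # kg per year

if __name__ == "__main__":
    for gas, kg in equipment_emissions(AREA_HA, DIESEL_L_PER_HA, EMISSION_FACTORS).items():
        print(f"{gas}: {kg:,.1f} kg per year")
```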

All land-use systems showed much higher C mineralization rates in the topsoil than subsoil horizons

Andisols dominated by allophanic materials generally contain low KCl-extractable Al concentrations; however, these values may be underestimated due to "induced hydrolysis" of displaced Al and subsequent adsorption of polymeric Al to allophanic materials. The elevated pH associated with the horticultural soils reduced the exchangeable Al3+ concentrations to non-detectable levels, further reducing the potential for Al3+ toxicity. A notable finding in this study was the increase in soil pH and base saturation following land-use changes, as revealed by the strongly positive correlation between soil pH and exchangeable cations. The high base saturation under horticultural land uses, as compared to < 23% for the pine forest and tea plantation soils, is associated with the presence of soluble salts derived from lime, horse manure, and K fertilizer application. These soluble salts derived from the agricultural amendments are beneficial to soil fertility as they can be readily taken up by roots to meet plant nutrient requirements. Conversion of pine forest to intensive horticultural crops resulted in a 4–7 fold increase in nitrate content. This high concentration of nitrate is explained in part by mineralization of horse manure and urea applications and by the presence of positive charge on surfaces of nanocrystalline materials to retain anions. According to Auxtero et al., the positive charge of subsurface allophane-rich horizons allowed Andisols to retain mobile anions such as nitrate, which is beneficial for crops. Further, the higher pH values of the IH soil may contribute to more favorable conditions for nitrification, leading to the lower NH4+ and higher NO3− concentrations found in the profile.

Similar results were reported following forest timber harvest, where soil NO3− increased up to 8-fold shortly after harvest as compared to pre-harvest conditions. Previous studies measured anion exchange capacity of allophane-rich soils ranging from 0.4 to 12.2 cmolc kg−1. This range of AEC values corresponds to 56 to 1700 mg NO3-N kg−1 (1 cmolc kg−1 of a monovalent anion such as NO3− corresponds to 0.01 mol N kg−1, or 140 mg NO3-N kg−1), which appears sufficient to accommodate the KCl-extractable NO3− concentrations that range up to 35 mg NO3-N kg−1 in the IH profile. Evaluation of N content in Java Island, Indonesia, with different soil types and land uses showed that higher soil N content in Andisols was associated with the presence of nanocrystalline materials. Retention of NO3− within the soil profile reduces nitrate leaching and provides a readily available N supply for deeply rooted crops. Under pine forest vegetation, the soil P retention was consistently high throughout the entire pedon. In contrast, the IH land use, which has received applications of horse manure for the past 7 years, showed appreciably lower P retention in the upper 40 cm. The decrease in P retention and the increase of available P in the upper horizons of the IH profile were related to application of horse manure and inorganic SP36 fertilizer. These P dynamics could be associated with competition between organic functional groups derived from the horse manure and the applied P for sorption to the hydroxyl functional groups of the allophanic materials. Organic matter functional groups may block some reactive functional groups on allophanic materials, which in turn reduces P retention. In addition, the increase in pH from 4.5 in the PF soil to pH 6.1 in the topsoil of the IH soil may contribute to reduced P retention. This is supported by the negative correlation coefficient between P retention and soil pH.

Maximum phosphate sorption in Andisols often occurs in the pH range of 3.0–4.5 and decreases with increasing soil pH. Thus, the application of animal manure and lime appears to be an effective nutrient management strategy to enhance P availability in these high-P-fixing Andisols. Higher extractable S concentrations in the PF soil may be due to a combination of enhanced capture of H2S/H2SO4 emissions by the canopy of the pine forest, low S uptake by the pine forest, and/or low soluble PO4 concentrations that could displace sorbed SO4. The depth trend for extractable SO4-S consisted of lower concentrations in the topsoil than the subsoil for the pine forest and horticultural land uses. This is related to competition with P and organic matter and to the increase in soil pH for horticultural crops. Previous workers have reported that sulfate and phosphate compete for the same anion-binding sites, but P is adsorbed more strongly than sulfate because phosphate ions are able to form very strong inner-sphere complexes. In contrast, sulfate forms weaker inner-sphere and outer-sphere SO4 sorption complexes on short-range ordered materials, with the former becoming more dominant with decreasing pH and increasing sulfate concentrations. Pigna and Violante reported phosphate sorption 2–5 times greater than sulfate in Andisols; with increasing pH, phosphate sorption decreased slightly, whereas sulfate retention decreased dramatically. In addition, organic matter competes more effectively with sulfate than with phosphate for sorption sites, resulting in low S availability in the topsoil horizons with high organic matter in the present study. Micro-nutrient availability is typically greater in more acidic soils due to higher metal solubility. In the present study, however, micro-nutrient availability was higher in the horticultural soils, which have a higher pH. In particular, the addition of horse manure appears to provide both a source of micro-nutrients and high dissolved and particulate organic matter concentrations that enhance metal solubility by complexation.

Therefore, manure additions appear to provide a strong benefit with respect to micro-nutrient availability for agronomic crops.

The Andisols in this study contained much higher C stocks to a depth of 1 m compared to the global averages for tropical Oxisols and Ultisols of 9.7 and 8.3 kg m−2, respectively. For further comparison, Oxisols and Ultisols from the Brazilian Amazon had C stocks of 8.5 to 10.5 kg m−2, which were 2–3 times lower than those of the tropical Andisols in this study. These comparisons indicate that Andisols have a substantially higher capacity than other mineral soils to preserve organic matter. These results are consistent with those of Torn et al., who concluded that Andisols contain about twice as much organic C per m2 as Oxisols or any other soil order, except for Histosols and Gelisols. Oxisols and Ultisols are dominated by low-activity clays that provide less active mineral surfaces for physical and chemical stabilization of soil organic C. In contrast, N stocks of our tropical Andisols were similar in magnitude to those of Oxisols and Ultisols in the Brazilian Amazon, which varied from 0.71 to 2.3 kg N m−2, but mostly from 0.7 to 1.3 kg N m−2, in the upper 100 cm. Therefore, the Andisols of this study appeared to store organic matter with a higher C/N ratio than Amazonian Oxisols and Ultisols. Overall, soil carbon and nitrogen stocks in the upper 1 m of soil profiles increased in agricultural soils compared to the pine forest soil. These data appear to suggest degradation of soil organic C and N in the topsoil following conversion to agriculture but compensation by elevated C and N in subsoils. This condition results from pedon redistribution of organic C from topsoil to subsoil horizons. This redistribution may be attributed to a decrease in surface litter under agricultural land use with deeper incorporation of organic matter by tillage, and/or to the deeper rooting systems of some horticultural plants. Alternatively, the appreciably higher bulk densities of the agricultural soils contributed to higher organic C stocks compared to the pine forest soil, suggesting a role for compaction in increasing C stocks on a volumetric basis. Finally, it is possible that periodic volcanic ash deposits have resulted in burial of organic-rich horizons, leading to the high organic matter in subsurface horizons. Importantly, in spite of intensive agricultural production for > 100 years, there was no appreciable loss of organic matter from these soils as has been documented in many soils following conversion of forest vegetation to agricultural purposes. Similarly, Panichini et al. reported that disturbance of Andisols in Chile by forest management did not alter carbon storage. They posited that organic matter was stabilized by amorphous materials and organo-mineral complex formation, and that the humid climate protected soils from irreversible drying and potential carbon loss. The ability of Andisols to strongly sequester and preserve organic C under various land-use/land-management practices was demonstrated by the increase of organic matter in subsoil horizons of agricultural soils compared to the forest soil. In contrast, the lack of an organic matter buildup in the topsoil of the IH soil receiving horse manure for the last 7 years, relative to the FH soil, indicates that the added horse manure is quickly mineralized to provide nutrients to the horticultural crops. In addition, the increased N content from inorganic fertilizer may accelerate mineralization of organic C.
On the other hand, zero tillage contributed to the buildup and preservation of organic matter in the FH soil compared to the intensive cultivation of the IH soil.

The strong correlation between organic C and Alp, and the lack of a significant correlation between organic C and Sio, suggest that Al-organic complexes are more important than allophane in preserving organic matter in these tropical Andisols. Microbial biomass C trends showed a positive relationship with total C and extractable DOC. The most evident change with respect to land use was the large decrease in MBC in the topsoil upon conversion from pine forest to agricultural production. Surprisingly, the lowest MBC values were found in the IH soil, which received regular additions of horse manure for the past 7 years. Extractable DOC is considered an important carbon source for the microbial community and often correlates with microbial biomass. Extractable DOC represented 1.2–1.6% of total soil organic C for the PF and TP soils compared to < 1% for the IH and FH soils. This suggests that changes in vegetation possibly resulted in changes to the chemical nature of the organic matter affecting DOC solubility, which may affect substrate availability for the microbial community. Overall, agricultural practices had a strong impact in reducing microbial biomass C in topsoil horizons as compared to the pine forest. The microbially labile pool of organic C is revealed by C mineralization rates during the incubation period. The overall CO2 mineralization rates followed PF > TP > IH > FH in both topsoil and subsoil horizons. This agrees well with the highest DOC concentrations being found in the PF soil and indicates that more easily decomposable organic C substrates were available in the PF soil than in the agricultural land uses. Interestingly, CO2 emissions shifted to IH > PF > TP > FH after day 70 in the topsoil, indicating depletion of easily decomposable C in the PF and TP soils. The much lower C mineralization rates in the subsoil than topsoil horizons were accounted for in part by the higher amorphous material content in the former. Chevallier et al. measured transformation of organic matter in volcanic soils by CO2 respiration and showed that decomposition decreased as the soil allophane content increased. The low C mineralization rates for the FH profile are likely due to depletion of the microbially labile C pool, as new organic carbon inputs were minimal over the last 7 years due to fallowing of the soil. This suggests that the topsoil contains more labile C substrate than the subsoil horizons. According to Kavdir et al., fresh litter contained labile and easily decomposed materials, which mainly consisted of O-alkyl C. Inputs of new organic matter will be preferentially incorporated into the topsoil horizons, and organic matter in the subsoil horizons is likely more strongly stabilized by physical and chemical mechanisms. The formation of metal–humic complexes was shown by the positive linear correlation between dissolved organic C and Al and Fe extracted by Na-pyrophosphate. Coefficients of determination for Al and Fe were 0.84 and 0.80, respectively, suggesting that about 80% of dissolved organic C was bonded to the short-range ordered materials. The fraction of soil organic C bonded to Al and Fe varied from 25 to 50%, with the magnitude following TP > FH > PF > IH in the topsoil and middle portions of the profiles. In contrast, the organic carbon bonded to metals in the lower pedon followed IH > TP ∼ PF > FH. Previous studies on mineral control of the carbon pool in Andisols on Réunion Island showed that the largest proportion of organic matter occurred as organo-mineral complexes.

The Department of Climate Change and Meteorological Services releases a forecast for that season

Networks are defined as "nodes of individuals, groups, organizations, and related systems that tie in one or more types of interdependencies". Interdependencies might include shared values, ideas, and information exchanges that are critical to the success of individual actors as well as the network as a whole. Within a social network exists a knowledge network with "heterogeneously distributed repositories of knowledge and agents that search for, transmit, and create knowledge". In the context of Malawi's extension system, differences in worldviews, lack of coordination, and diversity of messages have remained challenges in providing effective information to farmers. The disconnect of stakeholders cited by Masangano, Kambewa, Bosscher, and Fatch promotes misconceptions and misinformation to farmers by extension providers and affects the quality of extension services throughout the extension system. Therefore, it was necessary to evaluate the structure of organizations providing extension services, the engagement among stakeholders operating within the network, and the transfer of knowledge within the extension system. In fact, an understanding of contemporary agricultural knowledge networks "highlights the importance of networks of actors who cooperatively work together to deliver relevant knowledge to the right people at the right time and place". The ability of extension providers in Malawi to communicate consistent messages to farmers was not only dependent on their access to resources, but also on the strength of social ties within the network itself. Using social network analysis allowed these networks, social ties, and information transfer to be analyzed. Relationships or ties within the network were evaluated by understanding the direction of ties and measures of centrality, which represent the importance of actors relative to one another.

Several types of centrality measures have been identified by Wasserman and Faust and are essential in evaluating the importance of different actors within the network. The first type of centrality measure is degree centrality, which concerns the number of ties directly related to an actor or organization of interest. This measure is also differentiated by the number of ties coming to an actor (in-degree) and the number of ties leaving an actor (out-degree). The second type of centrality measure is betweenness centrality, which refers to the number of times an actor is situated between two other actors. This measure captures which actors hold the network together, where key paths of communication exist, and where network breaks could occur. The third type of centrality measure is closeness centrality, which relates to the shortest distance between actors relative to a certain starting point. An actor with low closeness centrality must pass through many intermediaries to reach other actors within the network. Bodin and Prell also describe the importance of evaluating the cohesion of the whole network through a measure of network density. Network density is the proportion of ties that exist throughout the whole network and reveals the level of connectedness or cohesion present in the network. Cohesion within the network describes the extent to which the network is interlinked and united. Thus, this information was important in understanding the stakeholder connections within Malawi's extension system.

The private sector organizations are involved in activities to develop structured markets for farmers' products, provide inputs for crop production processes such as fertilizers and pesticides, and facilitate farmer trainings focused on specific value chains and commodities. The main clientele for private sector participants includes smallholder farmers and public sector actors whose employers pay private companies to learn about specific topics. One private sector participant explained, "we focus on closing the finance gap affecting most small-scale farmers who are forced to sell their produce at harvest because they need money to re-pay the cost of inputs and prepare for the next season."
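For readers less familiar with these measures, the short sketch below computes in-degree, out-degree, betweenness, and closeness centrality, plus overall network density, for a small toy directed network using the networkx library. The organization names and ties are hypothetical and do not reproduce the network data collected in this study.

```python
# Toy illustration of the centrality and density measures described above.
# Nodes and edges are invented for demonstration only.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("DAES", "NGO_A"), ("DAES", "NGO_B"), ("DMCCS", "NGO_A"),
    ("Research_Inst", "DAES"), ("NGO_A", "Farmer_Org"), ("NGO_B", "Farmer_Org"),
])

in_degree = nx.in_degree_centrality(G)      # ties coming to an actor
out_degree = nx.out_degree_centrality(G)    # ties leaving an actor
betweenness = nx.betweenness_centrality(G)  # how often an actor sits between two others
closeness = nx.closeness_centrality(G)      # how close an actor is to all other actors
density = nx.density(G)                     # proportion of possible ties actually present

print(f"Network density: {density:.2f}")
for node in G.nodes:
    print(f"{node:13s} in={in_degree[node]:.2f}  out={out_degree[node]:.2f}  "
          f"betweenness={betweenness[node]:.2f}  closeness={closeness[node]:.2f}")
```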

The annual number of farmers reached by private sector actors ranges from 350 for a small farmer training company to 5,000 for a large private input supplier. The organizational structure of the larger company is fairly hierarchical, with field staff, subject matter experts, and company heads. The organizational structure of the two smaller companies is similar to a cooperative, where each employee holds multiple positions and is also a farmer. Although not directly asked during interviews, five participants mentioned having advanced degrees, and the majority of participants hold high-level positions within their organizations as Directors, Managers, Subject Matter Experts, Specialists, or Team Leaders.

Participants from international NGOs are involved in a wide range of activities focused on improving food security, providing emergency response during disasters, supporting national health and nutrition outcomes, and building the capacity of local communities to sustainably grow food and improve rural livelihoods. A common word used by international NGO participants to describe their organization's activities was "resilience." One participant noted, "we're trying to build resilience with these farmers. We identify farmers, and then come up with interventions that will build their resilience." The main clients for international NGO participants are smallholder farmers who participate in agricultural interventions, public sector actors who receive funding for extension activities, and research institutions that receive support for technological innovations. According to interview participants, the annual number of clients served by international NGOs ranged from 5,0000 – 148,000 depending on the number of projects implemented in Malawi. The organizational structure of all participating international NGOs is fairly similar and includes Extension Staff with specific expertise, Project Managers in Malawi, Program Managers located internationally, and international Program Directors overseeing programs in multiple countries. Participants from farmer organizations are involved in activities including agribusiness and marketing, agricultural development and crop production, and the improvement of farmer livelihoods. As one participant noted, "we try to assist these farmers and make their farming a business."

Participants also explained how they advocate for farmers on a local and national level through proposed policy changes in the National Assembly. Organizations that support farmers serve between 7,000 and 1,000,000 farmers each year. The largest of the farmer organizations operates with a clearly defined organizational structure that begins with individual farmers. Around 10–15 farmers come together to form a Club, several Clubs form Group Action Committees, Group Action Committees come together to form Farmer Associations, and select farmers from the Associations serve on the Executive Committees of the Association. A Board of Directors manages each Association, and the National Farmer Organization headquarters provides support and management for each of the 54 associations in Malawi. Malawi NGOs engage in activities geared towards customizing and disseminating agricultural messages from the public sector or international NGOs to farmers throughout the country. One participant described their organization as "knowledge brokers," noting that they did not develop content, but customized and tailored messages to fit the needs of specific farmers. The NGOs explained their ability to reach large numbers of farming households through ICTs such as radio and served between 8,000 and 2,000,000 farmers annually. Participants from the local NGOs noted the multitude of positions they and their colleagues hold within their organizations. One participant commented, "I'm the manager of the organization, but I'm also doubling as the Field Officer, which means I have a big job to do. Sometimes it becomes a big challenge for me to fulfill all my duties at once."

Finally, government representatives engage in a wide variety of activities including supporting extension services in livestock, crop production, environmental affairs, fisheries, and irrigation; disseminating agricultural messages to farmers; and supporting rural livelihoods through capacity-building efforts. Two common phrases mentioned by government participants in their explanation of program activities were "climate advisory services" and "nutrition sensitive agriculture." These participants noted how their organizations strive to incorporate both cross-cutting themes into the interventions they implement with farmers. The number of farmers served by the governmental organizations ranged from 24,000 in a single section to 4,200,000 at DAES. The country is divided into what we call agriculture development divisions. We have eight ADDs, and those areas are divided based on the agro-ecological zone. One ADD covers multiple districts with similar agricultural practices. Below the ADDs we have twenty-eight districts, but we actually have 31 District Agricultural Development Offices because some districts are large and split into two or three offices. Below the DADOs, we have what we call Extension Planning Areas, and we have 204 EPAs. Below the Extension Planning Areas, we have sections. This is the smallest unit. The sections are where we have the agricultural extension officers on the front lines who interface with farmers. Messages are developed by the MoAIWD and then disseminated to the DAES through the extension system described above. One of the most important research questions posed in this study was, "how is information generated in Malawi's extension system to address climate change?"

In order to answer this question, I sought to understand which organizations develop content and what the process is for generating and improving messages that are disseminated throughout the extension system to address climate change. A total of 85 organizations from international NGOs, Malawi NGOs, private industries, farmer groups, government agencies, and research institutions were referenced by participants. The number and types of organizations that were referenced by participants during interviews are shown in Table 3. One-directional relationships described by participants are shown in Figure 6, with the size of each node indicating its level of betweenness with other organizations in the network. The hierarchical development of content by a few organizations operating within Malawi's extension network is also illustrated in Figure 6. Seven out of the top ten content developers referenced by participants are government organizations, one is from the U.S. government, one is an international research institution, and one is a Malawi NGO. It is also evident that the majority of Malawi's extension providers were not referenced by participants as content developers and therefore do not have any directional arrows present. Measures of centrality for the top ten organizations developing content to address climate change in Malawi's extension system are shown in Table 4. The two organizations with the highest in-degree scores are DAES and DMCCS. Participants noted that they rely on these organizations to develop messages that are then customized before the information is disseminated to farmers. Participants explained that DAES provides technical agricultural messages to extension providers, while DMCCS develops and shares information regarding national and local weather conditions. Several participants noted that agricultural content originates from partnerships and information sharing between DAES and other organizations such as DARS, CGIARs, and other MoAIWD departments. A representative from DAES explained, "the technologies that come to us normally come from the research institutions like the CGIARs with leadership from the Department of Agricultural Research in Malawi. Our function is then to take the different technologies generated by research and improve them." Although DAES has the highest in-degree score, DAES staff noted that the technologies they share throughout the extension system originate from research organizations outside Malawi and research departments within the country. Instead of developing the technologies, the role of DAES is to customize and tailor messages about agricultural technologies to meet the needs of specific audiences and communities. Additionally, although MoAIWD has a high in-degree score of 5, several participants noted that technical messages are typically developed through MoAIWD's technical departments before being presented to top officials within the ministry. It should also be noted that several participants indicated that they did not know which departments within MoAIWD develop climate adaptation messages for farmers. DMCCS has the second highest in-degree score and was commonly referenced as a content developer by participants. One participant shared the type of information provided by DMCCS, commenting: Climate information is provided to farmers at the beginning of each growing season.
Participants explained that DMCCS staff analyze seasonal, monthly, weekly, and daily weather forecasts and share that information with farmers and extension providers. Information about weather conditions is either disseminated directly to farmers through mass media such as ICTs or by extension providers who deliver messages to a specific locality and offer support to farmers to prepare for the growing conditions of a particular season. Although not as commonly referenced, several participants also mentioned weather content being developed and disseminated by DoDMA. The U.S. government-funded project SANE also has a high in-degree score of 4.

Food-aid policies have been introduced to complement farm legislation

It was suggested that heterogeneity is a major cause of the gradual process of diffusion of new technology in agriculture. Farm size differences were found to be the major explanatory variable for differences in the tendency to adopt "lumpy" technology such as tractors and computers. Other dimensions of heterogeneity among individuals that were found to substantially affect technological choices include education, age, information, and risk preferences. The surveys present evidence that differences in physical features, such as weather, land quality, and infrastructure, were responsible for differences in adoption patterns between regions. Heterogeneity of the farm population is reflected by the partial participation in many government commodity programs. Rausser, Zilberman, and Just demonstrated that high-quality lands are more likely to be utilized by participants in diversion programs, who tend to divert low-quality lands. Calvin found that size, financial situation, and productive capacity affect program participation choices. Heterogeneity and variability have to be incorporated into policy modeling for several reasons. First, the use of aggregate relationships, which assumes average behavior to be representative, may lead to very erroneous policy predictions. For example, an analysis of the impact of pollution regulations with a representative farm is likely to conclude that the introduction of a pollution tax is likely to reduce both total output and the pollution/output ratio. Hochman and Zilberman showed that, for the case of a polluting industry where more cost-effective, modern producers are also more pollution intensive, a pollution tax tends to reduce total pollution but may increase the pollution/output ratio since it may cause the older, least cost-effective, and polluting producers to stop operations.

They also showed that, with heterogeneity, a tax may attain a regional pollution target at least cost, but a standard may attain it with more output and cause smaller increases in price. Second, recognition of heterogeneity is essential for analyzing intergroup equity efforts. There has been much concern about the relative impacts of agricultural policies on well-being across farms of different sizes, on the distribution of income within agriculture, and on the structure of agricultural industries. Third, impacts of regulation may vary drastically across regions and must be spelled out for distributional analysis. Table 2, taken from Lichtenberg, Zilberman, and Harper, demonstrates the differences in regional welfare effects associated with government regulation using parameters of the cotton industry in the late 1970s. The table presents the relative welfare impacts of regulations that increase producer costs by 1 percent. Cotton producers are divided into four regions, and the impacts of policies affecting each of these regions are estimated for all producer groups, consumers, and society as a whole. The analysis recognizes export demand for the product, and impacts of the policies on export revenues are also considered. It shows that the overall effects of a 1-percent increase in cost in any of the regions are quite small. However, increases in cost in three of the regions are likely to increase overall U.S. welfare due to increases in price and export revenues. On the other hand, an increase in California's cost will reduce overall welfare because of the low supply elasticity of producers in that state. A 1-percent increase in cost across the board will have a substantial effect on overall domestic welfare and reduce consumer welfare. It will reduce the welfare of growers in the Southeast and Delta, with high cost and elastic supply, and increase the welfare of California and Plains producers, who have lower cost and inelastic supply. In relative terms, the distributional impacts reflecting heterogeneity among producers in this example are much larger than the overall efficiency effects.

Agricultural economists, such as Schultz and Cochrane, have realized that some salient features of the agricultural sector led to an "oversupply" trap, namely, situations where rates of return in the agricultural sector are far below the rest of the economy and the income of the rural sector does not keep up with the economy as a whole. The inelastic nature of the demand for agricultural products, the constant development of new agricultural technologies and product varieties, and the "rigid" nature of agricultural assets and inputs are among the causes of this oversupply problem. In recent years, however, it became quite clear that government policies aimed at addressing the "agricultural oversupply problem" made the situation worse. Government price support and inventory management programs actually contributed to increased production and inventory accumulation. Income-support schemes, such as diversion payments and even some set-aside programs, are likely to be contributors to oversupply and, through the resulting low prices, cause the need for further subsidization of the farm sector. These policies did not decouple income-support levels from actual production levels. They presented incentives to farmers to overinvest and overproduce. A recent study by Just, Lichtenberg, and Zilberman demonstrates empirically that deficiency payments contributed substantially to extensive introduction of center-pivot irrigation and overexploitation of resources in the Midwest. Rausser, Zilberman, and Just argued that, because of land quality heterogeneity, farmers tend to set aside lower-quality lands, and that action serves to increase per-acre yield after diversion. High and secure target prices tend to encourage use of variable inputs above what is suggested by market prices, and that serves as another source of increased supply. Finally, it has been recognized that the use of past performance as a base for payment has made target prices serve as a "de facto" price with respect to long-term decisions, which has contributed to an upward bias in agricultural supply.

Thus, an important requirement for new policy regimes in agriculture is that they will not contribute to the oversupply problem but, rather, will mitigate it. Another major concern, closely related to the "oversupply" problem, is the instability of agricultural production levels and prices. The "oversupply" problem, caused by inherent properties of the agricultural sector and modified by government policies, has resulted in agricultural prices and returns that, on average, are too low. However, prices and product availability have had substantial fluctuations, and the extensive economic literature on stabilization has demonstrated that these fluctuations have been sources of much welfare loss. The instability and randomness of agriculture mentioned above have been major contributors to price instability, but other factors have also been sources of instability. Moreover, the inelastic nature of demand for agricultural products has magnified the fluctuation in prices in response to variations in supply. Government has constantly attempted to reduce the variability of agricultural prices through inventory control policies. These policies have been very costly because they have led to rapid accumulations of grain stocks, which are expensive to carry, and have also led to inventory reduction expenditures such as those associated with the payment-in-kind program of 1983. Research on the economics of stabilization has indicated some of the pitfalls associated with programs aimed at stabilizing prices. They argued that public inventory control activities, in part, served to replace private storage activities but, in essence, were a form of income transfer to producers. They also argued that public inventory support programs may reduce economic welfare by their tendency to lead to excessive stock accumulation. Moreover, it seems that some factors that contributed to the oversupply problem also resulted in the excessive inventory problem, including some of the income-support policies of the past. Thus, a policy reform aimed at addressing the oversupply problem should reduce the tendency to excessively accumulate inventories while containing agricultural prices within a reasonable range. The low prices and returns for producers in the farm sector, representing excessive productive capacity and requiring increasing government supports, have been major issues of concern and the reasons for government policies. The sustainability and the environmental consequences of agricultural activities have also become subjects of much concern. A major cause of the excessive supply and production in agriculture is the introduction and intensive use of modern inputs such as chemical fertilizers and pesticides. Many of the inputs used by agriculture are exhaustible resources; they include water stocks and quality, topsoil, and vulnerability to pesticides. Continuous depletion of these agricultural resources risks the sustainability of existing production levels, not to mention the ability to increase production in the long run. Moreover, the use of modern inputs in agriculture has resulted in substantial externality costs.

Agricultural chemicals are major contaminants of bodies of water, reducing the productivity of many fisheries and risking the health of consumers. For example, the use of DBCP in California has resulted in a substantial cost of providing safe drinking water. While it is difficult to quantify the costs of groundwater contamination by agriculture, a partial estimation of these costs by Christensen and Ribaudo showed it to be higher than $2.5 billion annually. Thus, the "flip side" of the "excessive supply" problem is the excessive depletion of agricultural resources and the negative externalities imposed by agriculture. Note that a reduction in production levels may alleviate both problems, and policies addressing one problem may also serve to address the other. Another issue of concern is maintenance of the competitive structure of agriculture and the viability of the traditional life-style of rural communities. This concern is of much importance in Europe, where countries such as France and Germany have made substantial efforts in preserving their rural communities and life style. Technological changes, combined with the rise in labor cost and reduction in food prices, tend to increase the size of viable agricultural operations and may result in a structural change in the average farm size and a substantial reduction in the number of farms. This process endangers the survival of many "family farms" and preservation of the rural sector as we know it. The government is pressured to step in to slow down this process and mitigate its impacts. Equity and distributional considerations play a crucial role in policy design. As Peltzman argued, the distributional effects of a policy reform plan determine its political palatability. Therefore, policy analysis and design efforts have to estimate the distributional implications of a proposed policy and suggest transfer and compensation arrangements that will assure the policymaker political support. One has to distinguish between intersectoral and intrasectoral considerations and address both in policy analysis. One manifestation of intersectoral heterogeneity is farm size distribution. Regional heterogeneity is another source of concern, and regional considerations are especially important in determining the political response to agricultural policy reform. Furthermore, the intraregional impacts of certain policies may be larger in relative terms than the other efficiency effects. Regional impacts should be assessed in the design of policy reform, and regional considerations should have a high priority in the design of compensation schemes needed to politically facilitate welfare-improving policies. Intrasectoral effects include impacts of policies aimed at one agricultural commodity on economic welfare associated with the production and consumption of other products. The first type of impacts includes assessment of, say, the effects of sugar import quotas on corn production, or the impacts of policies affecting the supply and price of corn on the livestock sectors. Agricultural policies have been viewed over the last 30 years as part of a food policy that aimed at providing sufficient and affordable food to the U.S. population. Policy reform should explicitly address impacts of policies on consumers' welfare, especially the welfare of the poor, and introduce mechanisms to address these issues. Finally, a major impetus for the design of agricultural policy reforms is the heavy burden that the finance of agricultural programs imposes on government.
Implied government expenditures should be a key criterion for assessment of any new policy design. The changes and problems of the agricultural sector dictate several key objectives that a comprehensive policy may need to meet. These objectives are to secure food supplies at reasonable prices; to prevent hunger and assure adequate nutritional intake for critical population groups; to assure stable and fair returns and income to farmers and the rural sector; to control depletion of agricultural natural resources and work toward a sustainable agricultural system; to maintain environmental quality and control the negative environmental side effects of agricultural production; to protect the health and safety of farmers, farm workers, and consumers; to preserve the integrity of the rural sector and protect the viability of "family farms" and the competitive nature of agricultural industries; to obtain efficiency in resource allocation and production patterns; to promote innovation and flexibility in agriculture and food production; and to reduce the burden imposed on government financing of agricultural and food programs and policies. Similar objectives were presented by Brandow and Cochrane.

Control plants were watered to soil saturation with nutrient solution every day

To the best of our knowledge, the response of plants with decreased root exodermal suberin levels to water limitation has never been investigated. The importance of plant radial and cellular anatomy has also long been recognized as critical to our understanding of the role of plant roots in water uptake in the face of water deficit. Therefore, our findings provide direct evidence, via genetic perturbation, for the role of suberin in a specific cell type mediating tomato's adaptive response to water deficit. Further, they impart a model by which exodermal suberin barriers contribute to whole-plant water relations in the absence of a suberized endodermis. While our findings are informative about the importance of suberin in the maintenance of transpiration and stomatal conductance under soil water deficit, our conclusions are limited to a particular stage of plant growth. Changes in response to water limitation in the field, particularly with genotypes with modified suberin that impart better maintenance of water potential, remain to be investigated. Suberin in plant roots has recently been proposed as an avenue to combat climate change, including via sequestration of atmospheric CO2 as well as by conferring drought tolerance. This study provides evidence that root suberin is necessary for tomato's response to water-deficit conditions. Increasing suberin levels within the root exodermis and/or the endodermis may indeed serve as such an avenue. The constitutive production of exodermal suberin in S. pennellii, a drought-tolerant wild relative of tomato, certainly provides a clue that maintenance of suberin in non-stressed and stressed conditions may result in such a benefit.

However, trade-offs of such an increase must also be considered. Increased suberin levels have been associated with pathogen tolerance, but suberin can also serve as a barrier to interactions with commensal microorganisms and constrain nutrient uptake, plant growth, or seed dormancy. Regardless, this complex process serves as an elegant example of how plant evolution has resulted in a gene regulatory network with the same parts but with distinct spatial rewiring and contributions of the different genes. Collectively, this rewiring results in the distinct but precise spatiotemporal biosynthesis and deposition of this specialized polymer to perform the equivalent function of endodermal suberin in a plant's response to the environment.

Seedlings of SlCO2p:TRAP and AtPEPp:TRAP cv. M82 were transplanted into 15 cm × 15 cm × 24 cm pots with Turface Athletic Profile Field & Fairway clay substrate pre-wetted with a nutrient water solution. Plants were grown in a completely randomized design for 31 days in a growth chamber at 22 °C, 70% relative humidity, a 16 h/8 h light/dark cycle and 150–200 µmol m−2 s−1 light intensity. For 'well-watered' conditions, we maintained substrate moisture at 40–50% soil water content. For the water-deficit treatment, we withheld water from the plants for 10 days before harvest, and for waterlogged conditions, we submerged the pot until the root–shoot junction. We harvested the roots as close to relative noon as feasible by immersing the pot into cool water, massaging the root ball free, rinsing three times sequentially with water, dissecting the root tissues and flash-freezing with liquid nitrogen. We harvested the lateral roots and 1 cm root tips of adventitious roots.

Sequencing libraries of adventitious roots were generated for each line in control and waterlogging conditions, and from lateral roots in control, waterlogging and water-deficit conditions, in four biological replicates per genotype/treatment, except for SlCO2p:TRAP lateral roots in control conditions. Total RNA was isolated from these roots as previously described, and non-strand-specific, random-primer-primed RNA-seq library construction was performed as originally described. RNA-seq libraries were pooled and sequenced with the Illumina HiSeq 4000.

Seedlings were transferred to 0.5 l cones containing Turface pre-wetted with a nutrient water solution. All pots were weight adjusted, and a small set of pots was dried so that the percentage of water in the soil could be calculated. Plants were then grown in a completely randomized design for 3 weeks in a growth chamber at 22 °C, 70% relative humidity, 16 h/8 h light/dark cycle and ~150 µmol m−2 s−1 light intensity, and watered to soil saturation every other day. At the end of the first week, vermiculite was added to limit water evaporation from the soil. After 3 weeks, plants of each line were randomly assigned into two treatment groups and exposed to different treatments for 10 days. Water-limited plants were exposed to water deficit by adjusting pot weights daily with nutrient solution until a target soil water content of 40–50% was obtained. On the day of harvesting, between 09:00 and 12:00, stomatal conductance and transpiration were measured on the abaxial surface of the terminal leaflet of the third leaf or the youngest fully expanded leaf using a LICOR-6400XT portable photosynthesis system. Light intensity was kept at 1,000 µmol m−2 s−1, with a constant air flow rate of 400 µmol s−1 and a reference CO2 concentration of 400 µmol CO2 mol−1 air. The third primary leaflet was collected for measuring relative water content using a modified version of a previously established protocol. Fresh leaves were cut with a scalpel leaving a 1-cm-long petiole, and the total fresh weight was measured. Leaves were then placed in individual zipper-locked plastic bags containing 1 ml of deionized water, making sure that only the leaf petiole was immersed in the solution.

Bags were incubated at 4 °C. After 8 h, leaves were taken out of the bags, placed between two paper towels to absorb excess water and then weighed to determine the turgid weight. Each sample was then placed into a paper bag and dried in a 60 °C dry oven for 3–4 days. Dried samples were weighed to obtain the dry weight, and relative water content was calculated as RWC (%) = (fresh weight − dry weight) / (turgid weight − dry weight) × 100. A section of the fourth leaf, containing the terminal and primary leaflets, was used to measure stem water potential using a pump-up pressure chamber. The root systems were harvested by immersing the cone into water, massaging the root ball free, rinsing and removing excess water with paper towels. The middle section of the root system was sectioned using a scalpel. Around 300 mg of the dissected root tissue were added to Ankom filter bags. Bags were transferred into a glass beaker, an excess of chloroform:methanol was added, and samples were extracted for 2 h. Fresh chloroform:methanol was replaced and the extraction was repeated overnight under gentle agitation. Fresh chloroform:methanol was added and samples were further extracted for 2 h. The extraction was repeated overnight twice with fresh chloroform:methanol. Finally, samples were extracted with methanol for 2 h. Methanol was removed and bags were dried in a vacuum desiccator for 72 h. Suberin monomer analysis was performed on these samples as described below.

Co-expression network modules were generated with the WGCNA package. Libraries were quantile normalized and a soft threshold of 8 was used to create a scale-free network. A signed network was created choosing a soft thresholding power of 8, a minModuleSize of 30, a module detection sensitivity deepSplit of 2 and a mergeCutHeight of 0.3. Genes with a consensus eigengene connectivity to their module eigengene of lower than 0.2 were removed from the module. Modules were correlated with upregulated genes in DCRi lines described previously.

Seven days after sowing, 50–100 primary roots per sample of length ~3 cm from the root tip were cut and placed in a 35-mm-diameter dish containing a 70 µm cell strainer and 4.5 ml enzyme solution, 20 mM KCl, 10 mM CaCl2, 0.1% bovine serum albumin and 0.000194% mercaptoethanol. Cellulase Onozuka R10, Cellulase Onozuka RS and Macerozyme R10 were obtained from Yakult Pharmaceutical. Pectolyase was obtained from Sigma-Aldrich. After digestion at 25 °C for 2 h at 85 r.p.m. on an orbital shaker with occasional stirring, the cell solution was filtered twice through 40 µm cell strainers and centrifuged for 5 min at 500g in a swinging bucket centrifuge with the acceleration set to minimal. Subsequently, the pellet was resuspended with 1 ml washing solution, 20 mM KCl, 10 mM CaCl2, 0.1% bovine serum albumin and 0.000194% mercaptoethanol, and centrifuged for 3 min at 500g. The pellet was resuspended with 1 ml of washing solution and transferred to a 1.7 ml microcentrifuge tube. Samples were centrifuged for 3 min at 500g and resuspended to a final concentration of ~1,000 cells per µl. The protoplast suspension was then loaded onto microfluidic chips with v3 chemistry to capture 10,000 cells per sample.

Cells were barcoded with a Chromium Controller. Messenger RNA was reverse transcribed and Illumina libraries were constructed for sequencing with reagents from a 3’ Gene Expression v3 kit according to the manufacturer’s instructions. Sequencing was performed with a NovaSeq 6000.

A trajectory analysis was run for the ground tissue cells after selecting and re-clustering the cell types annotated as exodermis and meristematic zone. Gene expression matrices, dimensionality reduction and clustering were imported into the dynverse wrapper from Seurat, and a starting cell was chosen within the meristematic zone cluster. Trajectory inference was run using the minimum spanning tree (MST) algorithm. The MST method and UMAP coordinates from Seurat were used as input for mclust. Predictive genes, that is, genes that were differentially expressed along the trajectory, specific branches and milestones, were identified and visualized with a heat map using dynfeature within the R package dynverse.

For sections, roots were divided into 1-cm segments, embedded in 4% agarose and sliced into 120-µm sections using a vibratome. Sections were then incubated in FY088 for 1 h at room temperature in darkness, rinsed three times with water and counterstained with aniline blue for 1 h in darkness. Confocal laser scanning microscopy was performed on a Zeiss Observer Z1 confocal microscope with the ×20 objective and GFP filter. For whole roots, suberin was observed in 7-day-old S. lycopersicum wild-type or mutant seedlings. Whole roots were incubated in methanol for 3 days, changing the methanol daily. Once cleared, roots were incubated in fluorol yellow 088 for 1 h at room temperature in the dark, rinsed three times with methanol and counterstained with aniline blue for 1 h at room temperature in the dark. Roots were mounted and observed with the EVOS cell imaging system using the GFP filter. Root sections were also stained with basic fuchsin. 1 cm segments from the root tip were embedded in 3% agarose and sectioned at 150–200 µm using a vibratome. The sections were stained in ClearSee with basic fuchsin for 30 min, then washed two times and imaged with a Zeiss LSM700 confocal microscope with the ×20 objective; basic fuchsin: 550–561 nm excitation and 570–650 nm detection. Hairy roots of SlASFT transcriptional fusions were imaged with the same confocal and objective, but with excitation at 488 nm and emission at 493–550 nm for GFP, and excitation at 555 nm and emission at 560–800 nm for red fluorescent protein autofluorescence.

An average of 80 mg fresh weight root tissue per biological replicate was washed and immediately placed in a 2:1 solution of chloroform:methanol. Subsequently, root samples were extracted in a Soxhlet extractor for 8 h, first with CHCl3 and afterwards with methanol, to remove all soluble lipids. The delipidated tissues were dried in a desiccator over silica gel and weighed. Suberin monomers were released using boron trifluoride in methanol at 70 °C overnight. Dotriacontane was added to each sample as an internal standard, saturated NaHCO3 was used to stop the transesterification reaction, and monomers were extracted with CHCl3. The CHCl3 fraction was washed with water and residual water was removed using Na2SO4. The CHCl3 fraction was then concentrated down to ~50 µl and derivatized with N,N-bis-trimethylsilyltrifluoroacetamide and pyridine at 70 °C for 40 min. Compounds were separated using gas chromatography and detected using a flame ionization detector as previously described.
Compound identification was accomplished using an identical gas chromatography system paired with a mass selective detector. Compounds were identified by their characteristic fragmentation patterns with reference to an internal library of common suberin monomers and the NIST database.

Tomato roots were fixed in 2.5% glutaraldehyde solution in phosphate buffer for 1 h at room temperature and subsequently fixed in a fresh mixture of osmium tetroxide with 1.5% potassium ferrocyanide in PB buffer for 1 h. The samples were then washed twice in distilled water and dehydrated in acetone solution in a concentration gradient. This was followed by infiltration with LR White resin in a concentration gradient and finally polymerization for 48 h at 60 °C in an oven under atmospheric nitrogen. Ultrathin sections were cut transversely at 2, 5 and 8 mm from the root tip, the middle of the root and 1 mm below the hypocotyl–root junction using a Leica Ultracut UC7, picked up on 2 × 1 mm copper slot grids and coated with a polystyrene film.
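The dotriacontane internal standard mentioned above anchors the quantification of the released monomers. The short sketch below illustrates the usual internal-standard calculation from FID peak areas; the function name, the assumed response factor of 1.0 for every monomer, and the example numbers are illustrative assumptions rather than details taken from the protocol.

```python
# Minimal sketch: internal-standard quantification of suberin monomers from
# GC-FID peak areas. A response factor of 1.0 for every monomer and the
# normalization to delipidated dry weight are illustrative choices, not
# details taken from the protocol above.

def monomer_amounts(peak_areas, is_area, is_amount_ug, dry_weight_mg):
    """Return monomer loads in ug per mg delipidated dry weight.

    peak_areas    -- dict of {monomer_name: FID peak area}
    is_area       -- peak area of the dotriacontane internal standard
    is_amount_ug  -- amount of internal standard added to the sample (ug)
    dry_weight_mg -- delipidated dry tissue weight (mg)
    """
    return {
        name: (area / is_area) * is_amount_ug / dry_weight_mg
        for name, area in peak_areas.items()
    }

# Example with made-up numbers:
print(monomer_amounts({"C18 omega-hydroxy acid": 1.2e6}, 2.4e6, 10.0, 80.0))
```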

There is no consensus about the best biomarker to assess human exposure to Mn

This study also highlights the importance of traditional management systems and of smallholder agriculture for the conservation of bee communities in transitioning tropical agroecosystems.

Manganese is a naturally occurring element found in soil, food, and water. It is mined for use in metal industries, as a gasoline additive, and as an agricultural fungicide. Mn is an essential nutrient, but at high doses it is neurotoxic and can result in a syndrome of neurologic deficits called manganism. There is a growing body of evidence that early life exposure to Mn, at much lower doses than those that cause manganism, may have detrimental effects on the developing organism. In school-aged children, lower cognitive scores have been associated with higher levels of Mn in water, in blood and in hair. Pregnancy and the first year of life are potentially vulnerable periods of exposure because Mn crosses the placenta during pregnancy, and young children have increased absorption efficiency and reduced excretion via bile compared to adults. Occupational studies have generally found no association between Mn inhalation exposure and urinary Mn concentrations. Blood Mn has been the most commonly used biomarker of exposure, but the short half-life of Mn in blood may miss periods of peak exposure, and Mn is well regulated by homeostatic mechanisms in adults. Higher hair Mn levels have been observed in children living near environmental sources of Mn. However, hair is susceptible to exogenous contamination, and methods used for cleaning hair samples prior to analysis may affect the accuracy of Mn measurement in hair. Mn levels in nails may be a valid biomarker of cumulative occupational Mn exposure 7–12 months earlier.

In a rodent study, Mn levels in nail clippings were strongly correlated with Mn levels in the brain. Available biomarkers have a limited ability to assess prenatal exposure to the fetus. Even maternal blood Mn levels measured during pregnancy do not accurately reflect exposure to the fetus, as cord blood Mn concentrations are consistently much higher than concentrations in maternal blood at delivery. Measurement of Mn in deciduous teeth offers a promising biomarker to characterize prenatal and early postnatal Mn exposure. Mn is incorporated directly into developing dentin, and current analytical techniques allow for detailed Mn measurements that can be related to specific time periods of neonatal development, beginning in the second trimester of pregnancy for incisors and ending 10–11 months after birth for primary coronal dentin in molars. In this study, we analyzed Mn in prenatal dentin of shed teeth from children enrolled in the Center for the Health Assessment of Mothers and Children of Salinas (CHAMACOS) study, a birth cohort of children living in the Salinas Valley. The fungicides maneb and mancozeb contain approximately 21% Mn by weight. Agricultural use of these Mn fungicides averages 160,000 kg per year in the Salinas Valley of California, and more than 90% is used on lettuce. Our goal was to determine whether Mn levels in dentin during the entire prenatal period were related to environmental, occupational and dietary sources of Mn exposure. We evaluated the contribution to prenatal dentin Mn (MnPN) from nearby agricultural Mn fungicide use, soil type, estimated concentrations of Mn in ambient air, farm work by the mother or other members of the household, Mn levels in house dust samples, and estimated prenatal Mn intake from maternal diet and tap water consumption. Between September 1999 and November 2000, the CHAMACOS study enrolled 601 pregnant women from health clinics in the Salinas Valley primarily serving low-income families. Participants were eligible if they spoke English or Spanish and qualified for state funding of well-pregnancy care.

A total of 537 liveborns were followed to delivery, of which 353 participated in a visit when the child reached 7 years of age. We collected 324 teeth from 282 children. We analyzed 237 of these teeth for Mn, selecting teeth that were free of obvious defects such as caries and extensive attrition. Analyses for this paper include children who provided a shed incisor with Mn levels measured in prenatal dentin. Written informed consent was obtained from all participants and all research was approved by the University of California, Berkeley Committee for the Protection of Human Subjects prior to commencement of the study. Mothers were interviewed twice during pregnancy and shortly after delivery. Trained bilingual, bicultural interviewers obtained information on maternal age, country of birth, education level, and household poverty level. Information was also obtained regarding potential sources of Mn exposure, including maternal farm work during pregnancy, number of farm workers in the home, number of farm workers who stored their clothes or shoes indoors, and glasses per day of tap water consumed by the mother. We abstracted information on the mother’s hematocrit to hemoglobin ratio from prenatal medical records for a subset of participants to assess maternal iron status during pregnancy. We conducted a home inspection during pregnancy. We collected latitude and longitude coordinates using global positioning system units and evaluated housekeeping characteristics. We also collected house dust samples, described in more detail elsewhere. Briefly, we collected dust from a one square meter area of the residence using a high volume surface sampler, which allows for the calculation of dust loading in grams per square meter of floor area to better characterize Mn in dust available for contact by children. We collected deciduous teeth beginning with the 7-year visit. Participants either mailed or brought in teeth as they were naturally exfoliated. The method for measuring Mn in human teeth has been described in detail elsewhere. Briefly, teeth are sectioned in a vertical plane, and microscopy is used to visualize the neonatal line and incremental markings in sectioned teeth samples. We determined the concentrations and spatial distribution of Mn using laser ablation inductively coupled plasma mass spectrometry.

Levels of tooth Mn were characterized by normalizing to measured tooth calcium levels to provide a measure independent of variations in tooth mineral density. Values are the area under the curve for points measured during the second and third trimesters separately, and combined into a prenatal average value. The coefficient of variation for five teeth measured on three different days ranged from 4.5% to 9.5%, indicating good reproducibility of 55Mn:43Ca dentin measurements. Of the 207 children with a tooth analyzed for Mn, 131 had dust samples collected from the maternal residence during pregnancy. We stored dust samples at −80 °C for approximately ten years before shipping them on dry ice for analysis. We passed the dust samples through a 150 μm sieve and digested them overnight in 7.5 N nitric acid. We quantified Mn concentrations in dust using inductively coupled plasma optical emission spectroscopy with a limit of detection of 0.1 μg Mn/g dust. We calculated Mn dust loading by multiplying the Mn concentration by the dust loading obtained by weighing the sieved dust sample and dividing by the area sampled. The California Department of Pesticide Regulation maintains the comprehensive California Pesticide Use Report (PUR) system. Pesticide applicators are legally required to report the active ingredient, quantity applied, acres treated, crop treated, date and location, to a resolution of one square mile, for all agricultural pesticide applications. We used geographic information system software to geocode residential locations using the latitude and longitude coordinates and to calculate kilograms of maneb and mancozeb reported in the PUR data for combinations of distance from the residence and trimester of pregnancy based on gestational age. We weighted fungicide use near homes based on the proportion of each square-mile Section that was within the buffer around a residence. To account for the potential downwind transport of fungicides from the application site, we obtained data from the five closest meteorological stations in the study area on wind direction to determine the percentage of time during each trimester that the wind blew from each of eight directions. We determined the direction of each Section centroid relative to residences and weighted fungicide use in a Section by the percentage of time that the wind blew from that direction during each trimester. Since 90% of agricultural Mn fungicides are used on lettuce in the Salinas Valley, we used Monterey County crop maps for spring, summer, and fall of 1997 to estimate the acres of lettuce within 1, 3, and 5 km of residences during each trimester.
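The Section-level weighting described above combines three factors: reported fungicide use, the share of each Section inside the residential buffer, and the fraction of the trimester that the wind blew from that Section's direction. The sketch below illustrates that calculation with hypothetical inputs; it is not the study's GIS workflow, and the field names are placeholders.

```python
# Minimal sketch of the weighting scheme described above: fungicide use in each
# square-mile Section is weighted by the proportion of the Section inside the
# buffer around a residence and by the fraction of the trimester that the wind
# blew from that Section's direction. All inputs are hypothetical placeholders.

def weighted_fungicide_kg(sections, wind_freq_by_direction):
    """sections: list of dicts with keys
         'kg_applied'         -- kg of maneb/mancozeb reported in the PUR data
         'fraction_in_buffer' -- share of the Section area within the buffer
         'direction'          -- compass octant of the Section centroid, e.g. 'NW'
       wind_freq_by_direction: {octant: fraction of time wind blew from it}
    """
    return sum(
        s["kg_applied"] * s["fraction_in_buffer"]
        * wind_freq_by_direction.get(s["direction"], 0.0)
        for s in sections
    )

sections = [
    {"kg_applied": 120.0, "fraction_in_buffer": 0.6, "direction": "NW"},
    {"kg_applied": 45.0,  "fraction_in_buffer": 1.0, "direction": "S"},
]
wind = {"NW": 0.30, "S": 0.05}
print(weighted_fungicide_kg(sections, wind))  # kg, area- and wind-weighted
```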

We linked the geocoded residential locations to the appropriate drinking water system using customer service area boundaries provided by local drinking water companies and the state of California. Public drinking water systems provide monitoring data on Mn concentrations sampled at water distribution points. However, Mn was not frequently detected in the study area during the pregnancy period for our cohort. Therefore, we used the average Mn concentration of all available samples from a water system to estimate long-term average concentrations of Mn in tap water. We estimated tap water consumption using questionnaire data on the number of glasses of tap water consumed per day. We multiplied consumption by the average Mn concentration to estimate the average daily Mn intake from tap water during each trimester. Mothers were interviewed about their dietary intake at the time of the second prenatal interview using a modified Spanish-language Block food frequency questionnaire specifically adapted for this study population. For each food item, frequency of consumption and usual portion size were assessed for the previous year. We estimated the mean Mn concentration for each food/beverage item and the daily Mn intake for each woman using the average frequency and portion size of each food and beverage reportedly consumed in a day, in combination with food-specific Mn estimates from Total Diet Study data from 1991 to 2005. We also included Mn intake from dietary supplements. Because iron deficiency might increase Mn uptake, we also estimated daily iron intake using similar methods for use as a covariate in the models. For a subset of participants, we also had hematocrit to hemoglobin ratios as a measure of anemia. We estimated exposure to other potential sources of Mn, including soil type at the residence, estimated Mn concentration in outdoor air, and motor vehicle traffic. To account for variations in soil Mn concentrations, we linked each residence, based on latitude and longitude coordinates, to detailed soil maps. To account for exposure via air inhalation, we assigned residences to a 2000 census tract and linked them to estimated 2002 Mn concentrations in ambient air from the U.S. EPA. We also estimated Mn emissions from vehicle traffic at each residence by calculating traffic density using previously published methods that involve summing vehicle kilometers traveled across all major road segments within 500 m of the residence. We used ANOVA for bivariate analysis of categorical predictor variables and the Spearman correlation coefficient to evaluate continuous predictor variables. We identified potential explanatory variables that were associated with MnPN levels at p < 0.2 for inclusion in multi-variable regression models. Mn tooth levels were skewed to the right, and we natural-log transformed the values to normalize the distributions for regression models. We used manual forward selection to derive final multi-variable linear regression models to determine which Mn exposure sources were significantly associated with Mn levels in dentin during the prenatal period. We also used backward elimination as an alternative method to identify significant predictor variables. We estimated the percentage change associated with each exposure source by exponentiating the regression coefficients, subtracting one and multiplying by 100. We evaluated outliers and reran models excluding one participant with a studentized t-score >3 that also had the lowest measured MnPN level.
Our final models included one using data available for all children with MnPN measured in teeth and another for the subset that had both tooth and house dust Mn measurements. We evaluated model fit using residual plots, log-likelihood tests and Akaike’s Information Criterion. We investigated nonlinear relationships between continuous predictor variables and tooth Mn levels using penalized splines with 3 degrees of freedom in general additive models. We used Moran’s Global I to assess residual spatial autocorrelation of MnPN levels for the final models. We compared Mn levels in prenatal dentin from the second trimester to levels from the third trimester using a paired t-test and ran separate regression models by trimester to evaluate significant predictors by trimester.
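Because tooth Mn levels were modeled on the natural-log scale, the percentage change reported for each exposure source follows directly from the regression coefficient, as described above; the short sketch below shows that conversion with made-up coefficients.

```python
import math

# The outcome (MnPN) is modeled on the natural-log scale, so a coefficient b for
# a given exposure source corresponds to a (exp(b) - 1) * 100 percent change in
# prenatal dentin Mn per unit increase in that source. Coefficients are made up.

def percent_change(beta):
    return (math.exp(beta) - 1.0) * 100.0

for source, beta in {"ln(dust Mn loading)": 0.05, "maternal farm work": 0.12}.items():
    print(f"{source}: {percent_change(beta):+.1f}% change in prenatal dentin Mn")
```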

The errors of field measurements of evapotranspiration can propagate to the optimization

The rapid expansion of perennial crop acreage in the past two decades raises concerns about increasing and hardening water demand. As perennial crops have a sizeable initial investment cost, fallowing perennial crops during drought results in greater economic losses than fallowing annual crops. Therefore, GSAs with a high percentage of water use by perennials will likely experience challenges in implementing sustainability management during drought. To minimize economic loss during a severe multiyear drought, a large buffer between the sustainability threshold and the actual water level should be maintained for those GSAs; that is, to prepare for a 5-year drought with conditions like those of 2014, a groundwater buffer should be targeted with an approximate depth of at least five times the values shown in Figure 13c, after accounting for surface water availability and total porosity of the aquifer. The NASA ECOSTRESS mission has an ongoing partnership with the USDA, the states of California, Florida, and Iowa, and many water districts on using remote sensing evapotranspiration to better inform water resources management. Anderson et al. also find that the improved temporal frequency of ECOSTRESS resulted in improved evapotranspiration estimates and captured peak growing-season conditions during which there were no Landsat acquisitions. The mission adopted both PT-JPL and DisALEXI to map evapotranspiration, which serves as the basis for the derived L4 products, such as Water Use Efficiency and the Evaporative Stress Index. ECOSTRESS also operationally provides a Priestley-Taylor potential evapotranspiration product, which has been demonstrated to be useful for water management agencies for spatial estimates of reference evapotranspiration.
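For readers unfamiliar with the Priestley-Taylor potential evapotranspiration product mentioned above, the sketch below shows the conventional Priestley-Taylor formula with the textbook coefficient α = 1.26 and FAO-56-style expressions for the slope of the saturation vapor pressure curve and the latent heat of vaporization; these are generic choices, not the parameterization used in the ECOSTRESS products.

```python
import math

# Minimal sketch of the standard Priestley-Taylor potential ET formula:
#   lambda * PET = alpha * (Delta / (Delta + gamma)) * (Rn - G)
# with the conventional alpha = 1.26. The expressions for Delta and the latent
# heat of vaporization follow FAO-56 conventions; none of these values are
# taken from the ECOSTRESS product definitions.

def priestley_taylor_pet(rn_mj_m2_d, g_mj_m2_d, t_air_c, gamma_kpa_c=0.066, alpha=1.26):
    """Potential ET in mm/day from daily net radiation and ground heat flux
    (MJ m-2 d-1) and mean air temperature (deg C)."""
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))   # kPa
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2                  # kPa per deg C
    lam = 2.501 - 0.002361 * t_air_c                              # MJ kg-1
    return alpha * (delta / (delta + gamma_kpa_c)) * (rn_mj_m2_d - g_mj_m2_d) / lam

print(priestley_taylor_pet(rn_mj_m2_d=18.0, g_mj_m2_d=1.0, t_air_c=25.0))  # ~6.5 mm/day
```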

Our model evaluation work suggests that PT-JPL’s evapotranspiration estimates could potentially be further improved over irrigated croplands in agricultural regions with ample evapotranspiration measurements over diverse crop types. On the other hand, our spatial estimates show that EToF, which is analogous to ECOSTRESS’s Evaporative Stress Index, varies by crop type and within the Central Valley for the same crop types. Users of the Evaporative Stress Index product over the Central Valley should also account for the threshold of water stress dependencies and variability by crop type and other factors such as orchard age and salinity. Although overall our refined evapotranspiration estimation approach has similar performance to that of more complex models such as DisALEXI, there are still a few factors that can cause errors in our estimates. PT-UCD is a single-source approach. We noticed an overestimation of rice net radiation when the field was flooded, probably due to the challenges posed by heat storage in the water column and the effect of a wet surface, and an underestimation of net radiation over the two AmeriFlux corn sites between April and July of each year. We also found a larger uncertainty in estimating actual Priestley-Taylor coefficients for corn during the dormant season between January and April. Although evapotranspiration during the nongrowing season accounts for a small fraction of total annual water use, explicit consideration of the soil and plant components of the energy balance is expected to improve the accuracy of evapotranspiration estimates. The energy balance closure issue, for example, has been well recognized for eddy covariance measurements. Baldocchi et al. and Eichelmann et al., for example, conducted several analyses at AmeriFlux sites in the Sacramento-San Joaquin Delta and found an energy balance closure, that is, the ratio of the sum of sensible and latent heat flux to the residual of net radiation minus ground heat flux and storage, of 79.3% at an alfalfa site and 71% at a rice site.
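The closure percentages quoted above are the ratio of turbulent heat fluxes to available energy; a minimal sketch of that calculation follows, with placeholder flux values.

```python
# Energy balance closure as described above: the ratio of the sum of sensible (H)
# and latent (LE) heat flux to the available energy, net radiation (Rn) minus
# ground heat flux (G) and storage (S). Flux values below are placeholders.

def closure_ratio(h, le, rn, g, s=0.0):
    return (h + le) / (rn - g - s)

print(f"closure = {closure_ratio(h=120.0, le=280.0, rn=520.0, g=15.0, s=10.0):.1%}")
```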

These studies suggest that incomplete storage calculation, rather than underestimation of eddy covariance flux measurements, plays a major role in the lack of energy balance closure at their sites in the Sacramento-San Joaquin Delta. Therefore, we did not perform a closure correction for the eddy covariance measurement sites in our data set. Additional uncertainties can be introduced by the varying footprint size of the flux towers and the scale mismatch between 30 m ET and tower measurements. We compared the annual evapotranspiration value extracted from a single Landsat pixel collocated with measurement sites and the mean of the values within the surrounding larger areas. The differences in estimates between a 90 × 90 m window and the single pixel were small, with the largest difference of −3.2% occurring at site 18 in 2016. The differences generally increased with a larger footprint, depending on the heterogeneity of the areas; for example, a 510 × 510 m window at site 16 in 2016 had the largest mean relative difference of −10.3% from the corresponding center pixel value. Other studies also suggested that a rigorous footprint approximation is needed in future studies to make a fair comparison with field measurements. For example, Kljun et al.’s flux footprint model could be implemented at flux tower sites to determine the weight and extent of the pixel window. Compared to those driven by MODIS with daily revisits, the evapotranspiration estimates derived from Landsat have the benefit of capturing finer spatial details, which is critical for water use assessment over a heterogeneous landscape. Landsat’s 16-day repeat cycle, however, can potentially lead to uncertainty in water use monitoring, especially during the rainy season or during rapid plant growth and senescence stages. In this study, the uncertainty due to the temporal interpolation of the missing days was found to be minimal overall, likely because the rainy season coincides with dormancy or the very beginning or ending of crop growth for the majority of crops in the Central Valley due to its Mediterranean climate.
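The footprint comparison described above reduces to averaging the block of 30 m pixels centered on the tower and expressing the difference from the collocated center pixel as a percentage; the sketch below illustrates this with a synthetic ET field, so the values and window sizes are purely illustrative.

```python
import numpy as np

# Sketch of the footprint-window comparison described above: mean annual ET over
# an N x N block of 30 m pixels centered on the tower versus the single
# collocated pixel, expressed as a relative difference. The ET field is synthetic.

def window_vs_center(et_grid, row, col, window_pixels=3):
    half = window_pixels // 2
    block = et_grid[row - half: row + half + 1, col - half: col + half + 1]
    center = et_grid[row, col]
    return (block.mean() - center) / center * 100.0  # percent

rng = np.random.default_rng(0)
et = 900.0 + 30.0 * rng.standard_normal((17, 17))            # mm/yr, synthetic
print(window_vs_center(et, row=8, col=8, window_pixels=3))   # 90 m x 90 m window
print(window_vs_center(et, row=8, col=8, window_pixels=17))  # 510 m x 510 m window
```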

There were situations, for example right after irrigation or right after harvesting for crops that undergo multiple harvests, when a relatively large error was introduced by estimates interpolated from observations a few days apart. Future work is also needed to increase the temporal resolution of water use estimates by fusing Sentinel-2 A/B satellite observations every 5 days with Landsat data. A sophisticated data fusion technique will also improve the accuracy of evapotranspiration monitoring and assessment by taking advantage of complementary observations from multiple sensors. The robustness of our optimization approach partly relies on the availability of multiple field measurements for diverse crop types across the Central Valley. The automatic workflow developed here allows for continuous improvement of the optimization accuracy by taking advantage of the increasingly available crop evapotranspiration measurements from the growing deployment of both research-grade and commercial surface renewal stations in the state. Although the Priestley-Taylor parameters in this study were tailored for California’s cropland, our data-model integration framework is generalizable to other regions. Once recalibrated and tested with local field data, the PT-UCD approach can be applied to monitor daily evapotranspiration and assess water use at various scales over regions besides the Central Valley.

Organophosphate (OP) pesticides are commonly used insecticides that inhibit acetylcholinesterase enzyme function and have been associated with poorer neurodevelopment in children. Children are particularly susceptible to the adverse impacts of pesticides, and those living in agricultural areas may be exposed via multiple pathways, including diet, drinking water, residential use, drift from agricultural applications, and take-home exposures. Assessing exposure to OP pesticides is difficult due to their short biological half-lives and rapid excretion from the body. Dialkylphosphate (DAP) metabolites, the most commonly used biomarker to characterize OP exposure in epidemiologic studies, have biological half-lives of less than 30 min to >24 h, depending on the parent OP and route of exposure. Measurements of metabolites or parent chemicals in 24-hr urine samples are considered the “gold standard” for assessing daily exposure to pesticides and other environmental chemicals that are excreted in urine. However, factors such as cost and participant burden make it difficult to collect 24-hr samples. While collection of spot urine samples is a convenient alternative, research suggests that analysis of biomarkers with short half-lives, including DAPs, in spot samples may result in exposure misclassification due to higher inter- and intra-individual variability. First morning void (FMV) urine samples may reduce exposure misclassification, as they are more concentrated and reflect a longer period of accumulation. Few studies have assessed how well either random spot or FMV urine samples approximate internal pesticide dose estimated from 24-hr samples, information that is critical for risk assessment and pesticide regulation.

Estimating dose based on metabolite concentrations from spot samples also requires an accurate measure of urinary dilution and total daily urinary output volume. In adults, 24-hr urinary metabolite excretion has been estimated from spot urine samples by adjusting for creatinine excretion as an index of total daily urinary output volume. However, few studies have evaluated the validity of this approach in children. Due to likely differences in children’s urinary creatinine excretion from factors including age, sex, muscle mass, body mass index, diet, and fluid intake, adjusting for creatinine to estimate toxicant doses in children may introduce unknown sources of variability. Although not used as widely as creatinine correction, some evidence suggests that adjusting for specific gravity may be a more robust method to account for urinary output among children. The U.S. Environmental Protection Agency is mandated by the 1996 Food Quality Protection Act to review and establish health-based standards for pesticide residues in foods and examine the cumulative health effects of exposure to mixtures of pesticides that share a common mechanism of toxicity, with prioritization of pesticides that may pose the greatest risk, such as OPs. The U.S. EPA has selected the Relative Potency Factor (RPF) method to conduct hazard and dose-response assessments. RPFs are calculated as the ratio of the toxic potency of a given chemical, determined by the oral benchmark dose (BMD10) value based on 10% brain cholinesterase inhibition, to that of an index chemical. Individual OP doses converted to index chemical toxicity equivalent doses can be summed to create cumulative OP dose equivalents. In this study, we measured DAP metabolites in spot and 24-hr void urine samples collected from 25 preschool-aged children over 7 consecutive days. The objective of this analysis was to evaluate the validity of using volume- and creatinine-adjusted FMV and non-FMV spot urine samples to estimate total 24-hr OP dose in children according to the 2006 U.S. EPA Organophosphorus Cumulative Risk Assessment guidelines. The results of these analyses have implications for policy and risk assessments and could serve as a case study for other non-persistent toxicants measured in urine. Subject recruitment and procedures have been described previously. Briefly, we enrolled a convenience sample of 25 children recruited from clinics serving low-income families in the Salinas Valley, California. Eligible children were 3–6 years old, in good health with no history of diabetes or renal disease, toilet trained and free of enuresis, and had English- or Spanish-speaking mothers who were ≥18 years old. Sampling occurred in March and April 2004. The University of California at Berkeley Committee for the Protection of Human Subjects approved all study procedures and parents provided written informed consent. Each family participated in the study over 7 consecutive days. On the first day, study staff measured the participating child’s height and weight, provided the supplies needed to collect urine samples, including specimen trays and jars, gloves, collection jars with blank labels, a small refrigerator, and two 24-hr sampling record forms, and instructed the parents and child on how to collect, record, and store samples. Urine voids were collected either directly into a collection jar or into a sterile pre-cleaned specimen tray placed over the toilet, which was then transferred by parents into the collection jar.
Figure 1 shows the timing of study activities. On spot-sampling days, families collected a single void at their convenience, recording the time of collection on the jar labels and identifying the sample as an FMV or non-FMV spot sample. On 24-hr sampling days, families were instructed to collect all urine voids from the 24-hr period as separate specimens, including the child’s FMV, all daytime and evening spot voids, and the FMV of the following day, if it occurred within the 24-hr sampling period. Participants were instructed to record the timing of all voids, including missed voids, on the 24-hr sampling record form. We limited the current analyses to samples collected on 24-hr sampling days.
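The Relative Potency Factor aggregation outlined above converts each OP dose to index-chemical toxicity equivalents and sums them; the sketch below is an illustrative calculation only, with made-up doses and RPF values rather than the figures in the 2006 U.S. EPA cumulative risk assessment.

```python
# Sketch of the Relative Potency Factor aggregation described above: each OP dose
# is converted to index-chemical toxicity equivalents by multiplying by its RPF,
# and the equivalents are summed into a cumulative OP dose. The doses and RPF
# values below are made up for illustration, not the 2006 U.S. EPA figures.

def cumulative_op_dose(doses_ug_per_kg_day, rpfs):
    return sum(dose * rpfs[op] for op, dose in doses_ug_per_kg_day.items())

doses = {"chlorpyrifos": 0.12, "malathion": 0.40}   # ug/kg/day, hypothetical
rpfs = {"chlorpyrifos": 1.0, "malathion": 0.05}     # hypothetical potencies
print(cumulative_op_dose(doses, rpfs))              # index-chemical equivalents
```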

The maps have a range of potential applications, from climate science to forest ecology

Accurate, high-resolution albedo data are important for modeling surface meltwater runoff on the ice sheet. Contributors to the project include UCLA doctoral students Matthew Cooper and Lincoln Pitcher, UCLA postdoctoral researcher Kang Yang, Rutgers University doctoral student Sasha Leidman and Aberystwyth University doctoral student Johnny Ryan.

Researchers at UC Berkeley, including professor Sally Thompson’s group in the Department of Civil and Environmental Engineering, are using UAS as a novel thermal sensing platform. Working with robotics experts at the University of Nebraska, Lincoln, the team tested an unmanned system capable of lowering a temperature sensor into a water body to record temperature measurements throughout the water column, which is useful in, for instance, identifying habitat zones for aquatic species. Initial field experiments that compared in situ temperature measurements with those made from the UAS platform indicate that UAS may support improved high-resolution 3-D thermal mapping of water bodies in a manageable time frame sufficient to resolve diurnal variations. More recent work has confirmed the viability of mapping thermal refugia for cold-water fish species from this platform.

UC Berkeley professor Todd Dawson and his redwood science group are using UAS-mounted multi-spectral cameras to create 3-D maps of giant sequoias (trunks, branches and foliage) at higher resolution and with far less labor than was previously possible. The maps were developed through a partnership with Parrot Inc. The company builds the cameras and UAS used in the research, and partners to manage the software, Pix4D, which was specially designed to analyze the images.

Knowing the total leaf area and above-ground biomass of a tree and the structure of its canopy, for instance, allows researchers to calculate daily carbon dioxide and water uptake, important variables in assessing the interactions between trees, soil and atmosphere as the climate changes. A high-resolution map also yields information about a tree’s influences on its immediate environment: how much leaf litter falls to the forest floor, for instance, and to what degree shade from the canopy influences the microclimate around the tree, or the habitats in it. Another application: a precise map of a tree also provides a good estimate of how much carbon is stored in it as woody biomass. This information, in turn, can be combined with information from coarser methods of forest mapping, such as LIDAR, to improve estimates of the carbon stored in a large forested area. Mapping every tree in a forest at a high level of detail isn’t practical. But such maps of a sample of trees can provide good correlations between carbon mass and a variable like tree height, which LIDAR can measure to a high degree of accuracy, yielding a better estimate of the total amount of carbon in the forest.

The forests of California are threatened by drought and disturbance. Bark beetle infestations in the state’s coniferous forests are a particularly large concern considering recent drought conditions, the threat of potential forest fires, and climate change. There is a need both for better methods for early detection of beetle infestation and for visualization tools to help make the case for investments in suppression. High spatial resolution multi-spectral UAS imagery and 3-D data products have proven to be effective for monitoring spectral and structural dynamics of beetle-infested conifers across a variety of ecosystems. Sean Hogan of the UC ANR IGIS program is testing the use of machine learning algorithms applied to UAS imagery to efficiently classify early beetle infestations of ponderosa pines in California’s Sierra Nevada foothills.

Preliminary results indicate that even imagery from a basic GoPro RGB camera can be used to accurately detect bark beetle–induced stress in these trees. Over 34.1 million of California’s 101 million acres are classified as grazed rangelands. The cattle industry contributes significantly to the state’s economy, and the proper management of these rangelands is important for many reasons, including forage production, preservation of natural habitats and the maintenance of downstream water quality. High-quality, timely information on rangeland conditions can guide management decisions, such as when, where and how intensively to graze livestock. UAS enable high-resolution aerial imagery of rangelands to be collected at much greater speed and lower cost than was previously possible. Translating that imagery into information that is useful to range managers, however, remains a challenge. A UC ANR team (including GIS and remote sensing academic coordinator Sean Hogan, UC Davis–based rangeland and restoration specialists Leslie Roche, Elise Gornish and Kenneth Tate, assistant specialist Danny Eastburn and Yolo County livestock and natural resources advisor Morgan Doran) is working on this problem from several angles at research sites in Napa County’s Vaca Mountains and in Lassen and Modoc counties.

Every year, the U.S. government authorizes the U.S. Department of Agriculture to spend tens of billions of taxpayer dollars to support various agricultural and nutrition programs. Two in particular provoke both ire and unqualified support among elected representatives and other observers: the Food Stamp Program, which is operated by the Food and Nutrition Service, and the commodity support program, which is operated by the Farm Service Agency. This is partly because the amounts spent are significant, but also because the potential impacts of these programs are questionable and extremely difficult to evaluate.

The Food Stamp Program is designed to augment the food budgets of qualified recipients, allowing them to purchase more food; the commodity support program ensures that commodity growers receive no less than a certain minimum price for their crops, even though market prices often fall significantly below that “price floor.” U.S. citizens and some permanent resident aliens are qualified to participate in the FSP if they meet the following criteria: a gross monthly income below 130% of the federal poverty level, and a net monthly income below 100% of the federal poverty level; less than $2,000 in “countable resources,” such as a bank account; the ability to meet work requirements for able-bodied adults; and the ability to provide a Social Security number for all household members. In 2003, the USDA distributed a total of $21.4 billion in food stamp benefits to a monthly average of 9.2 million low-income households; each received an average of $195 per month. Although the food stamp program has been shown to marginally increase the quantity of food consumed by participants, a review of the dietary impacts of U.S. food assistance programs found that “there is no convincing body of evidence that [the FSP] improves the overall quality of the recipients’ diet, although there is some indication that it has increased the intake of some nutrients.” While the correlation between income level and fruit and vegetable intake has not been examined, the proportion of consumers who eat at least five servings of fruits and vegetables daily is lower among black than white Americans; likewise, those with less than a high school education consume fewer servings than college graduates. Essentially all Americans, and not just food stamp recipients, would benefit from purchasing and consuming more healthful food products. Increasing the purchasing power of low-income Americans, however, is of particular importance because calories are most cheaply available in the form of added fats and sugars, while nutrient-dense foods are often significantly more expensive by comparison. Besides not improving participants’ dietary quality, the food stamp program also doesn’t serve those eligible to receive benefits particularly well: in 2003, only 61% of those eligible nationwide participated in the program, and in California only 39% of those eligible participated. Low participation rates represent, in the case of California alone, between $650 million and $1.49 billion in lost federal dollars annually. There are several explanations for these participation rates. Potential food stamp recipients often lack knowledge about eligibility criteria. In addition, the application process is notoriously difficult and dehumanizing, and the benefits are often perceived as not being worth the hassle. There is also persistent, and often well-founded, fear among immigrant communities that undocumented family members will be exposed to the U.S. Immigration and Naturalization Service by the application process for eligible individuals, such as U.S.-born children. California’s large immigrant community is an important factor contributing to the state’s low food stamp participation rate.

Direct commodity support payments are subsidies paid directly by the USDA-FSA to growers of crops such as corn, wheat, cotton, soybeans and rice to offset low prices in the marketplace.
In all likelihood, these price supports do not significantly affect the retail price of food products, because only a small portion of that price is attributable to the cost of subsidized ingredients.

For example, the cost of high-fructose corn syrup in Coca-Cola or of corn in a box of Corn Chex represents only about 1% or less of the retail price. However, subsidies depress commodity market prices by raising production levels above demand. By keeping commodity prices artificially low, price supports also encourage the use of commodities in processed foods and as animal feed. Because subsidy payments are directly linked to farm production levels and total farm revenues, the program also encourages overproduction. The program is popular among large-scale commodity growers, who can receive millions of dollars each year, and legislators eager to show support for American farmers. It was therefore surprising to many that in early 2005 President Bush proposed placing a cap on commodity support payments of $250,000 per grower. With the recent defeat of the Grassley-Dorgan amendment in the Senate, which would have established a $250,000 cap on payments, the question of whether such a cap will be established will have to wait until the debate on the 2007 Farm Bill begins in earnest. Direct commodity payments are enormous and highly concentrated among the largest and most profitable growers. For example, $107.3 billion was paid out between 1995 and 2003, with 87% of the $11.5 billion spent in 2003 going to the top 20% of recipients. Agricultural production in California is skewed heavily toward specialty crops such as fruits, vegetables and nuts, which do not qualify to receive direct payments. As a result, fewer California growers are eligible to receive commodity subsidies. In 2003, close to 20% did, mostly growers of rice, cotton and wheat; they received roughly 6%, or $672 million, of the U.S. total commodity payments in a similarly concentrated fashion.

The food stamp and commodity support programs illustrate that U.S. agricultural and nutrition policies are not specifically designed to promote health or good eating habits. A considerable proportion of commodity payments, for example, is directed to crops that are used primarily to produce calories in the form of added fats or sugars or as feed for livestock. What’s more, the bulk of these payments goes to very large growers of commodities that are overproduced to such an extent that subsidies are necessary to offset low market prices. Similarly, the food stamp program supplements the incomes of millions of low-income Americans so that they can afford to purchase an adequate amount of calories, but does very little to influence the nutritional quality of their diets. Unhealthful diets and inadequate fruit and vegetable intakes are the norm among most Americans, and diet-related chronic diseases such as diabetes, heart disease and obesity disproportionately affect low-income Americans. Making healthful foods more widely available and less expensive to consumers would help bring agriculture and nutrition policies into accord with public health goals, and would be good public policy. USDA Economic Research Service researchers recently highlighted the potential “unintended consequences” of policies to combat obesity, such as listing the number of calories on menus at fast-food restaurants or levying taxes on snack foods, and concluded that such policies would in all likelihood not cause consumers to choose healthier foods. These researchers also examined the relative importance of economic and behavioral factors in influencing fruit and vegetable choices.
Research has demonstrated that cost significantly influences consumer food choices, especially among low-income consumers, and that retail price reductions are an effective method to increase the purchase of more healthful foods. There is no question that the food stamp and commodity support programs would distribute payments quite differently if the goals of both were explicitly to promote better eating habits among U.S. consumers. Increasing the level of benefits or expanding food stamp eligibility criteria is always a contentious and politically difficult issue.

There are significant opportunities in plant science research

The confluence of higher amounts of C and NO3− moving into a reduced zone could be the reason that the matrix surrounding the preferential flow channel has higher denitrification rates, while the regions further away from the preferential flow channel have lower amounts of microbially available C and NO3−. In contrast, residence times are too short in the channel to allow for reducing conditions to develop. The ability of the entire vadose zone to denitrify would depend on the overall surface area of preferential flow paths relative to the rest of the surrounding matrix in the zone of flooding. Overall, we find that low-permeability zones, alone or embedded within high-flow zones, demonstrate the highest denitrification rates across all soil profiles. Because the ERT column more closely approximates the heterogeneity of our agricultural field site, we use this column to demonstrate the impact of hydraulic loading and application frequency on nitrogen fate and dynamics. Simulated profiles of liquid saturation, NO3−, the NO3−:Cl− ratio and acetate for the simplified ERT stratigraphy under scenarios S2 and S3 are shown in Figures 9 and A3. It is interesting to note that AgMAR ponding under scenarios S2 and S3 resulted in fully saturated conditions persisting within the root zone only. In comparison, the 68 cm all-at-once application for scenario S1 resulted in fully saturated conditions occurring at even greater depths, down to 235 cm-bgs. This resulted in the NO3− front moving deeper into the subsurface, to depths of 450 cm-bgs under S1 compared to 150 cm-bgs for scenarios S2 and S3. Much lower concentrations of NO3− were found at 450 cm-bgs in scenarios S2 and S3 compared to S1.

Thus, larger amounts of water applied all-at-once led to NO3− being transported faster and deeper into the profile. Surprisingly, model results indicate 37% of NO3− was denitrified in scenario S1, while 34% and 32% of NO3− was denitrified in scenarios S2 and S3, respectively. For scenarios S2 and S3, denitrification was estimated to occur only within the root zone. This was confirmed by the NO3−:Cl− ratio, which did not show any reduction with depth for these scenarios. A reason for this could be that acetate was not estimated to occur below the root zone, preventing electron donors from reaching greater depths for denitrification to occur. In contrast, model results for S1 indicate that acetate was leached down to 235 cm-bgs, below the limiting layer. Overall, model results indicate that NO3− did not move as fast or as deep in scenarios S2 or S3; however, the ability of the vadose zone to denitrify was reduced when the hydraulic loading was decreased. The main reason for this was that breaking the application into smaller hydraulic loadings allowed O2 concentrations to recover to background atmospheric conditions faster than under the larger all-at-once application in scenario S1. In fact, the O2 concentration differed slightly between S2 and S3. Because O2 inhibits denitrification, we conclude that these conditions resulted in the differing denitrification capacity across application frequency and duration. In summary, we find that larger amounts of water applied all-at-once increased the denitrification capacity of the vadose zone while incremental application of water did not. However, NO3− movement to deeper depths was slower under S2 and S3.

Because initial saturation conditions impact nitrogen leaching, we also simulated the impact of wetter antecedent moisture, with 15% higher saturation levels than the base-case simulation, for the ERT profile. Simulated profiles of liquid saturation, NO3−, the NO3−:Cl− ratio and acetate for the simplified ERT stratigraphy under wetter conditions are shown in Figure 10. Model results demonstrate that the water front moved faster and deeper into the soil profile under initially wetter conditions for all three scenarios.

Within the shallow vadose zone, across AgMAR scenarios, O2 concentrations were similar initially but began differing at early simulated times, with lower O2 under wetter antecedent moisture conditions than in the base-case simulation. In addition, both oxygen and nitrate concentrations showed significant spatial variation across the modeled column. Notably, nitrate concentrations were 166% higher in the preferential flow channel compared to the sandy loam matrix under wetter conditions, while only a 161% difference was observed under the base-case simulation. Nitrate movement followed a pattern similar to water flow, with NO3− reaching greater depths under the wetter antecedent moisture conditions. Under S1, however, at 150 cm-bgs, NO3− decreased more quickly under the wetter antecedent moisture conditions due to biochemical reduction of NO3−, as evidenced by the decrease in the NO3−:Cl− ratio, as well as by dilution from the incoming floodwater. Under the wetter antecedent moisture conditions, 39%, 31%, and 30% of NO3− was denitrified under S1, S2, and S3, respectively. For S1, where water was applied all at once, more denitrification occurred under the wetter antecedent moisture conditions; however, the same was not true of S2 and S3, where water applications were broken up over time. This could be due to the hysteresis effect of subsequent applications of water occurring at higher initial moisture contents, allowing the NO3− to move faster and deeper into the profile without the longer residence times needed for denitrification to occur. Thus, wetter antecedent moisture conditions prime the system for increased denitrification capacity when water is applied all at once and sufficient reducing conditions are reached; however, this is counteracted by faster movement of NO3− into the vadose zone.

Simulations from our study demonstrate that low-permeability zones such as silt loams allow reducing conditions to develop, thereby leading to higher denitrification in these sediments as compared to high-permeability zones such as sandy loams. In fact, the homogeneous silt loam profile showed the maximum amount of denitrification across all five stratigraphic configurations.
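The NO3−:Cl− ratios discussed above exploit chloride as a conservative tracer: a decline in the ratio relative to the applied water indicates nitrate removal rather than dilution. The sketch below illustrates that diagnostic with placeholder concentrations; it is a back-of-the-envelope check, not the reactive transport model used in the simulations.

```python
# Sketch of the NO3-:Cl- tracer diagnostic discussed above: chloride is
# conservative, so any drop in the NO3-:Cl- ratio relative to the applied water
# reflects nitrate removal (denitrification) rather than dilution. All
# concentrations below are placeholders.

def fraction_denitrified(no3_obs, cl_obs, no3_applied, cl_applied):
    ratio_obs = no3_obs / cl_obs
    ratio_in = no3_applied / cl_applied
    return max(0.0, 1.0 - ratio_obs / ratio_in)

# e.g. pore water at depth vs. AgMAR flood water (mg/L, hypothetical)
print(fraction_denitrified(no3_obs=12.0, cl_obs=40.0, no3_applied=25.0, cl_applied=40.0))
```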

Furthermore, the presence of a silt loam channel in a column dominated by sandy loam increased the capacity of the column to denitrify by 2%. Conversely, adding a sandy loam channel into a silt loam matrix decreased the capacity of the column to denitrify by 2%. These relatively simple heterogeneities exemplify how hot spots in the vadose zone can have a small but accumulating effect on denitrification capacity. Note that differences in denitrification capacity may be much greater than reported here because of the increased complexity and heterogeneity of actual field sites compared to our simplified modeling domains. Another observation of interest for silt loams is the prominence of chemolithoautotrophic reactions and Fe cycling observed in these sediments. In comparison, sandy loam sediments showed persistence and transport of NO3− to greater depths. A reason for this is that the oxygen concentration was much more dynamic in sandy loams, rebounding to oxic conditions more readily than in silt loams, even deep into the vadose zone. Dutta et al. found similar re-aeration patterns in a 1 m column experiment in a sand-dominated soil, with re-aeration occurring quickly once drying commenced. Even with the presence of a limiting layer, defined by lower pore gas velocities and higher carbon concentration, a sandy loam channel acted as a conduit of O2 into the deep vadose zone, maintaining a relatively oxic state and thus decreasing the ability of the vadose zone to denitrify. In systems with higher DOC loadings to the subsurface, oxygen consumption may proceed at higher rates, creating sub-oxic conditions in the recharge water and more readily establishing reducing conditions favorable to denitrification in the subsurface. We note here that microbial growth, which was not modeled in this study, could also affect the rates of O2 consumption and re-aeration, which could lead to underestimation of O2 consumption. Overall, denitrification capacity across different lithologies was shown to depend on the tight coupling between transport, biotic reactions and the cycling of Fe and S through chemolithoautotrophic pathways.

Under large hydraulic loadings, overall denitrification was estimated to be the greatest compared to the lower hydraulic loading scenarios. The main reason for the higher denitrification capacity was the significant decline in O2 concentration estimated for this scenario, whereas such conditions could not be maintained below one meter with lower hydraulic loadings under scenarios S2 and S3. However, nitrate was also transported deeper into the column under S1 as compared to S2 or S3. Tomasek et al. found the reverse in a floodplain setting, where intermittent inundation with flood water, comparable to our S2 and S3 contexts, resulted in higher rates of denitrification in the zone that was always inundated, due to priming of the microbial community and pulse releases of substrates and electron donors. Future studies examining the impact of AgMAR on denitrification should include processes such as mineralization to see if the same behavior would be observed. It seems that there may exist a threshold hydraulic loading and frequency of application that could result in anoxic conditions and therefore promote denitrification within the vadose zone for different stratigraphic configurations, although this was not further explored in this study.
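The statement that O2 inhibits denitrification is often represented in reactive transport codes with a Monod-type rate law carrying an oxygen inhibition term. The sketch below shows that generic formulation with illustrative constants; it is not the parameterization used in the simulations described above.

```python
# Generic Monod-type denitrification rate with an oxygen inhibition term, as is
# common in reactive transport models. Constants are illustrative only and are
# not the parameterization used in the simulations described above.

def denitrification_rate(no3, doc, o2, vmax=1.0, k_no3=0.5, k_doc=2.0, k_i_o2=0.03):
    """Rate in the same units as vmax; concentrations in consistent units (e.g. mmol/L)."""
    return (vmax
            * no3 / (k_no3 + no3)        # electron acceptor (nitrate) limitation
            * doc / (k_doc + doc)        # electron donor (DOC/acetate) limitation
            * k_i_o2 / (k_i_o2 + o2))    # inhibition by dissolved oxygen

print(denitrification_rate(no3=1.5, doc=0.8, o2=0.25))   # oxic, strongly inhibited
print(denitrification_rate(no3=1.5, doc=0.8, o2=0.001))  # near-anoxic, faster
```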

In another study, Schmidt et al. found a threshold infiltration rate of 0.7 m d-1 for a three-hectare recharge pond located in the Pajaro Valley of central coastal California, such that no denitrification occurred once this threshold was reached. For our simulations, we used a fixed, average infiltration rate of 0.17 cm hr-1 for both the all-at-once and incremental AgMAR scenarios; application rates can be expected to be more varied under natural field settings. Our results further indicate that the all-at-once, higher hydraulic loading, in addition to causing increased saturation and a decrease in O2, resulted in leaching of DOC to greater depths than the lower, incremental hydraulic loading scenarios. Akhavan et al. (2013) found similar results for an infiltration basin, wherein 1.4% higher DOC levels were reported at depths down to 4 m when hydraulic loading was increased. Because organic carbon is typically limited to the top 1 m of soil, leached DOC that has not been microbially processed could be an important source of electron donors for denitrification at depth. Systems that are already rich in DOC within the subsurface, such as floodplains, reactive barriers in MAR settings, or potentially, organically managed agroecosystems, are likely to be more effective in denitrifying, and thus attenuating, NO3–. This finding can also be exploited in agricultural soils by using cover crops and other management practices that increase soluble carbon at depth and thereby remove residual N from the vadose zone. While lower denitrification capacity was estimated for scenarios S2 and S3, an advantage of incremental application was that NO3– was not transported to greater depths; higher NO3– concentrations remained confined to the root zone. If NO3– under these scenarios stays closer to the surface, where microbial biomass is higher and where roots, especially in deep-rooted perennial systems such as almonds, can access it, less NO3– may ultimately be lost to groundwater. While there is potential for redistribution of this NO3– via wetting and drying cycles, future modeling studies should explore multi-year AgMAR management strategies combined with root dynamics to understand N cycling and loading to groundwater under long-term AgMAR. Simulation results indicate that wetter antecedent moisture conditions cause water and NO3– to move deeper into the domain than in the drier base-case simulation. This finding has been noted previously in the literature; however, disagreement exists on the magnitude and extent to which antecedent moisture conditions affect water and solute movement, which is highly dependent on vadose zone characteristics. For example, in systems dominated by macropore flow, higher antecedent soil moisture increased the depth to which water and solutes were transported. In a soil with textural contrast, where hydraulic conductivity decreases sharply between the topsoil and subsoil, drier antecedent moisture conditions caused water to move faster and deeper into the profile than wetter antecedent moisture conditions. In our system, where a low-permeability layer lies above a high-permeability layer, the reverse trend was observed. Thus, stratigraphic heterogeneity and antecedent moisture conditions interact tightly to affect both NO3– transport and cycling in the vadose zone, and this coupling should be considered when designing AgMAR management strategies to reduce NO3– contamination of groundwater.
Furthermore, dry and wet cycles affect other aspects of the N cycle that were not included in this study.
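As a closing unit check on the two application rates quoted above (the 0.7 m d-1 threshold reported by Schmidt et al. and the fixed 0.17 cm hr-1 rate used in our simulations), the short sketch below converts both to the same units; the variable names are ours and the comparison is purely arithmetic.

```python
# Hedged sketch: convert the simulated AgMAR infiltration rate to m/d and
# compare it with the Schmidt et al. threshold above which denitrification
# was suppressed. Only rates quoted in the text are used.

agmar_rate_cm_per_hr = 0.17
agmar_rate_m_per_day = agmar_rate_cm_per_hr / 100.0 * 24.0  # ~0.041 m/d

threshold_m_per_day = 0.7  # Schmidt et al. threshold

print(agmar_rate_m_per_day)                        # ~0.041 m/d
print(threshold_m_per_day / agmar_rate_m_per_day)  # ~17x below the threshold
```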

A total of 1,237 putative transcription factors were identified in the L. decemlineata predicted proteome.

The heart of new agricultural paradigms for a hotter and more populous world must be systems that close the loop of nutrient flows from microorganisms and plants to animals and back, powered and irrigated as much as possible by sunlight and seawater. This has the potential to decrease the land, energy, and freshwater demands of agriculture, while at the same time ameliorating the pollution currently associated with agricultural chemicals and animal waste. The design and large-scale implementation of farms based on nontraditional species in arid places will undoubtedly pose new research, engineering, monitoring, and regulatory challenges, with respect to food safety and ecological impacts as well as control of pests and pathogens. But if we are to resume progress toward eliminating hunger, we must scale up and further build on the innovative approaches already under development, and we must do so immediately.

The Colorado potato beetle, Leptinotarsa decemlineata (Say 1824), is widely considered one of the world’s most successful globally invasive insect herbivores, with costs of ongoing management reaching tens of millions of dollars annually and projected costs, if unmanaged, reaching billions of dollars. This beetle was first identified as a pest in 1859 in the Midwestern United States, after it expanded from its native host plant, Solanum rostratum, onto potato. As testimony to the difficulty in controlling L. decemlineata, the species has the dubious honor of starting the pesticide industry, when Paris Green (copper acetoarsenite) was first applied to control it in the United States in 1864. Leptinotarsa decemlineata is now widely recognized for its ability to rapidly evolve resistance to insecticides, as well as to a wide range of abiotic and biotic stresses, and for its global expansion across 16 million km² to cover the entire Northern Hemisphere within the 20th century.

Over the course of 150 years of research, L. decemlineata has been the subject of more than 9,700 publications ranging from molecular to organismal biology across the fields of agriculture, entomology, molecular biology, ecology, and evolution. To be successful, L. decemlineata evolved to exploit novel host plants, to inhabit colder climates at higher latitudes, and to cope with a wide range of novel environmental conditions in agricultural landscapes. Genetic data suggest the potato-feeding pest lineage directly descended from populations that feed on S. rostratum in the U.S. Great Plains. This beetle subsequently expanded its range northwards, shifting its life history strategies to exploit even colder climates, and steadily colonized potato crops despite substantial geographical barriers. Leptinotarsa decemlineata is an excellent model for understanding pest evolution in agroecosystems because, despite its global spread, individuals disperse over short distances and populations often exhibit strong genetic differentiation, providing an opportunity to track the spread of populations and the emergence of novel phenotypes. The development of genomic resources in L. decemlineata will provide an unparalleled opportunity to investigate the molecular basis of traits such as climate adaptation, herbivory, host expansion, and chemical detoxification. Perhaps most significantly, understanding its ability to evolve rapidly would be a major step towards developing sustainable methods to control this widely successful pest in agricultural settings. Given that climate is thought to be the major factor structuring the range limits of species, the latitudinal expansion of L. decemlineata, spanning more than 40° of latitude from Mexico to northern potato-producing countries such as Canada and Russia, warrants further investigation.

Harsh winter climates are thought to present a major barrier to insect range expansions, especially near the limits of a species’ range. To successfully overwinter in temperate climates, beetles need to build up body mass, develop greater lipid stores, maintain a low resting metabolism, and respond to photoperiodic cues by initiating diapause. Although the beetle has been in Europe for less than 100 years, local populations have demonstrated remarkably rapid evolution in life history traits linked to growth, diapause, and metabolism. Understanding the genetic basis of these traits, particularly the role of specific genes associated with metabolism, fatty acid synthesis, and diapause induction, could provide important information about the mechanisms of climate adaptation. Although Leptinotarsa decemlineata has long served as a model for the study of host expansion and herbivory because of its ability to rapidly switch hosts, a major outstanding question is which genes and biological pathways are associated with herbivory in this species. While the >35,000 species of Chrysomelidae are well-known herbivores, most species feed on one or a few host species within the same plant family. Within Leptinotarsa, the majority of species feed on plants within the Solanaceae and Asteraceae, while L. decemlineata feeds exclusively on solanaceous species. It has achieved the broadest host range amongst its congeners, with hosts including potato, eggplant, silverleaf nightshade, horsenettle, bittersweet nightshade, tomato, and tobacco, and it exhibits geographical variation in the use of locally abundant Solanum species. Another major question is which genes underlie the beetle’s remarkable capacity to detoxify plant secondary compounds, and whether these are the same biological pathways used to detoxify insecticidal compounds. Solanaceous plants are considered highly toxic to a wide range of insect herbivores because they contain steroidal alkaloids and glycoalkaloids, nitrogen-containing compounds that are toxic to a wide range of organisms, including bacteria, fungi, humans, and insects, as well as glandular trichomes that contain additional toxic compounds.

In response to beetle feeding, potato plants upregulate pathways associated with terpenoid, alkaloid, and phenylpropanoid biosynthesis, as well as a range of protease inhibitors. A complex of digestive cysteine proteases is known to underlie L. decemlineata’s ability to respond to potato-induced defenses. There is evidence that larvae excrete and perhaps even sequester toxic plant-based compounds in the hemolymph. Physiological mechanisms involved in detoxifying plant compounds, as well as other xenobiotics, have been proposed to underlie pesticide resistance. To date, while the cornerstone of L. decemlineata management has been the use of insecticides, the beetle has evolved resistance to over 50 compounds and to all of the major classes of insecticides. Some of these chemicals have even failed to control L. decemlineata within the first year of release, and notably, regional populations of L. decemlineata have demonstrated the ability to evolve resistance to pesticides independently and at different rates. Previous studies have identified target site mutations in resistance phenotypes and a wide range of genes involved in metabolic detoxification, including carboxylesterase genes, cytochrome P450s, and glutathione S-transferase genes. To examine evidence of rapid evolutionary change underlying L. decemlineata’s extraordinary success in utilizing novel host plants and climates and in detoxifying insecticides, we evaluated structural and functional genomic changes relative to other beetle species, using whole-genome sequencing, transcriptome sequencing, and a large community-driven biocuration effort to improve predicted gene annotations. We compared the size of gene families associated with particular traits against existing genomes from related species, particularly those sequenced by the i5k project, an initiative to sequence 5,000 species of arthropods. While efforts have been made to understand the genetic basis of phenotypes in L. decemlineata, previous work has been limited to candidate gene approaches rather than comparative genomics. Genomic data can not only illuminate the genetic architecture of the phenotypic traits that enable L. decemlineata to remain an agricultural pest, but can also be used to identify new gene targets for control measures. For example, recent efforts have been made to develop RNAi-based pesticides targeting critical metabolic pathways in L. decemlineata. With its extensive wealth of biological knowledge and a newly released genome, this beetle is well positioned to be a model system for agricultural pest genomics and the study of rapid evolution.

A single female L. decemlineata from Long Island, NY, USA, a population known to be resistant to a wide range of insecticides, was sequenced to a depth of ~140x coverage and assembled with ALLPATHS, followed by assembly improvement with ATLAS. The average coleopteran genome size is 760 Mb, while most beetle genome assemblies have been smaller. The draft genome assembly of L. decemlineata is 1.17 Gb and consists of 24,393 scaffolds, with an N50 of 414 kb and a contig N50 of 4.9 kb. This assembly is more than twice the estimated genome size of 460 Mb, with gaps comprising 492 Mb, or 42%, of the assembly.
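For readers unfamiliar with the N50 statistic reported above, the following minimal Python sketch shows how it is typically computed from a set of contig or scaffold lengths; the toy lengths are invented for illustration and are unrelated to the L. decemlineata assembly.

```python
# Hedged sketch: N50 is the length L such that sequences of length >= L
# together contain at least half of the total assembly size.

def n50(lengths):
    """Return the N50 of a list of sequence lengths (bp)."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

# Toy example (bp):
print(n50([4_900, 2_000, 1_000, 800, 300]))  # -> 4900 for this toy set
```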

As this size might be driven by underlying heterozygosity, we also performed scaffolding with REDUNDANS, which reduced the assembly size to 642 Mb, with gaps reduced to 1.3% of the assembly. The REDUNDANS assembly increased the contig N50 to 47.4 kb; however, the number of scaffolds increased to 90,205 and the scaffold N50 declined to 139 kb. By counting unique 19 bp k-mers and adjusting for ploidy, we estimate the genome size at 816.9 Mb. Using just the small-insert 100 bp PE reads, average coverage was 24x for the ALLPATHS assembly and 26x for the REDUNDANS assembly. For all downstream analyses, the ALLPATHS assembly was used because of its greater scaffold length and smaller number of scaffolds. Automated annotation of the L. decemlineata genome using MAKER predicted 24,671 gene transcripts, with 93,782 predicted exons, which surpasses the 13,526–22,253 gene models reported in other beetle genome projects. This may be due in part to fragmentation of the genome, which is known to inflate gene number estimates. To improve our gene models, we manually annotated genes using expert opinion and additional mRNA resources. A total of 1,364 genes were manually curated and merged with the unedited MAKER annotations to produce an official gene set (OGS) of 24,850 transcripts, comprising 94,859 exons. A total of 12 models were curated as pseudogenes. The predicted number of TFs is similar to that of some beetles, such as Anoplophora glabripennis and Hypothenemus hampei, but substantially greater than that of others, such as Tribolium castaneum, Nicrophorus vespilloides, and Dendroctonus ponderosae. We assessed the completeness of the ALLPATHS and REDUNDANS assemblies, and of the OGS separately, using benchmarking sets of universal single-copy orthologs (BUSCOs) based on 35 holometabolous insect genomes, and by manually assessing the completeness and co-localization of the homeodomain transcription factor gene clusters in the ALLPATHS assembly. Using the reference set of 2,442 BUSCOs, the ALLPATHS genome assembly, the REDUNDANS genome assembly, and the OGS were 93.0%, 91.9%, and 71.8% complete, respectively. An additional 4.1%, 5.4%, and 17.9% of the BUSCOs were present but fragmented in each dataset, respectively. For the highly conserved Hox and Iroquois Complex clusters, we located and annotated complete gene models in the ALLPATHS genome assembly for all 12 expected orthologs, but these were split across six different scaffolds. All linked Hox genes occurred in the expected order and with the expected shared transcriptional orientation, suggesting that the current draft assembly is correct but incomplete. Assuming direct concatenation of scaffolds, the Hox cluster would span a region of 3.7 Mb, similar to the estimated 3.5 Mb Hox cluster of A. glabripennis. While otherwise highly conserved with A. glabripennis, we found a tandem duplication of Hox3/zen and an Antennapedia-class homeobox gene with no clear ortholog in other arthropods. We also assessed the ALLPATHS genome assembly for evidence of contamination using a Blobplot, which identified a small proportion of the reads as putative contaminants. We estimated a phylogeny among six coleopteran genomes using a conserved set of single-copy orthologs and compared the official gene set of each species to understand how gene families evolved along the branch representing Chrysomelidae. Leptinotarsa decemlineata and A. glabripennis are sister taxa, as expected for members of the same superfamily, Chrysomeloidea.
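Returning to the k-mer based genome size estimate described earlier in this passage, the sketch below illustrates the usual logic: divide the total number of counted k-mers by the depth of the main coverage peak, with an optional ploidy adjustment. The numbers and function name are placeholders, not the values used for L. decemlineata.

```python
# Hedged sketch of a k-mer based genome size estimate from a k-mer histogram
# summary. Inputs are illustrative placeholders.

def genome_size_from_kmers(total_kmers, peak_depth, ploidy_adjust=1.0):
    """Estimate genome size (bp) as total k-mers / coverage-peak depth.

    Set ploidy_adjust=0.5 if peak_depth corresponds to the heterozygous
    (half-depth) peak rather than the homozygous peak.
    """
    return total_kmers / peak_depth * ploidy_adjust

# e.g., 2.0e10 counted 19-mers with a homozygous coverage peak near 24x:
print(genome_size_from_kmers(total_kmers=2.0e10, peak_depth=24))  # ~8.3e8 bp (~833 Mb)
```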
We found 166 rapidly evolving gene families along the L. decemlineata lineage, 142 of which are rapid expansions and the remaining 24 rapid contractions. Among all branches of our coleopteran phylogeny, L. decemlineata has the highest average expansion rate, the highest number of genes gained, and the greatest number of rapidly evolving gene families. Examination of the functional classification of rapidly evolving families in L. decemlineata indicates that a subset of families is clearly associated with herbivory. The peptidases, comprising several gene families that play a major role in plant digestion, displayed a significant expansion in gene number. While olfactory receptor gene families have rapidly contracted, subfamilies of odorant binding proteins and gustatory receptors have grown. The expanded gustatory receptor subfamilies are associated with bitter receptors, likely reflecting host plant detection of nightshades.