The total percent dieback was assessed at each site as a measure of canopy health

Infection and dieback severity varied considerably across the landscape; however, there is some evidence to suggest that populations at lower elevations may be at highest risk for severe dieback, whether due to increased water stress, close proximity to fungal inoculum sources, or both. Additionally, shrubs located on southwest-facing slopes may be more vulnerable because of increased sun exposure and thus greater environmental stress. Management efforts may therefore want to focus on these areas when this region experiences future drought. Finally, although extreme dieback was recorded throughout the study, none of the observed shrubs died. This may be the result of overall physiological resilience and the ability of adult shrubs to allocate resources to keep portions of the canopy alive. It could also be that the region's slightly more mesic climate offers a buffer that prevented shrubs from reaching their mortality thresholds. More research is needed to identify these exact mechanisms and thresholds in A. glauca. Collectively, the results of this dissertation work provide valuable knowledge on the severe dieback of an important chaparral shrub during a historic drought, with potentially costly ecological and economic consequences. Additionally, the data I present provide insight into the scale and progression of A. glauca dieback in a chaparral system and into potential patterns of future dieback in the face of predicted climate change. Future research that seeks to further resolve the landscape and environmental variables contributing to plant stress would help in identifying these patterns.

Heterogeneity and rugged topography across the landscape, while likely beneficial for the resilience of regional A. glauca populations during drought, present significant challenges for on-the-ground monitoring. Out of necessity for safe access, many of the plants surveyed were located on the outer boundaries of stands, where edge effects may have been a factor. Monitoring intact, undisturbed stands using drones would yield valuable additional insight into the extent of disease deeper within stands, as well as in stands on steep terrain or outside of normal visual range. The challenges of working in rugged landscapes covered in impenetrable vegetation highlight the need for using and refining remote sensing technologies, such as drone imaging, Light Detection and Ranging (LiDAR), and hyperspectral imaging, as monitoring tools. Large-scale, long-term monitoring using these tools would allow researchers to retrieve data in areas that have previously been inaccessible, while also gaining a larger-scale understanding of drought impacts. These tools will ultimately enable future studies to reveal more nuanced patterns across the landscape and between years of varying climatic conditions.

Plant pest and disease outbreaks play a major role in shaping ecosystems around the world. Outbreaks can alter ecosystem structure and function, often with substantial consequences. Over the past 200 years, pest and disease outbreaks have increased due to the mass exchange of biological materials through global trade and a rise in unusual climate events resulting from global climate change.

Prolonged climate irregularities can subject plants to environmental stress outside of their normal resistance thresholds and make them susceptible to pests and pathogens. For example, the increase in extreme droughts, defined here as greater in intensity and duration than historical drought regimes, has been directly linked to enhanced mortality in woody plant systems worldwide, often in association with pest and pathogen outbreaks. Plant disease outbreaks are often economically costly and can result in the loss of ecosystem services in natural ecosystems. With global trade continuing to spread pests and pathogens, and global change-type drought events predicted to increase, incidences of plant disease outbreaks are expected to rise. Understanding the role of drought and pathogens in plant dieback and mortality is therefore of critical importance. Latent fungal pathogens are of particular concern for natural ecosystems, yet their ecological roles remain poorly understood. These pathogens can live as asymptomatic endophytes within their hosts and remain undetected for long periods of time. The Botryosphaeriaceae fungi, a group that causes considerable damage to hundreds of agricultural, ornamental, and naturally occurring host species around the world, include many latent fungal pathogens that are difficult to detect in wild plant populations. Members of this diverse family can occur as endophytes, pathogens, and saprophytes on diverse woody hosts. They are best known as pathogens that cause leaf spots, cankers, severe branch dieback, and death in economically important hosts such as grapevines, avocado, and eucalyptus. While Bot. fungi are rapidly becoming one of the most important agents of disease in agricultural plant hosts, relatively few studies have been conducted on these pathogens in natural systems.

The Bot. fungi have a long history of taxonomic confusion, in part due to indistinct morphological characteristics among species and relative to other fungal taxa, as well as historically poor and inconsistent descriptions early in their discovery. Furthermore, Bot. host specificity and pathogenicity can vary widely among species and across geographical regions, complicating our understanding of their influence in various host species and across systems. While advances in molecular sequencing and databasing have added clarity in this area, challenges remain in understanding the diversity and pathogenicity of Bot. species among hosts and across regions. As a result, there is a dearth of knowledge on their ecological roles, particularly in native ecosystems.

One consistent finding is that disease outbreaks from Bot. fungi in agriculture are often associated with environmental stress, such as extreme heat fluctuations and drought. Furthermore, studies have shown that latent pathogens like Bot. fungi cause more damage to water-stressed hosts, and some Bot. species have been shown to grow well at water potentials much lower than what their plant hosts can tolerate, suggesting that drought conditions increase the virulence of these pathogens. Therefore, regions that have historically dry climates or experience periodic extreme drought may be especially vulnerable to disease outbreaks from latent pathogens, as they are predicted to experience an increase in drought events due to climate change. Mediterranean-type climate areas are projected to be global change "hot spots", and dry shrublands are predicted to experience some of the most rapid increases in mean temperatures. Indeed, recent drought-related mortality in California's semi-arid Mediterranean-climate shrublands has provided support for these predictions. Furthermore, the combination of dense human settlement and agricultural lands in close proximity to many natural shrubland habitats in southern California creates a likely pathway for exotic pathogen introductions and the movement of pathogens from agricultural settings into wildland species. Not surprisingly, Bot. species have been retrieved from a variety of native chaparral shrub species in California, including Ceanothus spp., Malosma laurina, and other species of Arctostaphylos. Understanding the response of native species and these pathogens to extreme weather conditions will help to predict future vegetation change and potential species losses. From 2011-2018, southern California experienced one of the most severe droughts in recorded history, with 2014 being the driest year in the past 1,200 years. Field observations in winter 2014 identified high levels of branch dieback, and in some cases mortality, in a common, ecologically important shrub, Arctostaphylos glauca, in coastal California. Two well-known Bot. species were isolated from the symptomatic shrubs. Like other members of the Bot. family, both N. australe and B. dothidea infect a broad range of hosts and are known to be responsible for disease outbreaks associated with environmental stress in agricultural species. While B. dothidea is well established in California, with over 35 different host species having been identified, phylogenetic evidence suggests N. australe may be more recently introduced. Its impact on the shrublands of California has not been quantified.

Preliminary observations suggested high levels of branch dieback, and in some cases mortality, at lower elevation sites and along exposed ridges compared to higher elevations in coastal montane settings. We hypothesized that identifiable patterns would exist in the distribution of B. dothidea and N. australe across these landscapes that correlate with branch dieback and environmental variables associated with drought stress. Manzanita dieback has previously been causally associated with Bot. infection. A greenhouse experiment by Drake-Schultheis et al. revealed that drought enhances the onset of stress symptoms and mortality in young A. glauca inoculated with N. australe compared to shrubs subjected to drought or inoculation alone. However, to the authors' knowledge, no previous quantitative studies exist on the distribution of Bot. species in California shrubland environments with Mediterranean climates. To better understand the occurrence, distribution, and severity of Bot. infections in chaparral shrublands, we surveyed infection in A. glauca between April and September 2019. We also collected data on site elevation, aspect, and average percent canopy dieback at each site sampled for infection. While a variety of landscape variables are likely to influence plant stress at any given site, we focused on elevation because A. glauca already tends to occur mostly on xeric and rocky soils of exposed slopes, and therefore elevation was presumed to be the most significant factor influencing precipitation and water availability in this setting. Also, other studies have used elevation as a proxy for climate variation. We also recorded the aspect of each sampled shrub, since aspect influences sun exposure, temperature, and water stress. To test our hypothesis that Bot. fungi and level of stress each played a role in extensive canopy dieback in A. glauca, the following questions were addressed: What is the distribution of Bot. infection in A. glauca stands across the chaparral landscape in coastal Santa Barbara County? How do levels of infection by the two Bot. fungi, N. australe and B. dothidea, compare across elevation? And how do stand-level infection and elevation correlate with dieback severity? We predicted N. australe and B. dothidea to be present across all sites and elevations, but also that N. australe, having been previously isolated with high frequency in the area, would likewise have the greatest incidence in this study. Furthermore, we expected levels of Bot. infection and dieback severity to be greater at lower elevations compared to higher elevations, because lower sites typically receive less annual rainfall, thus exacerbating drought stress. This study presents the first quantitative survey summarizing the severity and distribution of Bot. fungi in natural shrublands, and seeks to identify important patterns of infection and dieback in A. glauca to predict future vulnerabilities across the landscape.

The study sites were located on the generally south-facing coastal slopes of the Santa Ynez Mountains in Santa Barbara, California, USA. The sites range from a lower elevation of ~550 m to an upper elevation of ~1145 m, and cover an area of ~47 km². This region is characterized by a Mediterranean climate, with wet winters and hot, dry summers. Mean annual precipitation ranges from 68.4 cm at lower elevations to 90.6 cm at upper elevations.
During the 2013-2014 wet season, which was two years into a multiyear drought and one of the driest years on record in California, these areas received only 24.8 cm and 31.6 cm of precipitation, respectively.

Sites were initially randomly generated from polygons drawn in the field around relatively pure stands of A. glauca (Drake-Schultheis, unpublished data). Polygons were then categorized according to elevation and numbered within their respective elevation categories. Ten sites per elevation zone were randomly selected using a random number generator, for a total of 30 sites. When necessary, some randomly generated sites were substituted with nearby stands that were more accessible. Furthermore, any randomly selected sites that were discovered to be in recent fire scars were exchanged for nearby stands that contained intact, mature A. glauca.

Elevation data were collected in situ using Altimeter GPS Pro and corroborated using Google Earth. Aspect was recorded in situ in degrees, then converted to radians and transformed to linear data for analysis of "southwestness" using a cosine transformation according to Beers et al. This yielded aspect values ranging from -1 to 1, which were then used for modeling the effects of aspect on shrub dieback and Bot. infection. Sites were demarcated by >50% A. glauca cover within a stand, as determined by visual on-the-ground assessments where the tops of the canopies could be viewed. Stand dieback was then visually estimated by two to three observers as the percent of non-green vegetation (NGV) compared to live, green vegetation within the defined boundaries of a site. Categories of NGV included yellow, brown, and black leaves, as well as bare/defoliated canopy, and percentages were summed to reflect total NGV within a site.
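The cosine transformation of aspect described above can be sketched as follows. Because the exact formula is not given in the text, the 225° (due-southwest) reference direction and the sign convention are assumptions chosen to match the stated -1 to 1 range:

```python
import math

def southwestness(aspect_deg: float) -> float:
    """Hypothetical sketch of a Beers-style linearization of compass aspect.

    Assumes a reference direction of 225 degrees (due southwest), so that
    southwest-facing slopes score ~1 and northeast-facing slopes score ~-1,
    matching the -1 to 1 range described in the text.
    """
    return math.cos(math.radians(aspect_deg - 225.0))

# Illustrative values: due SW -> +1.0, due NE -> -1.0, SE and NW -> 0.0
for aspect in (225, 45, 135, 315):
    print(f"aspect {aspect:3d} deg -> southwestness {southwestness(aspect):+.2f}")
```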

Both groups performed established techniques in the field to estimate dairy farm emissions

As wind turbines account for a major proportion of a wind farm's initial investment, previous research has mainly focused on optimizing the number and locations of wind turbines. In [6] and [7], the genetic algorithm was proposed to iteratively optimize gridded turbine layouts on flat terrains. In [8] and [9], the greedy algorithm was introduced to search for the optimal gridded turbine layouts on hilly wind farms. The modified particle swarm optimization algorithm was applied to solve the turbine layout optimization problem in a continuous solution space. The civil works include the construction of access roads, turbine foundations, crane hard-standings, cable-trenching, and substation buildings. Their cost can be substantial. Access roads are greatly influenced by topographic characteristics, and their cost is much higher for hilly or mountainous wind farms than for flat ones. A comparative strategy and the minimum spanning tree were proposed to search for the shortest road network for wind farms in flat areas. In [11], the Euclidean Steiner tree was introduced to further reduce the length of the road network. However, these methods are not directly applicable to hilly or mountainous wind farms, where the road gradient must be considered to build winding roads capable of accommodating significant weight. Wind farms are generally constructed in rural areas with challenging topography. With the rapid expansion of wind farms, new farms have to be built in hills or mountains, where designing suitable access roads for construction, turbine erection, and maintenance is not straightforward.

It is necessary to know roughly which route should be chosen to connect the turbines and how much it will cost, while placing turbines at the peaks for high energy production. The selection of appropriate access roads is essential for the overall planning of the wind farm construction. A well-chosen road network can also reduce the wind farm construction period and lower the environmental impact. In this paper, an automatic contour-based road-network design model is developed in which the constraints on the maximal gradient of access roads are guaranteed. The design of access roads and the evaluation of their cost are simultaneously considered in the process of optimizing turbine layouts, which results in a more technically feasible and economically beneficial micrositing of wind farms. The remainder of this paper is organized as follows. Section II proposes an automatic model for road network design. The problem of wind farm micrositing is formulated in Section III. Section IV gives the simulation results. Finally, Section V makes some concluding remarks.

In civil engineering, a feasible route between the starting and end points is selected by establishing a series of control points on the terrain. Using these control points as an initial alignment, the horizontal and vertical curves are then located, subject to the road design constraints, including geometric specifications and environmental requirements. Heuristic optimization algorithms such as the genetic algorithm, the Tabu search, and the simulated annealing algorithm have been applied to optimize horizontal and vertical alignments of roads. These algorithms depend on a high-resolution digital elevation model to support the analysis of road design features such as ground slopes and other landform characteristics.

The calculation of an optimal route between two given locations is time-consuming and could even take several hours. The micrositing optimization process usually involves hundreds or thousands of iterations to evaluate the wake effects between turbines and to update the turbine layout. The process without road design is already intensive. If a complex road network for the whole farm needs to be designed in each iteration by the aforementioned heuristic optimization algorithms, the computational burden becomes prohibitive. In this paper, only the control points of access roads are selected during the micrositing optimization process. Such a relatively simple and functional road design is suitable for low-volume access roads, which are mainly for wind farm construction and maintenance. It is also adequate for design at the strategic and tactical level. Based on the above assumption, a fast automatic method for road-network design is developed to estimate the cost of access roads during the optimization of wind farm micrositing.

Forest engineers usually use large-scale contour maps to select preliminary routes with dividers to connect two locations, a process known as route projection. Control points of a road can be selected during the route projection process. In this paper, contour lines, as a group of nested closed curves, are chosen to store the topographic information of wind farms. A distinct advantage of the contour-line data over a digital elevation model is that the vector representation lends itself to the object-oriented modeling of terrains, which provides a natural mechanism for sorting terrains and facilitates the search through a contour map.
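As a concrete illustration of the gradient constraint that drives the route projection described below, the sketch here checks whether a candidate segment between two adjacent contour lines stays within a maximum road gradient. The contour interval and gradient values are illustrative assumptions, not parameters from the paper:

```python
import math

def min_horizontal_run(contour_interval_m: float, max_gradient: float) -> float:
    """Shortest horizontal distance that can connect two adjacent contour lines
    (one contour interval of climb) without exceeding the maximum gradient,
    expressed as rise over run."""
    return contour_interval_m / max_gradient

def segment_is_feasible(p_start, p_end, contour_interval_m, max_gradient) -> bool:
    """Check a straight projected segment from a point on one contour line to a
    candidate point on the next (higher) contour line. Points are (x, y) map
    coordinates in metres."""
    run = math.hypot(p_end[0] - p_start[0], p_end[1] - p_start[1])
    return run >= min_horizontal_run(contour_interval_m, max_gradient)

# Illustrative numbers: with 10 m contours and an 8 % maximum gradient, every
# projected segment must cover at least 125 m of horizontal distance.
print(min_horizontal_run(10.0, 0.08))                              # 125.0
print(segment_is_feasible((0.0, 0.0), (90.0, 100.0), 10.0, 0.08))  # True (~134.5 m run)
```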

Given a technically feasible gradient, a route is projected from the starting point toward the end across adjacent contour lines. This process can be efficiently automated with a mathematical model. The basic idea of the route projection process is to determine the projected route segment between two neighboring contour lines, as the elevation of the terrain rises continuously. Every projected segment of the route must begin from a point on a contour line and end on another one.

Two comparative cases are studied to investigate the effectiveness of the proposed road network design model and the significance of road network optimization for farms in hills or mountains. Due to the randomness of genetic algorithms, ten independent runs are performed for each case. Table IV summarizes the results, including the best, average, and worst values of the best solutions achieved in each independent run. In Case 1, the micrositing problem is solved in a two-stage manner, i.e., we first search for an optimized turbine layout without considering access roads and then design the road network for the given layout. The best configuration of the wind farm is shown in Fig. 7, where the blue triangle represents the entrance of the wind farm, red points mark the locations of wind turbines, and black polylines are the designed road network. In Case 2, the turbine layout and its corresponding road network are optimized simultaneously, i.e., in each optimization iteration, the road network design model is performed for every individual turbine layout generated by the genetic algorithm. The best configuration of the wind farm is shown in Fig. 8. The simulation results of both cases are summarized in Table IV. Note that the simulations for both cases are time-consuming. The simulations were carried out on an IBM BladeCenter HS22 with 12 Intel Xeon X5650 CPUs at a frequency of 2.66 GHz. Parallel computing was employed to speed up the simulation process. In each run of the 10 independent simulations, 12 CPUs were utilized simultaneously, and each CPU was responsible for one wind direction. Even with the parallel computing technique, the running time for each simulation is still around 7–10 h. For both turbine layouts, reasonable road networks with feasible winding roads are obtained, which demonstrates the effectiveness of the proposed road network design model. Comparing Figs. 7 and 8, it is clear that the configuration of the road network depends considerably on the turbine layout and the farm topology. Therefore, to create a cost-effective road network, it is necessary to consider these two parts simultaneously during the micrositing optimization process. Indeed, since the road network is simultaneously considered in Case 2, while the cost of the road network is ignored in Case 1, Case 2 consistently has a higher net present value than Case 1 in all independent runs. In the simulations, the net present value is determined by the annual energy production and the length of the road network.

Methane released into the atmosphere as a result of agricultural activity, such as enteric fermentation and anaerobic digestion, significantly contributes to overall greenhouse gas emissions in the United States. The California Air Resources Board (CARB) attributes approximately 60 % of recent anthropogenic CH4 emissions in California to agriculture, with 45 % of CH4 emissions directly related to dairy farm activity in 2013.
Reduction strategies proposed by CARB seek to lower California's CH4 emissions to 40 % below 2013 rates by 2030, thereby emphasizing the need for accurate methods to directly quantify the contribution of different CH4 sources within agricultural operations. Estimates of CH4 emissions due to dairy livestock can be calculated using inventory emission factors combined with activity data on animal populations, animal types, and details about feed intake in a particular country. Other methods to estimate CH4 emissions from ruminants involve direct atmospheric measurements.
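The inventory approach mentioned above amounts, in its simplest form, to multiplying animal populations by per-animal emission factors; this is a hedged sketch of the general idea, not the specific formulation used by any particular inventory:

\[
E_{\mathrm{CH_4}} = \sum_{i} N_i \times EF_i
\]

where \(N_i\) is the population of animal category \(i\) (e.g., lactating cows, heifers) and \(EF_i\) is the corresponding emission factor (mass of CH4 per animal per year), with animal type and feed intake reflected in the choice of \(EF_i\).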

Emissions from dairy farms have been estimated in the Los Angeles Basin, California, using downwind airborne flux measurements. Farm-scale measurements of CH4 have been made using a variety of techniques and instruments, such as open-path infrared spectrometers, tunable-infrared direct absorption spectroscopy, and column measurements employing solar absorption spectrometers with comparisons to cavity ring-down spectrometers. Several studies of various CH4 sources assert that inventory-based calculations tend to underestimate emissions compared to atmospheric observations and modeling. Atmospheric studies have often used specific gases as tracers to distinguish a sample of interest from background conditions or interferences. Tracer gases released at known rates have been employed in experiments looking at chemical transport, dispersion, source allocation, and model verification using mobile laboratories, radiosondes, sampling towers, and ground-based equipment. Application of tracer gases in agricultural studies has involved the insertion of a sulfur hexafluoride (SF6) permeation tube into the rumen of a cow with subsequent collection of time-integrated breath samples. Inverse-dispersion techniques have employed line-source releases of SF6 within a dairy farm combined with open-path measurements to understand whole-site emissions. Release of a tracer gas directly into the atmosphere, 2–3 m above ground level, can be used to determine and distinguish CH4 emissions from various sources within a site. This study quantifies CH4 emissions using the well-established tracer flux ratio method at two dairy farms over the course of 8 summer days. Controlled releases of tracer gas from various areas on each farm mixed with site-derived emissions were observed by an instrumented aircraft and a mobile laboratory. Using this technique provided the flexibility to estimate entire dairy farm emissions and apportion emissions among sources on multiple scales. Uncertainty in measurements from low-flying airborne studies has been attributed to the need to extrapolate results below the minimum safe flight heights as regulated by the Federal Aviation Administration. Prior to this study, Aerodyne Research, Inc. (ARI) performed controlled ground releases of ethane in Colorado and Arkansas, while Scientific Aviation (SA) made measurements in an aircraft similar to the one used in this study. The original release rate of C2H6 was estimated via a refined mass balance technique, with a +2 % difference observed during tests in Colorado and a +24 % difference in Arkansas, as described in Conley et al. These releases did not correspond to any CH4 source but demonstrated the feasibility of using a low-flying aircraft to successfully quantify flow rates from controlled tracer gas releases. Using tracer flux ratio in this study, we again utilized the aircraft to detect the emitted tracer gas and then compared it with dairy farm emissions to evaluate CH4 emission rates. This field study was originally focused on estimating CH4 emissions from dairy farms and distinguishing on-site sources using established techniques. An intentional effort was made to align measurement time windows of the mobile laboratory and aircraft for the purpose of inter-comparison between the tracer flux ratio and mass balance methods. As a result, the aircraft was exposed to several hours of ground-released tracer gas.
Due to this overlap in time, we were able to further assess the viability of observing enhanced concentrations of a ground-released tracer gas from an aircraft at low flow rates, compare CH4 and C2H6 enhancements emitted from within dairy farms via tracer flux ratio to determine emission rates, and directly compare the application of tracer flux ratio methodology to simultaneous ground and airborne measurements of the same air mass.

In a collaborative effort, SA and ARI attempted a flight-based tracer release experiment to quantify CH4 emissions from two dairy farms in central California. This study reanalyzes data collected as part of an Environmental Defense Fund-coordinated project that occurred in June 2016. ARI employed tracer flux ratio methodology with two tracer gases and a mobile laboratory, while SA conducted a mass balance experiment from a light aircraft.
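A minimal sketch of the tracer flux ratio calculation referenced above: the background-subtracted enhancement ratio of the target gas to the co-released tracer, scaled by the known tracer release rate and the molar mass ratio, gives the target emission rate. In practice integrated plume enhancements are used; the single values and release rate below are illustrative assumptions, not measurements from this study:

```python
def tracer_flux_ratio_emission(
    tracer_release_kg_h: float,
    target_enhancement_ppb: float,
    tracer_enhancement_ppb: float,
    target_molar_mass_g_mol: float = 16.04,  # CH4
    tracer_molar_mass_g_mol: float = 30.07,  # C2H6
) -> float:
    """Estimate the target-gas emission rate (kg/h) from plume enhancements of
    the target gas and a tracer released at a known rate.

    The mole-fraction enhancement ratio equals the molar emission ratio, so the
    mass emission rate is the tracer release rate scaled by that ratio and by
    the ratio of molar masses.
    """
    molar_ratio = target_enhancement_ppb / tracer_enhancement_ppb
    mass_ratio = target_molar_mass_g_mol / tracer_molar_mass_g_mol
    return tracer_release_kg_h * molar_ratio * mass_ratio

# Illustrative (not measured) numbers: releasing C2H6 at 1.5 kg/h and observing
# CH4 enhancements 40x the C2H6 enhancements implies roughly 32 kg/h of CH4.
print(round(tracer_flux_ratio_emission(1.5, 400.0, 10.0), 1))
```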

These arrangements allow collaborating institutions to work toward a greater good

While surveys often provide a way to overcome time and budget constraints to learn about farmer knowledge, this study suggests that achieving specificity and depth in the analysis of farmer knowledge requires an interactive approach that includes – at a minimum – relationship building, multiple field visits, and in-depth, multi-hour interviews. Accessing farmer knowledge necessitates locally interactive research; this knowledge may or may not be immediately generalizable or scalable without further locally interactive assessment in other farming regions.

The impact of the COVID-19 pandemic caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was foreshadowed by earlier epidemics of new or re-emerging diseases such as SARS, influenza, Middle East Respiratory Syndrome (MERS), Ebola, and Zika affecting localized regions. These events showed that novel and well-known viral diseases alike can pose a threat to global health. In 2014, an article published in Nature Medicine stated that the Ebola outbreak should have been "a wake-up call to the research and pharmaceutical communities, and to federal governments, of the continuing need to invest resources in the study and cure of emerging infectious diseases". Recommendations and even new regulations have been implemented to reduce the risk of zoonotic viral infections, but the extent to which these recommendations are applied and enforced on a regional and, more importantly, local level remains unclear. Furthermore, most vaccine programs for SARS, MERS, and Zika are still awaiting the fulfillment of clinical trials, sometimes more than 5 years after their initiation, due to the lack of patients.

In light of this situation, and despite the call to action, the SARS-CoV-2 pandemic has resulted in nearly 20 million infections and more than 700,000 deaths at the time of writing, based on the Johns Hopkins University global database. The economic impact of the pandemic is difficult to assess, but support programs are likely to cost more than €4 trillion in the United States and EU alone. Given the immense impact at both the personal and economic levels, this review considers how the plant-based production of recombinant proteins can contribute to a global response in such an emergency scenario. Several recent publications describe in broad terms how plant-made countermeasures against SARS-CoV-2 can contribute to the global COVID-19 response. This review will focus primarily on process development, manufacturing considerations, and evolving regulations to identify gaps and research needs, as well as regulatory processes and/or infrastructure investments that can help to build a more resilient pandemic response system. We first highlight the technical capabilities of plants, such as the speed of transient expression, making them attractive as a first-line response to counter pandemics, and then we discuss the regulatory pathway for plant-made pharmaceuticals (PMPs) in more detail. Next, we briefly present the types of plant-derived proteins that are relevant for the prevention, treatment, or diagnosis of disease. This sets the stage for our assessment of the requirements in terms of production costs and capacity to mount a coherent response to a pandemic, given currently available infrastructure and the intellectual property landscape. We conclude by comparing plant-based expression with conventional cell culture and highlight where investments are needed to adequately respond to pandemic diseases in the future.

Due to the quickly evolving information about the pandemic, our statements are supported in some instances by data obtained from websites. Accordingly, the scientific reliability of these sources has to be treated with caution.

A major advantage of plants in this respect is the ability to test multiple product candidates and expression cassettes in parallel by the simple injection or infiltration of leaves or leaf sections with a panel of Agrobacterium tumefaciens clones carrying each variant cassette as part of the transferred DNA in a binary transformation vector. This procedure does not require sterile conditions, transfection reagents, or skilled staff, and can, therefore, be conducted in standard biosafety level 1 laboratories all over the world. The method can produce samples of even complex proteins such as glycosylated monoclonal antibodies (mAbs) for analysis ~14 days after the protein sequence is available. With product accumulation in the range of 0.1–4.0 g kg−1 biomass, larger-scale quantities can be supplied after 4–8 weeks, making this approach ideal for emergency responses to sudden disease outbreaks. Potential bottlenecks include the preparation of sufficiently large candidate libraries, ideally in an automated manner as described for conventional expression systems, and the infiltration of plants with a large number of candidates. Also, leaf-based expression can result in a coefficient of variation (CV) >20% in terms of recombinant protein accumulation, which reduces the reliability of expression data. The variability issue has been addressed to some extent by a parallelized leaf-disc assay, at the cost of a further reduction in sample throughput.

The reproducibility of screening was improved in 2018 by the development of plant cell pack technology, in which plant cell suspension cultures deprived of medium are used to form a plant tissue surrogate that can be infiltrated with A. tumefaciens in a 96-well microtiter plate format to produce milligram quantities of protein in an automated, high-throughput manner. The costs can be as low as €0.50 per 60-mg sample with a product accumulation of ~100 mg kg−1, typically with a CV of <5%. These costs include the fermenter-based upstream production of plant cells as well as all materials and labor. The system can be integrated with the cloning of large candidate libraries, allowing a throughput of >1,000 samples per week, and protein is produced 3 days after infiltration. The translatability of cell pack data to intact plants was successfully demonstrated for three mAbs and several other proteins, including a toxin. Therefore, cell packs allow the rapid and automated screening of product candidates such as vaccines and diagnostic reagents. In addition to recombinant proteins, the technology can, in principle, also be used to produce virus-like particles based on plant viruses, which further broadens its applicability for screening and product evaluation, but, to our knowledge, corresponding results had not been published as of September 2020. In the future, plant cell packs could be combined with a recently developed method for rapid gene transfer to plant cells using carbon nanotubes. Such a combination would not be dependent on bacteria for cloning or gene transfer to plant cells, thereby reducing the overall duration of the process by an additional 2–3 days. For the rapid screening of even larger numbers of candidates, cost-efficient cell-free lysates based on plant cells have been developed and are commercially available in a ready-to-use kit format. Proteins can be synthesized in ~24 h, potentially in 384-well plates, and the yields, expressed as recombinant protein mass per volume of cell lysate, can reach 3 mg ml−1. Given costs of ~€1,160 ml−1 according to the manufacturer LenioBio, this translates to ~€400 mg−1 protein, an order of magnitude less expensive than the SP6 system, which achieves 0.1 mg ml−1 at a cost of ~€360 ml−1 based on the company's claims. Protocol duration and necessary labor are comparable between the two systems, and so are the proteins used to demonstrate high expression, e.g., luciferase. However, the scalability of the plant cell lysates is currently limited to several hundred milliliters, and transferability to intact plants has yet to be demonstrated, i.e., it remains unclear how well product accumulation in lysates correlates with that in plant tissues. Such correlations can then form the basis to scale up lysate-based production to good manufacturing practice (GMP)-compliant manufacturing in plants using existing facilities. Therefore, the cell packs are currently the most appealing screening system due to their favorable balance of speed, throughput, and translatability to whole plants for large-scale production. In any pandemic, the pathogen genome has to be sequenced, made publicly available, and freely disseminated in the global scientific community to accelerate therapeutic and vaccine development. Once sequence information is available, a high priority is the rapid development, synthesis, and distribution of DNA sequences coding for individual viral open reading frames.
These reagents are not only important for screening subunit vaccine targets but also serve as enabling tools for research into the structure, function, stability, and detection of the virus.
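As a rough check of the order-of-magnitude cost comparison quoted above for the two cell-free systems, using only the costs and yields stated in the text:

\[
\frac{\text{€}1{,}160\ \mathrm{ml^{-1}}}{3\ \mathrm{mg\ ml^{-1}}} \approx \text{€}390\ \mathrm{mg^{-1}}
\qquad \text{versus} \qquad
\frac{\text{€}360\ \mathrm{ml^{-1}}}{0.1\ \mathrm{mg\ ml^{-1}}} = \text{€}3{,}600\ \mathrm{mg^{-1}},
\]

i.e., roughly a nine-fold difference in cost per milligram of protein.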

Because many viral pathogens mutate over time, the sequencing of clinical virus samples is equally important to enable the development of countermeasures to keep pace with virus evolution. To ensure the broadest impact, the gene constructs must be codon optimized for expression in a variety of hosts; cloned into plasmids with appropriate promoters, purification tags, and watermark sequences to identify them as synthetic and so that their origin can be verified; and made widely available at minimal cost to researchers around the world. Not-for-profit plasmid repositories, such as Addgene and DNASU, in cooperation with global academic and industry contributors, play an important role in providing and sharing these reagents. However, the availability of codon-optimized genes for plants and the corresponding expression systems is often limited. For example, there were 41,247 mammalian, 16,560 bacterial, and 4,721 yeast expression vectors in the Addgene collection as of August 2020, but only 1,821 for plants, none of which contained SARS-CoV-2 proteins. Sharing plant-optimized SARS-CoV-2 synthetic biology resources among the academic and industry research community working on PMPs would further accelerate the response to this pandemic disease. Screening and process development can also be expedited by using modeling tools to identify relevant parameter combinations for experimental testing. For example, initial attempts have been made to establish correlations between genetic elements or protein structures and product accumulation in plants. Similarly, heuristic and model-based predictions can be used to optimize downstream processing unit operations, including chromatography. Because protein accumulation often depends on multiple parameters, it is typically more challenging to model than chromatography and probably needs to rely on data-driven rather than mechanistic models. Based on results obtained for antibody production, a combination of descriptive and mechanistic models can reduce the number of experiments, and thus the development time, by 75%, which is a substantial gain when trying to counteract a global pandemic such as COVID-19. These models are particularly useful if combined with the high-throughput experiments described above. Techno-economic assessment (TEA) computer-aided design tools, based on engineering process models, can be used to design and size process equipment, solve material and energy balances, generate process flow sheets, establish scheduling, and identify process bottlenecks. TEA models have been developed and are publicly available for a variety of plant-based bio-manufacturing facilities, including whole-plant and plant cell bioreactor processes for the production of mAbs, antiviral lectins, therapeutics, and antimicrobial peptides. These tools are particularly useful for the development of new processes because they can indicate which areas would benefit most from focused research and development efforts to increase throughput, reduce process mass intensity, and minimize overall production costs.

The rapid production of protein-based countermeasures for SARS-CoV-2 will most likely, at least initially, require bio-manufacturing processes based on transient expression rather than stable transgenic lines. Options include the transient transfection of mammalian cells, baculovirus-infected insect cell expression systems, cell-free expression systems for in vitro transcription and translation, and transient expression in plants.
The longer-term production of these countermeasures may rely on mammalian or plant cell lines and/or transgenic plants, into which the expression cassette has been stably integrated. The speed of transient expression in plants allows the rapid adaptation of a product even when the process has already reached manufacturing scale. For example, decisions about the nature of the recombinant protein product can be made as little as 2 weeks before harvest because the cultivation of bacteria takes less than 7 days and the post-infiltration incubation of plants takes ~5–7 days. By using large-scale cryo-stocks of ready-to-use A. tumefaciens, the decision can be delayed until the day of infiltration and thus 5–7 days before harvesting the biomass. This flexibility is desirable in an early pandemic scenario because the latest information on improved drug properties can be channeled directly into production, for example, to produce gram quantities of protein that are required for safety assessment, pre-clinical and clinical testing, or even compassionate use if the fatality rate of a disease is high. Although infiltration is typically a discontinuous process requiring stainless-steel equipment due to the vacuum that must be applied to plants submerged in the bacterial suspension, most other steps in the production of PMPs can be designed for continuous operation, incorporating single-use equipment and thus complying with the proposed concept for bio-facilities of the future.

The familiar birds were confined to a holding pen in the top left corner of the arena

For Wichman and Norman et al., the detour apparatus consisted of an arena with a starting compartment in the center, across from a holding compartment. The starting compartment consisted of three walls, two opaque and one of wire mesh, so that the holding compartment was visible through the wire mesh. Opposite the wire mesh screen was an opening in the back of the compartment allowing access to the rest of the arena. The holding compartment consisted of a screen so that its contents could be seen by the individual in the starting compartment. The test chick was placed in the central starting compartment and two of its companions from its home pen were placed in the holding compartment. To solve this task, the chicks had to walk away from the familiar chicks, out of the central starting compartment, and around this compartment to reach their companions. Wichman tested ninety 3-day-old chicks on a detour test to evaluate their spatial ability. The young chicks had not yet developed perching behavior at the time they were tested. As with the radial arm experiment, all birds were raised in three different rearing environments: control, floor enrichment, and hanging enrichment. The time it took chicks to reach their companions was recorded or, if the chick was unable to successfully complete the task, the trial ended at 10 minutes.

Forty out of the ninety chicks successfully solved the detour task; however, no relationship was found between the onset of perching behavior and the performance of the chicks on the detour test.

This result suggests that the detour test is not a good predictor of early perching behavior in pullets. Additionally, there was no significant effect of rearing environment complexity on the success of chicks in the detour task. Norman et al. also employed a detour task in order to assess whether early access to elevated structures affects spatial working memory and route planning in laying hens. They raised chicks from one day of age in either a control or enriched treatment. The control treatment had no elevated structures, and the enriched treatment had eight wooden perches arranged in an A-frame structure. In addition to the perches, the enriched treatment also had a ramp leading up to a platform placed on the second rung of perches. Norman et al. tested 48 chicks from both rearing treatments on a detour task at either 14-15 days of age or 28-29 days of age. The test chick was placed in the starting compartment and two of its companions from its home pen were placed in the holding compartment. If the chick left the start box and began walking around the compartment, but then reentered the center of the arena, this was considered an orientation error. Chicks were given a maximum of 5 minutes to complete the task. Out of the 96 birds tested, 67 completed the detour task. There was no significant difference between treatment groups in the number of orientation errors; however, the latency to complete the task was significantly shorter for the enriched as compared to the control chicks.

Morris also used a detour task to assess birds from two different rearing treatments on their spatial reasoning. For this study, the unenriched condition included shavings, a hanging feeder, and a nipple line drinker. The enriched treatment included two perches, three live plants for cover, a dust bathing box, two hanging party decorations, and hidden mealworms. At 10 weeks of age, 42 birds were tested on a detour task. The target consisted of three familiar birds from their home pen. A barrier was located at the other end of the arena and was made of a mesh screen with an opening on the far-left side, across from the target. The test bird was placed in a starting location behind the barrier and was given 3 to 5 minutes to acclimate in a mesh enclosure. Once the enclosure was lifted, the birds were given 5 minutes to reach their companions. The latency of each test bird to navigate around the barrier and come within 0.5 m of the target was recorded. No significant differences between treatment groups in time taken to reach companions were found. These results contradict the finding from Norman et al. that birds with access to vertical structures at a young age had a shorter latency to complete the detour task than birds without access to vertical structures. It is possible that the difference in these results could be due to the different rearing environments and detour designs of these two experiments. The design of the detour task in Morris differed from the design of Norman et al. and Wichman. Instead of the chick walking away from the companion chicks to reach the goal, the Morris design required the chicks to walk towards the companions, which remained in view at all times through a mesh screen.

The birds were unable to take the most direct route, a diagonal path across the arena, but were not required to walk away from the target at any point to successfully solve the task. Due to this, the Morris detour task does differ from the classic detour task used by Wichman and Norman et al. However, the inconsistent results between Wichman and Norman et al. should be taken into account when interpreting the impact of rearing environment on performance in a detour task. Due to the varying results across these studies, it is difficult to draw definitive conclusions. However, the lack of relationship found by Wichman between early success on the detour task and the onset of perching behavior suggests this task may not be relevant to the use of vertical space.

There is growing evidence in the literature that vertical complexity of the rearing environment of pullets can impact the spatial abilities of domestic laying hens. However, tasks such as the hole board test, radial maze, and detour task only test spatial navigation and memory, and they do so on a single geometric plane, the ground. Adult hens utilize three-dimensional space, and therefore their spatial cognitive abilities should also be investigated in relation to depth. While these tasks evaluate spatial memory in hens, a useful skill for finding resources in the aviary, they are not appropriate for assessing visual perception as it relates to navigating around structures in the aviary. In contrast, the jump test does use three-dimensional space and does not directly test spatial memory, making it more relevant to the use of vertical space in commercial aviaries. However, due to the increasing physical difficulty of this test, visual perception and physical skill cannot be separated. There is a great need to study the depth perception of laying hens in relation to the complexity of the rearing treatment by using both a floor test and a test that utilizes three-dimensional space. This methodology would allow for the comparison of performance on both a two-dimensional and a three-dimensional test, providing greater evidence for the translation of floor tests of spatial cognition to the use of vertical space. It is also important to investigate depth perception, as this aspect of spatial cognition has not been specifically evaluated in relation to rearing environment. Adequate perception of depth seems to be vital for gauging the distance to fly or jump between perches and platforms in an aviary. Without proper depth perception, collisions and falls would be likely to occur due to misjudgment of distances.

The aim of this study is to gain a better understanding of how rearing environment impacts the cognitive and spatial development of laying hens by examining the development of depth perception. Laying hens reared in three environments of differing vertical complexity were tested on a Y-maze and a visual cliff task to evaluate depth perception acuity. Both tasks utilize hens' motivation to escape after being caught and handled, by offering them the option to exit the experimental apparatuses. Previous studies have used food or social rewards in spatial cognition tasks. In this study, variance in responses due to appetite or sociality was removed by utilizing the hens' motivation to escape. The Y-maze uses arms of varying lengths to determine if the bird can assess distance and choose the shortest path to escape. The visual cliff was used to evaluate perception of depth as well as the method of crossing the visual cliff.
In order to account for the disadvantages associated with the traditional visual cliff test, the plexiglass was illuminated and the table was covered with a canopy to reduce reflection.

Additionally, a perch was added in the center of the table, 15 cm above the plexiglass. This prevented the birds from gaining tactile information by pecking the glass or touching it with their feet, further concealing the plexiglass. These tests offer greater insight into the topic of spatial cognitive development by examining a previously overlooked, yet relevant aspect of spatial cognition: depth perception.

Data Collection
At each of three time points, a sample of 150 Dekalb White hens was tested. Due to camera malfunctions, video data from three birds were excluded from the visual cliff data set. An equal number of birds were randomly selected from each of three rearing treatments. All birds were tested on the Y-maze task during the first week and the visual cliff during the second week of testing. Birds were caught from their home pen just prior to testing by corralling them with a metal catch pen. All trials were recorded using a Sony Action Camera, and specific behaviors were coded from the video data using Noldus Observer XT.

Y-Maze
The Y-maze was constructed out of 1 m high wooden boards and included a starting chamber and two arms. The starting chamber was 40 cm long and 60 cm wide, with a guillotine door. The two arms of the maze were 60 cm wide and were joined at a 50° vertex. The arms were adjustable in length so that each arm could be either 30 cm or 90 cm long. The maze was covered by soft black plastic mesh, preventing the birds from flying out. Both arms were open at the end, allowing the bird the choice to escape into an arena via one of the arms. Chalk lines were drawn where the entrance met each arm of the maze to indicate that the bird had entered an arm of the maze. Additional lines were also drawn at 30 cm from the entrance to mark the point at which the bird had chosen that side of the maze. Birds were tested with two consecutive trials in which the length of each arm varied so that the birds were presented with both a 1:3 ratio and a 1:1 ratio. The Y-maze was randomly configured into one of three different orientations: equal length arms, or unequal with either the right or left arm being shorter in length. The equal configuration had a 1:1 ratio, while the unequal configuration consisted of a 1:3 ratio. A bird was randomly selected, caught using a catch pen, and assigned an ID number. The bird was then placed into the entrance of the maze facing the vertex of the two arms of the maze. When the bird was placed, a timer was started and the guillotine door of the entrance was closed behind it. If the subject had not exited the maze after two minutes had elapsed, the guillotine door was lifted and then immediately closed to encourage movement of the bird out of the maze. Trials ended after the bird had successfully crossed one of the 30 cm lines or after 2.5 minutes had elapsed. Trials were then repeated once more with the same individual so that each bird was presented with both the equal and the unequal configuration. Time spent in each arm, exit choice, and frequency of flying within or out of the maze were recorded. The exit choice was considered correct if, when presented with arms of different lengths, the bird chose the shortest path to the arena.

Visual Cliff
Two visual cliff tables were used. A small table was used for the 8-week-old pullets, while a larger table was used for the 16- and 30-week-old birds. This allowed the proportions of the table to remain consistent as the birds grew.

A limitation of this essay is the lack of data regarding grower characteristics

The price premium is significant for almond and alfalfa. However, organic almonds suffer an average 20% yield loss, which hinders the transition. For alfalfa, the price depends on the organic status as well as quality, which is hard to control for organic growers due to weed and pest pressures. The z-test results in Table 1.3 and Table 1.4 show that the coefficients of Organic are similar to or larger, in absolute value, than those in the full-sample estimation, which implies that the differences between the two production systems are larger in the sub-sample than in the full sample.

Differences in environmental impacts between organic and conventional production vary across crops. The full-sample regression is estimated for selected crops individually, except for lettuce, where an additional time dummy is added to split the sample in half, to highlight important patterns of pesticide use in conventional and organic production. The specifications without grower or field fixed effects provide similar results, and therefore those results are not presented here for individual crops. The PURE index values are plotted for conventional and organic lettuce fields in Figure 1.4. The risk index from pesticides used in conventional lettuce fields decreased because growers have gradually transitioned from organophosphates to pyrethroid and neonicotinoid insecticides over the past twenty years, and organophosphate insecticides are more toxic than their pyrethroid and neonicotinoid alternatives. Prior to 2005, diazinon was the most used insecticide in conventional lettuce production, while the usage of lambda-cyhalothrin was limited in lettuce.

However, by 2015, lambda-cyhalothrin was the most used insecticide in conventional lettuce fields, while fewer than 30 acres of lettuce were treated with diazinon. Consistent with these changes, in Table 1.5, the coefficients for Organic × 06_15 are significant and positive, showing that the difference in the environmental impacts of pesticide use between conventional and organic lettuce production decreased in the second half of the study period. In Table 1.6, differences in environmental impacts between conventional and organic strawberries are largely driven by the environmental impacts of pre-plant soil fumigation, which is used by conventional but not organic strawberry growers. Soil fumigation is a common practice for managing pathogens, nematodes, and weeds in conventional strawberry fields. While soil fumigants are most commonly regulated because of their negative effects on human health via their impact on air quality and the ozone layer, most soil fumigants are also highly toxic to earthworms. Accordingly, the PURE index for soil is large. Consequently, organic strawberry production achieves a 78% reduction in the environmental impact on soil. Conventional strawberry production also poses higher impacts on surface water because several active ingredients (AIs) are highly toxic to fish and aquatic invertebrates, including abamectin for controlling spider mites, malathion for whiteflies, and pyraclostrobin for gray mold. As a result, the coefficient of Organic for surface water is larger than average. The difference in the PURE index for air is smaller because azadirachtin and clarified neem oil, two primary AIs contributing to VOC emissions in the nonattainment area of Ventura, a major strawberry-producing county, together accounted for 18% of treated acreage for organic strawberries.
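The coefficient names discussed in this section (Organic, Organic × 06_15, Organic × Year, grower or field fixed effects, grower acreage and experience) imply a panel specification along the following lines; this is a hedged reconstruction for readability, not the exact equation estimated in the essay:

\[
\mathrm{PURE}^{(d)}_{ft} = \beta_0 + \beta_1\,\mathrm{Organic}_{ft} + \beta_2\,\bigl(\mathrm{Organic}_{ft}\times \mathrm{Post}_t\bigr) + \mathbf{X}_{ft}'\boldsymbol{\gamma} + \alpha_f + \varepsilon_{ft},
\]

where \(d\) indexes the environmental dimension (surface water, groundwater, soil, air, pollinators), \(f\) the field, and \(t\) the year; \(\mathrm{Post}_t\) stands for a period dummy such as 06_15 or a linear year term; \(\mathbf{X}_{ft}\) collects grower characteristics such as total acreage and experience; and \(\alpha_f\) is a grower or field fixed effect.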

Comparing the results in Table 1.7 with other tables in this section, organic processing tomato production reduces the environmental impact on air by a larger percentage than all organic production on average. The key difference between processing tomatoes and other crops is that processing tomatoes are more threatened by diseases than by insects or nematodes. The two most common diseases are powdery mildew and bacterial speck, which are treated with sulfur and copper hydroxide, respectively, in organic production. In 2015, the acreage treated with these two AIs accounted for 42% of total acreage treated for organic processing tomatoes. In comparison, the share of sulfur- and copper hydroxide-treated acreage is below 10% for production of lettuce and strawberries and 25% for all organic crops. These two AIs have lower VOC emissions than other AIs used in organic production such as pyrethrins, azadirachtin, and clarified neem oil, which together accounted for nearly 30% of treated acreage for organic lettuce and strawberries, 18% for organic processing tomatoes, and 18% for all crops. However, the impact is increasing, as indicated by the positive coefficient for the variable Organic × Year. Wine grape production occurs in many regions in California, and pest and disease pressures vary across production regions due to different climate and soil conditions. In the North Coast production region, which includes Napa and Sonoma counties among others, powdery mildew is a common disease because the fungus prefers cooler temperatures, ideally around 21°C, to grow. Measured by treated acreage, 9 out of the 10 most used AIs are fungicides targeting powdery mildew in this area. In the San Joaquin Valley, in contrast, powdery mildew is rarely seen because of high temperatures. Due in part to the large number of frost-free days per growing season, insects are the primary concern.

For wine grapes, the most used AIs besides sulfur are abamectin targeting spider mites, imidacloprid targeting vine mealybugs, and methoxyfenozide targeting lepidopteran pests. These insecticide AIs are more toxic for humans, earthworms, and honeybees and have larger VOC emissions than the fungicides used for powdery mildew, so the estimated intercept in Table 1.8 is larger in the San Joaquin Valley than in Napa and Sonoma counties and the state as a whole for groundwater, soil, air, and pollinators. Powdery mildew in grapes is often treated with sulfur. In 2015, table, wine, and raisin grapes accounted for 77% of acreage treated with sulfur among all crops. To control powdery mildew, organic growers also rely on bio-ingredients such as Bacillus pumilus and Bacillus subtilis, which have larger VOC emissions than sulfur and mineral oils. Thus, organic wine grape growers in Napa and Sonoma counties achieve only a 38% reduction in the PURE index for air, while the reduction in the San Joaquin Valley is 45%.

Using a consistent index, this essay quantifies the environmental impacts of pesticide use in conventional and organic fields and how they have changed over time. Information from this analysis could benefit organic crop production worldwide because California is an important production region with a diverse set of crops and environmental conditions. Previous studies rarely focused on the use of specific AIs or the change in the structure of pesticide use when evaluating the environmental impact of organic agriculture. To the best of my knowledge, the PUR database has never been used to compare pesticide use for conventional and organic production. The U.S. organic agriculture sector has grown significantly over the past two decades, after the launch of the NOP in 2002. Organic farming has the potential to continue to expand in the future. Pesticides are essential for both conventional and organic crop production. However, pesticide use is not static. The pesticide portfolio changed dramatically for both farming systems in the study period. Based on field-level pesticide application information, this essay shows that the environmental impact of pesticide use on air increased in organic fields due to the adoption of new chemicals and the reduction in the use of sulfur, which has zero VOC emissions. Pesticides used in organic agriculture had lower environmental impacts per acre on surface water, groundwater, soil, air, and pollinators, depending on the pesticide portfolios of conventional and organic growers. However, the difference between the two systems is decreasing over time for all five dimensions. Notably, they had almost the same level of VOC emissions in 2015. In both production systems, increases in growers' total acreage were associated with increases in the environmental impacts of pesticide use in all dimensions. Increases in grower experience were associated with increases in the environmental impacts of pesticide use on surface water and groundwater, and decreases in the impacts on soil, air, and pollinators. The magnitude of the effects of these two variables is smaller than the effect of the organic status of the field. Pesticide use in organic agriculture has evolved to have greater environmental impacts over time.
This is consistent with the findings of Läpple and Van Rensburg, who showed that late adopters, those who adopted organic farming after the launch of a government support program, are more likely to be profit-driven and less likely to be environmentally concerned than early adopters. New policy instruments could alter the current situation.

When reviewing pesticide and fertilizer AIs used in organic agriculture, the NOSB could focus on environmental criteria such as VOC emissions, which have not been considered previously. Such policy instruments could partially offset the negative environmental impacts of pesticide use in organic fields. Whether organic farming is the most cost-effective way to reduce the environmental impacts of agriculture remains unclear because the changes in PURE index values do not directly translate to a one-dimensional environmental or food safety benefit that is comparable across commodities or farming methods. An alternative approach to reducing environmental impacts is to regulate pesticide use directly, which could have a significant cost. For example, the ban of methyl bromide was estimated to result in an annual revenue loss of $234 million and a 10% revenue loss for the strawberry industry in California. However, as the results show, the PURE air index for strawberries did not decrease in conventional production after the ban. In addition, the groundwater index value increased because alternatives to methyl bromide have a greater impact on groundwater. In previous studies, demographic variables, such as gender and education, were shown to be determinants of the adoption of organic farming. Here, these characteristics are addressed by using time-invariant grower fixed effects. More information regarding the determinants of pesticide use decisions might be revealed if data on those characteristics were available. Future research could focus on impacts on human health rather than the environment and calculate the monetary value of the reduced mortality and morbidity from converting to organic production. Estimating the value of the improved environmental quality associated with organic agriculture, identified in this essay, is another research direction. While pesticide use remains important for both farming systems, another caveat is that this essay does not investigate the environmental impacts of non-chemical pest management practices, such as biological, cultural, and mechanical/physical controls. However, if one were to pursue that direction by collecting data on non-chemical practices, the analysis would necessarily be done on a relatively small scale, unlike the comprehensive data used here.

Organic agriculture has been proposed as an essential part of sustainable food systems. In 2016, over 5 million acres of land were certified organic in the United States, which generated over $7.5 billion worth of agricultural products. California is the leading producer of organic crops in the United States, accounting for 12% of organic cropland and 51% of crop sales value in 2016. According to Willer and Lernoud, the United States is the largest market for organic products and accounted for 43% of global organic retail sales in 2017. Organic land use data for California have been collected for a limited number of years by two government agencies, the United States Department of Agriculture and the California Department of Food and Agriculture. Farm-level acreage and location information are not publicly available from either source. Detailed crop acreage data would facilitate further investigation of key topics such as the spatial distribution of organic fields, which previously could be studied only at a very small geographic scale using other data sources.
In this context, California's unique Pesticide Use Report database serves as an alternative source of very detailed and long-term data, which allows the identification of individual organic fields based on their historical pesticide use records. The PUR database contains information on all commercial agricultural pesticide use in California since 1990, including the chemicals used, crops, and acreage for millions of individual applications. Pesticide use patterns for organic fields and their environmental impacts have not been studied previously. Existing studies often evaluate the environmental performance of organic agriculture as a system, rather than focusing on specific farming practices. To the best of my knowledge, no study has quantitatively described pesticide use in organic agriculture or assessed its environmental impacts on ecosystems at a large scale across numerous crops and over a long time period.

The trigger rate from thermal noise fluctuations changes drastically with threshold

Furthermore, if woody annual increments were considered, this proportion would be even lower. Likewise, the observed 1.7 Mg ha−1 in fruit represents ~14% of total biomass, which is within 10% of other studies in the region at similar vine densities. More importantly, this study reports the fraction of C that could be recovered from wine making and returned to the soil for potential long-term storage. However, this study is restricted to the agronomic and environmental conditions of the site, and the methodology would require validation and potential adjustment in other locations and conditions. Few studies have conducted a thorough evaluation of below ground vine biomass in vineyards, although Elderfield did estimate that fine roots contributed 20–30% of total NPP and that C was responsible for 45% of that dry matter. More recently, Brunori et al. studied the capability of grapevines to efficiently store C throughout the growing season and found that root systems contributed between 9 and 26% of the total vine C fixation in a model Vitis vinifera sativa L. cv Merlot/berlandieri rupestris vineyard. The results of our study provide a utilitarian analysis of C storage in mature wine grape vines, including above and below ground fractions and annual vs. perennial allocations. Such information constitutes the basic unit of measurement from which one can then estimate the contribution of wine grapes to C budgets at multiple scales (fruit, plant or vineyard level) and by region, sector, or in mixed crop analyses. Our study builds on earlier research that focused on the basic physiology, development and allocation of biomass in vines. Previous research has also examined vineyard carbon at the landscape level, with coarser estimates of the absolute C storage capacity of vines of different ages as well as the relative contribution of vines and woody biomass in natural vegetation in mixed vineyard-wild land landscapes.

The combination of findings from those studies, together with the more precise and complete carbon-by-vine-structure assessment provided here, means that managers now have access to methods and analytical tools that allow precise and detailed C estimates from the individual vine to whole-farm scales. As carbon accounting in vineyard landscapes becomes more sophisticated, widespread and economically relevant, such vineyard-level analyses will become increasingly important for informing management decisions. The greater vine-level measuring precision that this study affords should also translate into improved scaled-up C assessments. In California alone, for example, more than 230,000 ha are planted in vines. Given that for many, if not most, of those hectares the exact number of individual vines is known, it is easy to see how improvements in vine-level measuring accuracy can have benefits from the individual farmer to the entire sector. Previous efforts to develop rough allometric woody biomass equations for vines notwithstanding, there is still a need to improve the precision of estimates of how biomass changes with different parameters. Because the present analysis was conducted for 15-year-old Cabernet vines, there is now a need to calibrate how vine C varies with age, varietal and training system. There is also uncertainty around the influence of grafting onto root stock on C accumulation in vines. As mentioned in the methods, the vines in this study were not grafted, an artifact of the root-limiting duripan approximately 50 cm below the soil surface. The site's location on the flat valley bottom of a river floodplain also means that its topography, while typical of many vineyard sites, created conditions that limit soil depth, drainage and decomposition. As such, the physical conditions examined here may differ significantly from those in more hilly regions of California, such as Sonoma and Mendocino counties. Similarly, the lack of a surrounding natural vegetation buffer at this site compared to other vineyards may mean that the ecological conditions of the soil communities were not necessarily typical of those found in other vineyard sites.
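As an illustration of the kind of allometric calibration and vine-to-block scaling discussed here, the sketch below fits a generic power-law allometry to hypothetical calibration data and scales a per-vine estimate up to a block; the coefficients, carbon fraction, and planting density are placeholders for illustration, not values from this study.

import numpy as np

# Hypothetical calibration data: trunk diameter (cm) and woody dry biomass (kg)
# from destructively harvested vines. Values are illustrative only.
diam = np.array([4.5, 5.2, 6.0, 6.8, 7.5, 8.3])
biomass = np.array([3.1, 4.0, 5.4, 6.9, 8.2, 10.1])

# Fit the power-law allometry B = a * d^b by linear regression in log space.
b, log_a = np.polyfit(np.log(diam), np.log(biomass), 1)
a = np.exp(log_a)

def vine_carbon_kg(d_cm, carbon_fraction=0.45):
    # Stored carbon per vine; ~0.45-0.50 is a typical carbon fraction of woody tissue.
    return carbon_fraction * a * d_cm ** b

# Scale from vine to block: per-vine carbon times the (known) number of vines.
vines_per_ha = 2200      # hypothetical planting density
block_ha = 10.0
block_C_Mg = vines_per_ha * block_ha * vine_carbon_kg(6.5) / 1000.0
print(f"fitted a={a:.3f}, b={b:.2f}; block C ~ {block_C_Mg:.1f} Mg")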

Thus, to the extent that future studies can document the degree to which such parameters influence C accumulation in vines or across sites, they will improve the accuracy and utility of C estimation methods and enable viticulture to be among the first sectors in agriculture for which accurate C accounting is an industry-wide possibility. The current study was also designed to complement a growing body of research focusing on soil-vine interactions. Woody carbon reserves and sugar accumulation play a supportive role in grape quality, the main determinant of crop value in wine grapes. The extent to which biomass production, especially in below ground reservoirs, relates to soil carbon is of immediate interest for those focused on nutrient cycling, plant health and fruit production, as well as for those concerned with C storage. The soil-vine interface may also be the area where management techniques can have the highest impact on C stocks and harvest potential. We expect the below ground estimates of root biomass and C provided here will be helpful in this regard and for developing a more thorough understanding of below ground C stores at the landscape level. For example, Williams et al. estimated this component to be the largest reservoir of C in the vineyard landscape they examined, but they did not include root biomass in their calculations. Others have assumed root systems to be ~30% of vine biomass based on the reported biomass values for roots, trunk, and cordons. With the contribution of this study, the magnitude of the below ground reservoir can now be updated.

Wine is a commodity of worldwide importance, and vineyards constitute a significant land use and contribution to economies across the Mediterranean biome and beyond. Like orchards and tree plantations, grapevines are a perennial crop that stores C long-term in woody tissue, thereby helping to mitigate GHG emissions. Our study provides estimates of C in grape vines by vine component, as well as a simple measurement tool kit that growers can use to estimate the C in their vines and vineyard blocks. The equations presented here represent some of the first allometric models for estimating grapevine C from berries to blocks, with the hope that widespread use and refinement of these techniques may lead to recognition and credit for the C storage potential of vineyards and other perennial woody crops, such as orchards. The successful implementation of these methods, if applied widely to multiple cropping systems, could improve the precision of measurement and the understanding of C in agricultural systems relative to other human activities.

Ultra-high-energy neutrino astronomy expands the opportunity to learn more about the most energetic processes of astronomical objects. Neutrinos are ideal messengers because they have negligible mass, are neutral in charge, and, because they interact only through the weak force, have a low interaction probability. These properties allow them, once created, to travel through space unhindered by intervening matter or radiation such as dust, gas, and electromagnetic fields. The same properties also make them challenging to detect. Even at the extreme energies relevant to radio neutrino detectors, neutrinos rarely interact with matter. Combined with the low expected fluxes and the stringent experimental upper limits published by the IceCube Collaboration, this means that the detector architecture must incorporate large volumes of target material.

A rough estimate suggests that instrumented volumes must reach of order one teraton to observe a few neutrinos per year for commonly discussed theoretical models of neutrino production. Radio-based neutrino experiments have been successfully explored in the past with pilot arrays such as the ARA experiment and the ARIANNA experiment, the latter being the focal point of this paper. These efforts helped refine the radio techniques required to operate in extremely cold and harsh conditions. While these experiments showed the technical feasibility, they were too small to measure the low neutrino flux. Undeterred, several radio-based experiments in development are further illustrating the capabilities of this detection method, such as ARIANNA-200, the radio component of IceCube-Gen2, the Radio Neutrino Observatory in Greenland, the Giant Radio Array for Neutrino Detection, the Taiwan Astroparticle Radiowave Observatory for Geo-synchrotron Emissions, and the Payload for Ultrahigh Energy Observations, a successor to ANITA. These experiments exploit various target materials such as ice, water, mountains, and air. The challenge for experimenters is to reach the teraton detection volumes at a reasonable cost. One of the most promising methods for observing UHE neutrinos in large target volumes exploits radio detection in ice. For this reason, locations such as Greenland and Antarctica are popular sites for radio detection experiments. Ice is transparent to radio signals, with field attenuation lengths ranging from 0.5 km at Moore's Bay to more than a kilometer in the colder ice found at the South Pole or the Greenland ice sheet. Radio pulses are created via the Askaryan effect when interacting neutrinos create particle showers in ice, which in turn generate a time-varying negative charge excess that produces radio emission in the 50 MHz to 1 GHz range. The radio technique enables cost-efficient instrumentation for monitoring large detection volumes. However, because of the low flux of UHE neutrinos, event rates are still small even for the large array of hundreds of radio detector stations that is foreseen for the next-generation neutrino observatory at the South Pole, IceCube-Gen2. Thus, improving the sensitivity of the detector is one of the primary objectives. The easiest way to increase the sensitivity, but also the most expensive, is to build more radio detector stations. A more efficient way is to increase the sensitivity of each radio detector station, and much work has been directed toward this goal. The sensitivity can be increased by simply lowering the trigger threshold, which records additional neutrino interactions that produce smaller signal strengths in the radio detector. The problem is that the trigger thresholds are already set close to the thermal noise floor, such that the trigger rate is dominated by unavoidable thermal noise fluctuations. For example, the trigger rate of an amplitude threshold trigger with a two-out-of-four antenna coincidence logic increases by about six orders of magnitude if the trigger threshold is lowered from four times the RMS noise to three times the RMS noise. Therefore, the trigger threshold is limited by the maximum data rate a radio detector can handle, which is typically on the order of 1 Hz if a high-speed communication link exists. If the communication relies on Iridium satellite communication, the maximum data rate is limited to 0.3 mHz.
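To see why the trigger rate depends so steeply on the threshold, the sketch below evaluates the rate at which Gaussian noise samples cross an amplitude threshold. It assumes independent, white Gaussian samples and a simple one-sided threshold, so it illustrates only the scaling; the actual ARIANNA trigger logic, band-limited noise, and the two-out-of-four coincidence requirement set the quoted absolute rates.

import math

def single_channel_rate(threshold_sigma, sample_rate_hz=1e9):
    # Rate (Hz) at which independent Gaussian samples exceed +threshold_sigma,
    # for an assumed effective sample rate (illustrative only).
    p = 0.5 * math.erfc(threshold_sigma / math.sqrt(2.0))
    return p * sample_rate_hz

for t in (3.0, 3.5, 4.0):
    print(f"{t:.1f} sigma: ~{single_channel_rate(t):.3g} Hz per channel")

# A multi-antenna coincidence requirement suppresses these accidental rates roughly
# as a power of the single-channel rate (for two channels, R ~ 2 * r1 * r2 * tau
# for a coincidence window tau), but the steep threshold dependence remains.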
However, if thermal noise fluctuations are identified and rejected in real time, the trigger thresholds can be lowered while maintaining the same data rate, thus increasing the sensitivity of the detector. The sensitivity can be improved by up to a factor of two with the intelligent trigger system presented here. In this paper, it is demonstrated that deep learning can be used to reject thermal noise in real time by implementing these techniques in the current ARIANNA data acquisition system. Deep learning, a modern rebranding of neural networks, has been shown to outperform other methods in a variety of scientific and engineering areas, including physics. The significant amount of data that needs to be classified in real time with low latency in high-energy physics experiments makes deep learning an ideal tool. By rejecting thermal events, the trigger rate can be increased dramatically while maintaining the required low rate of event transmission over the communication links from the remotely located ARIANNA stations. Overall, lower thresholds increase the effective volume of ice observed by each station, which is proportional to the sensitivity of the detector. This paper is organized as follows. Additional details on the ARIANNA detector are provided, along with the expected gain in sensitivity for this study. Next, the trade-off between network efficiency and processing time is assessed to find the optimal deep learning models for a representative sample of microprocessor platforms. The deep learning method is then compared to a template matching study to determine how well the more common approach performs. Then the current ARIANNA data acquisition system is evaluated to determine its suitability for a deep learning filter. Moreover, the specific predictions for the optimal deep learning model are experimentally verified on the current microprocessor hardware. Lastly, the deep learning filter is tested on measured cosmic rays to verify that they are classified similarly to neutrino signals and not rejected as thermal noise.
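As an illustration of the kind of compact classifier such a filter could use, the sketch below builds a small one-dimensional convolutional network that maps a multi-channel waveform to a probability of being signal-like. The framework, layer sizes, and input shape are assumptions for illustration and do not describe the network actually deployed on the ARIANNA hardware.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed event format: 4 antenna channels of 256 samples each.
n_samples, n_channels = 256, 4

model = keras.Sequential([
    layers.Input(shape=(n_samples, n_channels)),
    layers.Conv1D(8, kernel_size=10, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(16, kernel_size=10, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # output: P(signal-like)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training would use simulated Askaryan signals (label 1) and recorded thermal
# noise (label 0); random arrays stand in for real data here.
x = np.random.randn(128, n_samples, n_channels).astype("float32")
y = np.random.randint(0, 2, size=(128, 1))
model.fit(x, y, epochs=1, batch_size=32, verbose=0)

# At run time, events with P(signal-like) below a chosen cut are discarded, so the
# surviving thermal-noise rate fits within the available communication bandwidth.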

TSS concentrations are also affected by light and the vine water status

Light is generally not a factor because there is usually a large enough leaf area and sufficient light to saturate this source-to-sink relationship. Sun-exposed Cabernet Sauvignon berries in the vineyard had higher TSS than shaded berries. This sunlight effect was attributed largely to an increase in berry temperature rather than an increase in the fluence rate per se. A higher grapevine water status results in larger berry size and lower sugar concentrations, and water deficit is known to increase sugar concentrations in Cabernet Sauvignon. However, temperature is thought to have the largest effect on sugar concentration. Other transcriptomic data in the present study indicated that BOD berries were more mature at a lower sugar level than RNO berries. These included the transcript abundance profiles of genes involved in autophagy, auxin and ABA signaling, iron homeostasis and seed development. Many of these DEGs had an accelerated rate of change in BOD berries. While these transcripts are in the skins, they may be influenced by signals coming from the seed. In addition, there was a higher transcript abundance for most genes involved with the circadian clock in BOD berries. PHYB can regulate the circadian clock, and PHYB activity is very sensitive to night temperatures; reversion of PHYB to the inactive form is accelerated at warmer temperatures. The inactivity of phytochrome promotes the expression of RVE1, which promotes auxin concentrations and seed dormancy. Thus, all things considered, it is likely that temperature and/or the temperature differentials between day and night significantly contributed to the differences in the rate of berry development and sugar accumulation at the two locations.

Determining the maturity of grapes is a difficult and error-prone process. Reliable markers could aid in the decision of when to harvest the grapes. "Optimum" maturity is a judgement call and will ultimately depend on the winemaker's or grower's specific goals or preferences. A combination of empirical factors can be utilized, including °Brix, total acidity, berry tasting in the mouth for aroma and tannins, seed color, etc. °Brix or total soluble solids by itself may not be the best marker for berry ripening, as it appears to be uncoupled from berry maturity by temperature. Phenylpropanoid metabolism, including anthocyanin metabolism, is also highly sensitive to both abiotic and biotic stresses and may not be a good indicator of full maturity. Thus, color may not be a good indicator either. Specific developmental signals from the seed or embryo, such as those involved with auxin and ABA signaling, may provide more reliable markers for berry ripening in diverse environments, but will not be useful in seedless grapes. Aromatic compounds may also be reliable markers, but they will need to be generic, developmental markers that are not influenced by the environment. This study revealed many genes that are not reliable markers because they were expressed differently in different environments. One candidate marker that is noteworthy is ATG18G. Its transcript abundance increased and was relatively linear with increasing °Brix, and these trends were offset at the two locations relative to their level of putative fruit maturity. ATG18G is required for the autophagy process and may be important during the fruit ripening phase. It was found to be a hub gene in a gene subnetwork associated with fruit ripening and chloroplast degradation. Further testing will be required to know if it is essential for fruit ripening and whether its transcript abundance is influenced by abiotic and biotic stresses in grape berry skins.

The ultimate function of a fruit is to produce fully mature seeds in order to reproduce another generation of plants. The ripe berry exhibits multiple traits that signal to other organisms when the fruit is ready for consumption and seed dispersal.

In this study, we show that there were large differences in transcript abundance in grape skins at two different locations with different environments, confirming our original hypothesis. We also identified a set of DEGs with common profiles in the two locations. The observations made in this study provide lists of such genes and generated a large number of hypotheses to be tested in the future. WGCNA was particularly powerful and enhanced our analyses. Transcript abundance during the late stages of berry ripening was very dynamic and may respond to many of the environmental and developmental factors identified in this study. Functional analysis of the genes and GO enrichment analysis were very useful tools to elucidate these factors. Some of the factors identified were temperature, moisture, light and biotic stress. The results of this study indicated that berries still have a "sense of place" during the late stages of berry ripening. Future studies are required to follow up on these observations. It appears that fruit ripening is very malleable. Manipulation of the canopy may offer a powerful lever to adjust gene expression and berry composition, since these parameters are strongly affected by light and temperature.

Doorbells have played an important role in protecting the security of modern homes since they were invented. A doorbell allows visitors to announce their presence and request entry into a building, and enables the occupant to verify the identity of the guests to help prevent home robbery or invasion at a moment's notice. There are two types of doorbells depending on the requirement of wall wiring: wired doorbells and wireless doorbells. The former requires a wire to connect both the front door button and the back door button to a transformer, while the latter transfers the signal wirelessly using telephone technology. Modern buildings are typically equipped with wireless doorbell systems that employ radio technology to signal doorbells and answer the doors remotely.

Although these doorbells are much more convenient than wired ones, they do not always satisfy the demands of modern homes, for the following three reasons. First, the answering machine is normally located at a fixed place; if an occupant wants to answer the doorbell, he/she has to go to the answering machine. Second, if the occupant would like to see the visitors outside, he/she has to go to the door. Third, the occupant has no way to answer or admit guests when he/she is not at home, nor to keep a record of guests. As smart home technology matures, smart doorbells can largely solve this problem by connecting the doorbell to the Internet and allowing users to answer the bell through a smart device such as a smartphone or tablet. This enables a home owner to answer and admit a visitor anywhere a smart device connected to the Internet is available. However, such smart doorbells are quite expensive due to technical and manufacturing difficulties. The high prices make these products unavailable to most home users with limited budgets, hindering the pervasiveness of smart doorbells. This is confirmed by research showing that less than 4% of U.S. households have a smart doorbell system to protect the security of their homes. To solve this problem, we introduce the Dashbell, a low-cost smart doorbell system for home use. The doorbell system uses a cheap, WiFi-enabled device, the Amazon Dash Button, to serve as the doorbell, and connects it to the Internet, allowing users to answer the bell anywhere using a smart device such as a smartphone or a tablet. With such a solution, users may purchase a smart doorbell system at a price as low as 40 US dollars, which significantly increases the affordability of smart doorbells.

A smart doorbell is an integral part of a smart home, which helps protect the security of the home by preventing unwanted access such as robbery and invasion. The controller of the smart home can potentially answer the bell and decide whether or not to admit a visitor outside the door through adaptive learning and other technologies. Because of the important role that smart doorbells play in building a smart home, many techniques and methodologies have been invented during the past few years. The existing smart doorbells provide an integrated solution, which means that the working mechanisms and implementation details are hidden and unknown to the users. If there is a failure, users have to seek help from professionals for repairs or maintenance. It is also very likely that users need to replace the whole smart doorbell due to the failure of a component in the system.

The Amazon Dash Button is a WiFi-enabled device that allows consumers to reorder frequently used daily products like trash bags, toilet towels or refill blades by pressing a button. A Dash Button can be purchased online and costs 4.99 US dollars. Recently, it has been found that the Dash Button can be tweaked to track baby habits. We employ this feature of the Dash Button and use it as the doorbell of the Dashbell system. Alternatives to the Dash Button include portable doorbell kits and wireless door chime triggers with motion sensors, both of which can be purchased for less than 5 US dollars.

Modern homes are typically equipped with Wi-Fi routers and have access to the Internet. Smartphones are also highly available to the majority of the population. To build a budget smart doorbell system like Dashbell, the user only needs to purchase an Amazon Dash Button, a Raspberry Pi, a webcam, and a buzzer.
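The detection mechanism is not spelled out here, but one common way to turn a Dash Button into a trigger, consistent with the components listed above, is to have the Raspberry Pi listen for the ARP probe the button broadcasts when it wakes and joins the WiFi network after a press. The sketch below shows that approach; the MAC address and handler are placeholders, not necessarily the authors' implementation.

# Assumed detection scheme: sniff ARP traffic on the home network and react when
# the Dash Button (identified by its MAC address) announces itself after a press.
# Requires scapy and root privileges on the Raspberry Pi.
from scapy.all import ARP, sniff

DASH_MAC = "ac:63:be:00:00:00"   # placeholder: the Dash Button's MAC address

def handle_doorbell_press():
    # Placeholder actions: drive the buzzer GPIO, capture a webcam frame,
    # upload it, and push a notification to the owner's phone.
    print("Doorbell pressed")

def on_packet(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 1 and pkt[ARP].hwsrc == DASH_MAC:
        handle_doorbell_press()

sniff(prn=on_packet, filter="arp", store=0)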

The user can sign up for free Amazon Web Services. The total cost of a Dashbell system is less than 40 US dollars. The Dashbell system differentiates itself from existing smart doorbell systems in the following aspects. First, Dashbell is much cheaper than existing smart doorbells. Second, Dashbell is a distributed system rather than an integrated one, which enables faster fault detection and diagnosis. For instance, if some of the components fail to operate, one can identify and fix or replace the faulty parts by checking each individual device instead of disassembling or replacing the whole doorbell system. Third, given that most smart doorbells are expensive devices, they can potentially be detached and stolen. With the Dashbell, however, only the Dash Button, which is inexpensive and replaceable, needs to be placed outside the home, making it a much better alternative in terms of the device's own security. Lastly, unlike existing smart doorbells, which are only sold in limited places and through particular channels, the components of the Dashbell system are highly available. While the Dashbell provides several useful features and enhanced security over a conventional doorbell, there are a few security and privacy issues associated with it. Since the device is connected via a home WiFi network, it is possible to compromise the network and use the device, grant access to unauthorized visitors, or collect data using it without the owner's consent. We advise users to keep their network secured with a password. Our system also takes pictures of visitors without their consent and stores them on the server. Since this is personally identifiable information, we have made sure the server and all communication channels are secure. Using secure communication channels and encrypting user data while storing it on the server would be helpful in this regard. The mobile application also has an additional layer of security so that only the owner can grant access to a visitor.

The ability of a genotype to produce different phenotypes as a function of environmental cues is known as phenotypic plasticity. Phenotypic plasticity is considered one of the main processes by which plants, as sessile organisms, can face and adapt to the spatio-temporal variation of environmental factors. Grapevine berries are characterized by high phenotypic plasticity, and a genotype can present variability within berries, among berries in a cluster, and among vines. Berry phenotypic traits, such as the content of sugars, acids, phenolics, anthocyanins, and flavor compounds, are the result of cultivar and environmental influences, and often strong G × E interactions. Although grapevine plasticity in response to environmental conditions and viticulture practices may provide advantages related to the adaptation of a cultivar to specific growing conditions, it may also cause irregular ripening and large inter-seasonal fluctuations, which are undesirable characteristics for wine making. Due to its complex nature, the study of phenotypic plasticity is challenging, and the mechanisms by which the genes affecting plastic responses operate are poorly characterized. In fact, it is often difficult to assess the performance of different phenotypes in different environments. It has been suggested that genetic and epigenetic regulation of gene expression might be at the basis of phenotypic plasticity through the activation of alternative gene pathways or multiple genes.

Limited functionality makes End Devices the least power-hungry network nodes

ZigBee End Devices are not fully functional devices; they send information only to their parent node, which can be either a Router or a Coordinator. Generally, End Devices form the bulk of sensor nodes in a network structure. In order to facilitate long battery life, ZigBee protocols allow for an easily programmable sleep mode for End Devices, periodically limiting computation and thus power consumption. Coordinator and Router nodes continually transmit beacon signals to alert child nodes of their presence. Upon waking, End Devices wait for a beacon signal and relay information before continuing their sleep cycle.

There are two types of UART schemes available for XBee modules: AT and API mode. AT mode is synonymous with transparent mode; each module has a single network destination address and personal address. Configuration of network parameters must be done through command mode, either by the user or by a microcontroller. AT mode is useful for simple point-to-point communication or non-modular network topology. API mode is recommended for larger networks with more complicated overall topologies. Messages contain a personal and a destination address, which can be changed at runtime depending on message type, making mesh topology simple to implement.
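To make the AT/API distinction concrete, the sketch below assembles a ZigBee Transmit Request (API frame type 0x10) following the publicly documented XBee API envelope of start delimiter, length, frame data, and checksum; the destination addresses and payload are placeholders.

def xbee_api_frame(frame_data: bytes) -> bytes:
    # Wrap frame data in the XBee API envelope: 0x7E start delimiter, 16-bit
    # length, frame data, and a checksum of 0xFF minus the low byte of the sum.
    length = len(frame_data).to_bytes(2, "big")
    checksum = (0xFF - (sum(frame_data) & 0xFF)) & 0xFF
    return b"\x7e" + length + frame_data + bytes([checksum])

def transmit_request(dest64: bytes, dest16: bytes, rf_data: bytes,
                     frame_id: int = 1) -> bytes:
    # Build a ZigBee Transmit Request (frame type 0x10): frame ID, 64-bit and
    # 16-bit destination addresses, broadcast radius, options, then the payload.
    frame_data = (bytes([0x10, frame_id]) + dest64 + dest16 +
                  bytes([0x00, 0x00]) + rf_data)
    return xbee_api_frame(frame_data)

# Placeholder addresses: a 64-bit destination and the 16-bit network address
# (0xFFFE when unknown). The resulting bytes would be written to the UART.
frame = transmit_request(bytes.fromhex("0013A20040000000"),
                         bytes.fromhex("FFFE"), b"hello")
print(frame.hex())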

Additionally, API mode contains packet delivery confirmation messages and has the option of escape parameters.

Grapevine berry ripening can be divided into three major stages. In Stage 1, berry size increases sigmoidally. Stage 2 is known as a lag phase where there is no increase in berry size. Stage 3 is considered the ripening stage. Veraison is at the beginning of the ripening stage and is characterized by the initiation of color development, softening of the berry, and rapid accumulation of the hexoses glucose and fructose. Berry growth is sigmoidal in Stage 3 and the berries double in size. Many of the flavor compounds and volatile aromas are derived from the skin and synthesized at the end of this stage. Many grape flavor compounds are produced as glycosylated, cysteinylated and glutathionylated precursors and phenolics, and many of the precursors of the flavor compounds are converted to various flavors by yeast during the fermentation process of wine. Nevertheless, there are distinct fruit flavors and aromas that are produced and can be tasted in the fruit, many of which are derived from terpenoids, fatty acids and amino acids. Terpenes are important compounds for distinguishing cultivar fruit characteristics. There are 69 putatively functional genes, 20 partial genes, and 63 pseudogenes in the terpene synthase family identified in the Pinot Noir reference genome. Terpene synthases are multi-functional enzymes using multiple substrates and producing multiple products.

More than half of the putatively functional terpene synthases in the Pinot Noir reference genome have been functionally annotated experimentally, and distinct differences have been found in some of these enzymes amongst three grape varieties: Pinot Noir, Cabernet Sauvignon and Gewürztraminer. Other aromatic compounds also contribute significant cultivar characteristics. C13-norisoprenoids are flavor compounds derived from carotenoids by the action of carotenoid cleavage dioxygenase enzymes. Cabernet Sauvignon, Sauvignon Blanc and Cabernet Franc are characterized by specific volatile thiols and methoxypyrazines. Enzymes involved in the production of these aromas have recently been characterized. Phenolic compounds play a central role in the physical mouthfeel properties of red wine; recent work relates quality to tannin levels. While the grape genotype has a tremendous impact on tannin content, the environment also plays a very large role in grape composition. The pathway for phenolic biosynthesis is well known, but the mechanisms of environmental influence are poorly understood. Ultimately, there is an interaction between molecular genetics and the environment. Flavor is influenced by climate, topography and viticultural practices. For example, water deficit alters the gene expression of enzymes involved in aroma biosynthesis in grapes, which is genotype dependent, and may lead to increased levels of compounds, such as terpenes and hexyl acetate, that contribute to fruity volatile aromas. The grapevine berry can be subdivided into the skin, pulp and seeds. The skin includes the outer epidermis and inner hypodermis. A thick waxy cuticle covers the epidermis.

The hypodermal cells contain chloroplasts, which lose their chlorophyll at veraison and become modified plastids; they are the sites of terpenoid biosynthesis and carotenoid catabolism. Anthocyanins and tannins accumulate in the vacuoles of hypodermal cells. Pulp cells are the main contributors to the sugar and organic acid content of the berries. Pulp cells also have a much higher set of transcripts involved in carbohydrate metabolism, but a lower set of transcripts involved in lipid, amino acid, vitamin, nitrogen and sulfur metabolism than the skins. Hormones can influence berry development and ripening. Concentrations of auxin, cytokinins and gibberellins tend to increase in early fruit development during the first stage. At veraison, these hormone concentrations have declined, concomitant with a peak in abscisic acid concentration just before veraison. Auxin prolongs the Stage 2 lag phase and inhibits anthocyanin biosynthesis and color development in Stage 3. Grapevine, a non-climacteric fruit, is not very sensitive to ethylene; however, ethylene appears to be necessary for normal fruit ripening. Ethylene concentration is highest at anthesis but declines to low levels upon fruit set; ethylene concentrations rise slightly thereafter and peak just before veraison, then decline to low levels by maturity. Ethylene also plays a role in the ripening of another non-climacteric fruit, strawberry. ABA also appears to be important in grape berry ripening during veraison, when ABA concentrations increase, resulting in increased expression of anthocyanin biosynthetic genes and anthocyanin accumulation in the skin. ABA induces ABF2, a transcription factor that affects berry ripening by stimulating berry softening and phenylpropanoid accumulation. In addition, ABA affects sugar accumulation in ripening berries by stimulating acid invertase activity and the induction of sugar transporters. It is not clear whether ABA directly affects flavor volatiles, but there could be indirect effects due to competition for common precursors in the carotenoid pathway. Many grape berry ripening studies have focused on targeted sampling over a broad range of berry development stages, but generally with an emphasis around veraison, when berry ripening is considered to begin. In this study, a narrower focus is taken on the late ripening stages, when many berry flavors are known to develop in the skin. We show that the abundance of transcripts involved in ethylene signaling is increased along with those associated with terpenoid and fatty acid metabolism, particularly in the skin.

Cabernet Sauvignon clusters were harvested in 2008 from a commercial vineyard in Paso Robles, California at various times after veraison, with a focus on targeting °Brix levels near maturity. Dates and metabolic details that establish the developmental state of the berries at each harvest are presented in Additional file 1. Berries advanced by harvest date with the typical developmental changes for Cabernet Sauvignon: decreases in titratable acidity and 2-isobutyl-3-methoxypyrazine concentrations and increases in sugar and color. Transcriptomic analysis focused on four harvest dates having average cluster °Brix levels of 22.6, 23.2, 25.0 and 36.7. Wines made in an earlier study from grapes harvested at levels of sugars or total soluble solids comparable to those in the present study showed clear sensory differences.
Six biological replicates, comprising two clusters each, were separated into skins and pulp in preparation for RNA extraction and transcriptomic analysis using the NimbleGen Grape Whole-Genome Microarray. Thus, a 4 × 2 factorial experimental design was established. A note of caution must be added here. There are high similarities amongst members of certain Vitis gene families, making it very likely that cross-hybridization can occur with probes on the microarray with high similarity to other genes.

We estimate that approximately 13,000 genes have the potential for cross-hybridization, with at least one probe of a set of four unique probes for that gene on the microarray potentially cross-hybridizing with probes for another gene on the microarray. Genes with the potential for cross-hybridization have been identified and are highlighted in light red in Additional file 2. The rationale to include them is that although individual genes cannot be uniquely separated, the probe sets can identify a gene and its highly similar gene family members, thus providing some useful information about the biological responses of the plant. An additional approach was taken, removing cross-hybridizing probes before quantitative data analysis. Many of the significant genes were unaffected by this processing, but 3600 genes were completely removed from the analysis. Thus, it was felt that valuable information was lost using such a stringent approach. The less stringent approach, allowing for analysis of genes with potential cross-hybridization, was used in the rest of the analyses here. To assess the main processes affected by these treatments, the gene ontologies of significantly affected transcripts were analyzed for statistical significance using BinGO. Based on transcripts that had significant changes in abundance with °Brix level, 230 biological processes were significantly over represented in this group. The three top over represented processes were response to abiotic stress, biosynthetic process, and response to chemical stimulus, a rather generic set of categories. Tissue differences were more revealing at the stage when flavors peak; 4865 transcripts that were significantly higher in skins compared to pulp at 23.2 °Brix were tested for over represented GO functional categories. Some of the top GO categories included photosynthesis, isoprenoid biosynthesis, and pigment biosynthesis. Some of the transcripts with the largest differences between skin and pulp at 23.2 °Brix are β-ketoacyl-CoA synthase, taxane 10-β-hydroxylase, wax synthase, a lipase, an ABC transporter, and phenylalanine ammonia-lyase. The abundance of 5716 transcripts was significantly higher in pulp than skin at 23.2 °Brix. Some of the top GO categories over represented were a variety of transport processes and small GTPase-mediated signal transduction. Some of the transcripts with the largest differences in abundance, with pulp greater than skin at 23.2 °Brix, were polygalacturonase, flavonol synthase, stachyose synthase, an amino acid transporter, a potassium channel, and HRE2. The transcript abundance of 2053 genes differed significantly across °Brix levels and tissues. The top GO categories over represented in this set involved photosynthesis and phenylpropanoid metabolism, both associated with the berry skin. Other flavor-centric categories among the 57 over represented categories include aromatic compound biosynthesis, fatty acid metabolism and alcohol catabolism. This transcript set was further analyzed by dividing it into 10 clusters using k-means clustering. The over represented GO categories were determined for each cluster. Eight of the 10 clusters had distinct over represented GO categories; two clusters did not have any over represented GO categories, meaning that the genes in these two clusters were assigned to GO categories in the proportions expected when compared to the entire NimbleGen array. Clusters 1, 8, 9 and 10 had a large number of over represented categories.
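As a sketch of the clustering step described above, the snippet below applies scikit-learn k-means to a matrix of per-gene abundance profiles; the array shapes, standardization, and random data are stand-ins for the study's actual expression values and pipeline.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: one row per differentially expressed gene, one column per
# tissue x degrees-Brix condition (e.g. mean log2 abundance per condition).
profiles = np.random.rand(2053, 8)

# Standardize each gene's profile so clusters group by profile shape, not level.
z = (profiles - profiles.mean(axis=1, keepdims=True)) / profiles.std(axis=1, keepdims=True)

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(z)
labels = km.labels_   # cluster assignment for each gene
for k in range(10):
    print(f"cluster {k}: {np.sum(labels == k)} genes")
# Each cluster would then be tested separately for over represented GO categories.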
Many GO categories within a cluster are subsets of others in that cluster and were grouped together. For example, Cluster 4 had four over represented GO categories: oxygen transport, gas transport, heat acclimation and response to heat. The four categories could be grouped into two, as two are subsets of the others; this is how they are listed in Table 1.

To properly annotate the AP2/ERF super family of Vitis vinifera according to the IGGP Supernomenclature committee instructions, a phylogenetic tree was generated for the AP2/ERF super family of Arabidopsis thaliana and Vitis vinifera using the TAIR 10 and V1 gene models, respectively. The labeled family classifications were derived from the Arabidopsis naming scheme of Nakano et al. There are 130 members in the Vitis AP2/ERF super family in the Pinot Noir reference genome. However, the six paralogs of ERF6 discussed above belong to a Vitis vinifera clade in subfamily IX and are distinctly different or separate from any Arabidopsis subfamily IX ERF TFs. All of the TFs in this clade are orthologs of AtERF6. VviERF6L1 [UniProt: F6I2N8; VIT_16s0013g00900] had one of the most interesting profiles of the 12 members of this clade because its transcript abundance peaked at 23.2 °Brix. Using k-means clustering, VviERF6L1 fell within Cluster 8 with 369 transcripts, including five additional VviERF6 paralogs. The top GO categories associated with Cluster 8 were terpenoid metabolism and pigment biosynthesis. Other interesting flavor-associated categories included fatty acid and alcohol metabolism. Representative transcripts from Cluster 8 that were correlated with the transcript abundance profile of VviERF6L1 can be seen in Figure 4.

It is hard to exaggerate the significance of this problem to the study of moiré super lattices

As shown in Figs. 5.14 and 5.15, more subtle features of the transport curve can also be associated with the reversal of domains that do not bridge contacts. In the absence of significant magnetic disorder, ferromagnetic domain walls minimize surface tension. In two dimensions, domain walls are pinned geometrically in devices of finite size with convex internal geometry. As discussed in Fig. 5.15, we observe pinning of domain walls at positions that do not correspond to minimal-length internal chords of our device geometry, suggesting that magnetic order couples to structural disorder directly. This is corroborated by the fact that the observed domain reversals associated with the Barkhausen jumps are consistent over repeated thermal cycles between cryogenic and room temperature. Together, these findings suggest a close analogy to polycrystalline spin ferromagnets, which host ferromagnetic domain walls that are strongly pinned to crystalline grain boundaries; indeed, these crystalline grains are responsible for Barkhausen noise as it was originally described. Although crystalline defects on the atomic scale are unlikely in tBLG thanks to the high quality of the constituent graphene and hBN layers, the thermodynamic instability of magic angle twisted bilayer graphene makes it highly susceptible to inhomogeneity at scales larger than the moiré period, as shown in prior spatially resolved studies. For example, the twist angle between the layers as well as their registry to the underlying hBN substrate may all vary spatially, providing potential pinning sites.

Moiré disorder may thus be analogous to crystalline disorder in conventional ferromagnets, which gives rise to Barkhausen noise as it was originally described. A subtler issue raised by our data is the density dependence of magnetic pinning; as shown in Fig. 5.3, Bc does not simply track 1/m across the entire density range, in particular failing to collapse with the rise in m in the Chern magnet gap. This suggests a nontrivial dependence of either the pinning potential or the magnetocrystalline anisotropy energy on the realized many-body state. Understanding the pinning dynamics is critical for stabilizing magnetism in tBLG and the growing class of related orbital magnets, which includes both moiré systems and more traditional crystalline systems such as rhombohedral graphite. In order to understand the microscopic mechanism behind magnetic grain boundaries in the Chern magnet phase in tBLG/hBN, we used nanoSQUID magnetometry to map the local moiré super lattice unit cell area, and thus the local twist angle, in this device, using techniques discussed in the literature. This technique involves applying a large magnetic field to the tBLG/hBN device and then using the chiral edge state magnetization of the Landau levels produced by the gap between the moiré band and the dispersive bands to extract the electron density at which full filling of the moiré super lattice band occurs. The strength of this Landau level's magnetization can be mapped in real space, and the density at which maximum magnetization occurs can be processed into a local twist angle as a function of position. It has been noted that the moiré super lattice twist angle distribution in tBLG is characterized by slow, long-length-scale variations interspersed with thin wrinkles, across which the local twist angle changes rapidly. These are also present in the sample imaged here.
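For reference, the conversion from a measured full-filling density to a local twist angle follows from the geometry of the moiré super lattice. For a homobilayer with lattice constant $a$ and small twist angle $\theta$, a standard relation (stated here as background, not as the exact analysis pipeline used for these maps) is

\[
A_{\mathrm{cell}} \simeq \frac{\sqrt{3}\,a^{2}}{8\sin^{2}(\theta/2)} \approx \frac{\sqrt{3}\,a^{2}}{2\,\theta^{2}},
\qquad
n_{s} = \frac{4}{A_{\mathrm{cell}}} \approx \frac{8\,\theta^{2}}{\sqrt{3}\,a^{2}},
\]

where $n_{s}$ is the electron density at full filling of the fourfold-degenerate moiré band; inverting the second relation converts the measured $n_{s}(\mathbf{r})$ into a local twist angle map $\theta(\mathbf{r})$.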

The magnetic grain boundaries we extracted by observing the domain dynamics of the Chern magnet appear to correspond to a subset of these moiré super lattice wrinkles. It may thus be the case that these wrinkles serve a function in moiré super lattice magnetism analogous to that of crystalline grain boundaries in more traditional transition metal magnets, pinning magnetic domain walls to structural disorder and producing Barkhausen noise in measurements of macroscopic properties.

In tBLG, a set of moiré subbands is created through rotational misalignment of a pair of identical graphene monolayers. In twisted monolayer-bilayer graphene, a set of moiré subbands is created through rotational misalignment of a graphene monolayer and a graphene bilayer. These systems both support Chern magnets. Both systems are also members of a class of moiré super lattices known as homobilayers; in these systems, the 2D crystals forming the moiré super lattice share the same lattice constant, and the moiré super lattice appears as a result of rotational misalignment, as illustrated in Fig. 5.17A. Homobilayers have many desirable properties; the most important one is that the twist angle can easily be used as a variational parameter for minimizing the bandwidth of the moiré subbands, producing the so-called 'magic angle' tBLG and tMBG systems. Homobilayers do, however, have some undesirable properties. Although local variations in electron density are negligible in these devices, the local filling factor of the moiré super lattice varies with the moiré unit cell area, and thus with the relative twist angle. The tBLG moiré super lattice is shown for two different twist angles across the magic angle regime in Fig. 5.17B-C; it is clear that the unit cell area couples strongly to twist angle in this regime, illustrating the sensitivity of these devices to twist angle disorder. The relative twist angle of the two crystals in moiré super lattice devices is never uniform. Imaging studies have clearly shown that local twist angle variations provide the dominant source of disorder in tBLG.
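To make the homobilayer/heterobilayer contrast concrete, the short sketch below evaluates the standard moiré-period expression for two triangular lattices with fractional lattice mismatch delta and relative twist theta; the lattice constants and the roughly 7% MoTe2/WSe2 mismatch used here are approximate values assumed for illustration.

import numpy as np

def moire_period(a, theta_rad, delta=0.0):
    # Moire wavelength for two triangular lattices with lattice constant a,
    # fractional lattice mismatch delta, and relative twist theta (radians).
    return (1 + delta) * a / np.sqrt(2 * (1 + delta) * (1 - np.cos(theta_rad)) + delta**2)

def cell_area(a, theta_deg, delta=0.0):
    lam = moire_period(a, np.radians(theta_deg), delta)
    return np.sqrt(3) / 2 * lam**2   # triangular moire unit cell area

# Homobilayer (tBLG): a = 0.246 nm, delta = 0.
# Heterobilayer (MoTe2/WSe2): a ~ 0.33 nm, delta ~ 0.07 (approximate).
for theta in (1.0, 1.1, 1.2):
    print(theta,
          round(cell_area(0.246, theta, 0.0), 1),   # nm^2, tBLG
          round(cell_area(0.33, theta, 0.07), 1))   # nm^2, MoTe2/WSe2

# Over this 0.2 degree spread the tBLG cell area changes by tens of percent,
# while the heterobilayer cell area is nearly constant.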

Phenomena discovered in tBLG devices are notoriously difficult to replicate. Orbital magnetism at B = 0 has only been realized in a handful of tBLG devices, and quantization of the anomalous Hall resistance has only been demonstrated in a single tBLG device, in spite of years of sustained effort by several research groups. A mixture of careful device design limiting the active area of devices and the use of local probes has allowed researchers to make many important discoveries while sidestepping the twist angle disorder issue; indeed, some exotic phases are known in tBLG only from a single device, or even from individual scanning probe experiments. But if the field is ever to realize sophisticated devices incorporating these exotic electronic ground states, the problem needs to be addressed.

There is another way to make a moiré super lattice. Two different 2D crystals with different lattice constants will form a moiré super lattice without a relative twist angle; these systems are known as heterobilayers. These systems do not have 'magic angles' in the same sense that tBLG and tMBG do, and as a result there is no meaningful sense in which they are flat band systems, but interactions are so strong that they form interaction-driven phases at commensurate filling of the moiré super lattice anyway. Indeed, many of the interaction-driven insulators these systems support survive to temperatures well above 100 K. The most important way in which heterobilayers differ from homobilayers, however, is in their insensitivity to twist angle disorder. In the small angle regime, the moiré unit cell area of a heterobilayer is almost completely independent of twist angle, as illustrated in Fig. 5.17E-F. A new intrinsic Chern magnet was discovered in one of these systems, a heterobilayer moiré super lattice formed through alignment of MoTe2 and WSe2 monolayers. The researchers who discovered this phase measured a well-quantized QAH effect in electronic transport in several devices, demonstrating much better repeatability than was observed in tBLG. The unit cell area as a function of twist angle is plotted for three moiré super lattices that support Chern insulators in Fig. 5.17G, with the magic angle regime highlighted for the homobilayers, demonstrating the greatly diminished sensitivity of unit cell area to local twist angle in the heterobilayer AB-MoTe2/WSe2. MoTe2/WSe2 does have its own sources of disorder, but it is now clear that the insensitivity of this system to twist angle disorder has solved the replication issue for Chern magnets in moiré super lattices. Dozens of MoTe2/WSe2 devices showing well-quantized QAH effects have now been fabricated, and these devices are all considerably larger and more uniform than the singular tBLG device that was shown to support a QAH effect and was discussed in the previous chapters. The existence of reliable, high-yield fabrication processes for repeatably realizing uniform intrinsic Chern magnets is an important development, and this has opened the door to a wide variety of devices and measurements that would not have been feasible in tBLG/hBN.

The basic physics of this electronic phase differs markedly from the systems we have so far discussed, and we will start our discussion of MoTe2/WSe2 by comparing and contrasting it with graphene moiré super lattices. In tBLG/hBN and its cousins, valley and spin degeneracy and the absence of significant spin-orbit coupling combine to make the moiré subbands fourfold degenerate.
The basic physics of this electronic phase differs markedly from the systems we have so far discussed, and we will start our discussion of MoTe2/WSe2 by comparing and contrasting it with graphene moiré superlattices. In tBLG/hBN and its cousins, valley and spin degeneracy and the absence of significant spin-orbit coupling combine to make the moiré subbands fourfold degenerate. When inversion symmetry is broken, the resulting valley subbands can have finite Chern numbers, so that when the system forms a valley ferromagnet a Chern magnet naturally appears.
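The notion of a band carrying a finite Chern number can be made concrete with a toy model. The Qi-Wu-Zhang (QWZ) two-band lattice model is not the model for these moiré valley subbands; it is simply a minimal lattice Hamiltonian whose bands acquire a nonzero Chern number when a symmetry-breaking mass term is tuned into the topological range. Its Chern number can be computed numerically by summing discretized Berry flux over the Brillouin zone (the Fukui-Hatsugai-Suzuki method), as sketched below.

import numpy as np

def qwz_hamiltonian(kx, ky, m):
    """Qi-Wu-Zhang two-band lattice model, a minimal model with Chern bands."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def chern_number(m, n_k=60):
    """Chern number of the lower band from discretized Berry flux
    (Fukui-Hatsugai-Suzuki method) on an n_k x n_k Brillouin zone grid."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_k, endpoint=False)
    u = np.empty((n_k, n_k, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(qwz_hamiltonian(kx, ky, m))
            u[i, j] = vecs[:, 0]          # lower-band eigenvector
    def link(a, b):
        overlap = np.vdot(a, b)
        return overlap / abs(overlap)
    flux = 0.0
    for i in range(n_k):
        for j in range(n_k):
            ip, jp = (i + 1) % n_k, (j + 1) % n_k
            loop = (link(u[i, j], u[ip, j]) * link(u[ip, j], u[ip, jp])
                    * link(u[ip, jp], u[i, jp]) * link(u[i, jp], u[i, j]))
            flux += np.angle(loop)
    return round(flux / (2.0 * np.pi))

# |C| = 1 when the mass term satisfies 0 < |m| < 2 (a band with finite Chern
# number); C = 0 otherwise (a trivial band).
for m in (1.0, 3.0):
    print(f"m = {m}: C = {chern_number(m)}")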

Spin order may be present but is not necessary to realize the Chern magnet; it need not have any meaningful relationship with the valley order, since spin-orbit coupling is absent. MoTe2/WSe2 has strong spin-orbit coupling, and as a result the spin order is locked to the valley degree of freedom. This manifests most obviously as a reduction of the degeneracy of the moiré subbands; these are twofold degenerate in MoTe2/WSe2 and all other TMD-based moiré superlattices. The closest imaginable analog of the tBLG/hBN Chern magnet in this system is one in which interactions favor the formation of a valley-polarized ferromagnet, at which point the finite Chern number of the valley subbands would produce a Chern magnet. This was widely assumed to be the case at the time of the system's discovery. There is now substantial evidence that this system instead forms a valley coherent state stabilized by its spin order, which would require a new mechanism for generating the Berry curvature necessary to produce a Chern magnet. In general I think it is fair to say that the details of the microscopic mechanism responsible for producing the Chern magnet in this system are not yet well understood. In light of the differences between these two systems, there was no particular reason to expect the same phenomena in MoTe2/WSe2 as in tBLG/hBN. As will shortly be explained, current-switching of the magnetic order was indeed found in MoTe2/WSe2. The fact that we find current-switching of magnetic order in both the tBLG/hBN Chern magnet and the AB-MoTe2/WSe2 Chern magnet is interesting. It may suggest that the phenomenon is a simple consequence of the presence of a finite Chern number; i.e., that it is a consequence of a local torque exerted by the spin/valley Hall effect, which is itself a simple consequence of the spin Hall effect and finite Berry curvature. These ideas will be discussed in the following sections.

In spin torque magnetic memories, electrically actuated spin currents are used to switch a magnetic bit. Typically, these require a multi-layer geometry including both a free ferromagnetic layer and a second layer providing spin injection. For example, spin may be injected by a nonmagnetic layer exhibiting a large spin Hall effect, a phenomenon known as spin-orbit torque. Here, we demonstrate a spin-orbit torque magnetic bit in a single two-dimensional system with intrinsic magnetism and strong Berry curvature. We study AB-stacked MoTe2/WSe2, which hosts a magnetic Chern insulator at a carrier density of one hole per moiré superlattice site. We observe hysteretic switching of the resistivity as a function of applied current. Magnetic imaging reveals that current switches correspond to reversals of individual magnetic domains. The real space pattern of domain reversals aligns with spin accumulation measured near the high Berry curvature Hubbard band edges. This suggests that intrinsic spin or valley Hall torques drive the observed current-driven magnetic switching in both MoTe2/WSe2 and other moiré materials. The switching current density is significantly lower than those reported in other platforms, suggesting moiré heterostructures are a suitable platform for efficient control of magnetic order. To support a magnetic Chern insulator and thus exhibit a quantized anomalous Hall effect, a two dimensional electron system must host both spontaneously broken time-reversal symmetry and bands with finite Chern numbers.
This makes Chern magnets ideal substrates upon which to engineer low-current magnetic switches, because the same Berry curvature responsible for the finite Chern number also produces spin or valley Hall effects that may be used to effect magnetic switching. Recently, moiré heterostructures emerged as a versatile platform for realizing intrinsic Chern magnets. In these systems, two layers with mismatched lattices are combined, producing a long-wavelength moiré pattern that reconstructs the single particle band structure within a reduced superlattice Brillouin zone.
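For orientation, the quantized anomalous Hall effect referred to here fixes the Hall resistance at h/(C e^2), independent of sample details. The short sketch below simply evaluates that value and notes how a current-driven magnetization reversal would appear in such a measurement; it is a numerical aside, not an analysis taken from the text.

from scipy.constants import h, e

def qah_resistance(chern_number=1):
    """Quantized anomalous Hall resistance h/(C e^2), in ohms."""
    return h / (chern_number * e ** 2)

print(f"R_xy for |C| = 1: {qah_resistance(1) / 1e3:.3f} kOhm")  # ~25.813 kOhm

# Reversing the magnetization of a C = 1 Chern magnet flips the sign of R_xy, so a
# current-driven domain reversal appears as a hysteretic jump between +h/e^2 and
# -h/e^2 in the Hall signal (or a partial jump, if only some domains reverse).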


Measuring a magnetic field with a SQUID does not require optical access; many other magnetic field measurement techniques do. Together, these facts mean that scanning SQUIDs are often the best tools available for probing extremely low temperature phenomena. NanoSQUID sensors also have many advantages over planar SQUIDs. The most obvious, of course, has already been discussed: their higher spatial resolution. A less obvious advantage, one that became clear only after the first nanoSQUID sensors were fabricated and tested, is the geometry of the thin superconducting contacts, which under normal circumstances are aligned with the axis of the applied magnetic field. Large magnetic fields tend to destroy superconducting phases, so superconducting devices are all limited by the maximum magnetic fields at which they can operate. This so-called critical field, H_c, is not an intensive property; there is a large-size limit that can be measured and tabulated for different materials, but the critical field of an individual piece of superconductor is a strong function of geometry. A thin superconducting film in the plane of an applied magnetic field can accommodate much higher magnetic field magnitudes than a large piece of the same superconductor can. The bulk limit for lead at low temperature is about 80 mT; we routinely make lead nanoSQUIDs that can survive magnetic fields of 1 T, and we have on occasion made nanoSQUIDs that can survive magnetic fields above 2 T.
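A rough estimate shows that these numbers are consistent with the standard thin-film enhancement of the parallel critical field. This is a back-of-the-envelope sketch using the Ginzburg-Landau result for a film much thinner than the penetration depth; the penetration depth is an approximate literature value for lead, and the film thickness is a typical value for these sensors rather than a measured parameter from the text.

import math

Hc_bulk = 0.080  # T, bulk thermodynamic critical field of lead at low temperature
lam = 40e-9      # m, approximate London penetration depth of lead
d = 17e-9        # m, typical nanoSQUID film thickness (15-20 nm)

# Ginzburg-Landau thin-film result for a field applied in the plane of a film with
# d << lambda: the parallel critical field is enhanced over the bulk value by the
# factor 2*sqrt(6)*lambda/d.
Hc_parallel = 2 * math.sqrt(6) * (lam / d) * Hc_bulk
print(f"Estimated in-plane critical field: {Hc_parallel:.2f} T")  # roughly 1 T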

It turns out that many of the most useful magnetic imaging techniques are limited to low field operation. This thesis will focus primarily on low field phenomena, but there are also many magnetic phenomena that require high magnetic fields to appear, including the quantum Hall effect and a variety of magnetic phase transitions. The nanoSQUID technique is useful for studying these as well.

Above about 0.2 T, superconductivity begins to rapidly degrade in the nanoSQUID sensor, destroying its sensitivity and rendering it useless as a magnetic field sensor. This limits this particular nanoSQUID to operation in the regime −0.2 T < B < 0.2 T. This is fairly typical of indium nanoSQUIDs; their precise critical fields vary, but they are generally considerably below those of lead nanoSQUID sensors. As with any sensor, measurements with nanoSQUIDs are contaminated with noise, and the dependence of that noise on SQUID bias and magnetic field can be characterized. A characterization of the noise spectrum of the indium nanoSQUID shown in Fig. 1.6A is shown in Fig. 1.6B. In nanoSQUID sensors, local maxima in critical current are often associated with high noise and thus low magnetic field sensitivity. This produces 'blind spots' in magnetic field for nanoSQUID sensors. In practice these blind spots often include B = 0, making true zero-field operation challenging for nanoSQUID sensors. Technologies exist for circumventing this issue[54], but in practice we mostly work around it.
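The origin of these blind spots can be illustrated with the idealized interference pattern of a symmetric two-junction SQUID. This toy calculation (negligible loop inductance, identical junctions) is only meant to show why the flux sensitivity collapses at the critical current maxima, not to model the real sensors.

import numpy as np

PHI0 = 2.067833848e-15  # magnetic flux quantum, Wb

def critical_current(phi, I0=10e-6):
    """Idealized symmetric DC SQUID: Ic(phi) = 2*I0*|cos(pi*phi/PHI0)|."""
    return 2 * I0 * np.abs(np.cos(np.pi * phi / PHI0))

phi = np.linspace(-1.5 * PHI0, 1.5 * PHI0, 3001)
Ic = critical_current(phi)
transfer = np.gradient(Ic, phi)  # dIc/dPhi, the flux-to-current transfer function

# The transfer function vanishes wherever Ic sits at a local maximum (phi = n*PHI0),
# so the flux sensitivity collapses there. These are the 'blind spots', and they
# include zero applied flux (B = 0) for a sensor with no built-in flux offset.
for n in (-1, 0, 1):
    idx = int(np.argmin(np.abs(phi - n * PHI0)))
    print(f"phi = {n:+d}*PHI0: |dIc/dPhi| = {abs(transfer[idx]):.2e} A/Wb")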

The low magnetic field DC response of a lead nanoSQUID is shown in Fig. 1.6C. A higher magnetic field characterization is shown in Fig. 1.6D, illustrating the collapse of superconductivity in this nanoSQUID at a considerably higher magnetic field of about 0.75 T. The inventor of the technique has been active in developing ways to deposit other materials onto micropipettes for use as nanoSQUID sensors, and has succeeded in producing aluminum, niobium, tin, and alloyed molybdenum/rhenium nanoSQUIDs, in addition of course to indium and lead nanoSQUIDs. The MoRe nanoSQUIDs in particular are capable of operating in extremely large ambient magnetic fields, up to about 5 T. The magnetic field noise floor of nanoSQUID sensors also seems to vary between materials. We do not have a strong model explaining why this is the case, but it is empirically true that indium and lead nanoSQUIDs have particularly low noise floors. Plots illustrating the dependence of the magnetic field sensitivity of a lead nanoSQUID sensor 80 nm in diameter on magnetic field are shown in Fig. 1.7. NanoSQUID sensors have some unique disadvantages as well. Like planar SQUIDs, nanoSQUIDs require superconductivity to function, which limits them to fairly low operating temperatures. In planar SQUIDs it is often possible to keep the SQUID itself cold while scanning over a much hotter sample, but nanoSQUID sensors are extremely poorly thermalized to their scan heads, which means that they are generally thermalized either to the surface over which they are scanning or to the black body spectrum of the vessel in which they are contained.

This gives nanoSQUID sensors some interesting capabilities, namely that under the right conditions they can function as extremely sensitive scanning probe thermometers, but it also comes with some drawbacks. NanoSQUIDs composed of superconductors with critical temperatures below 4.2 K, the boiling point of helium-4 at atmospheric pressure, must thus have actively cooled thermal radiation shields to operate in very high vacuum, and of course imaging of hot samples is completely out of the question for these sensors. A variety of exciting opportunities exist for the application of sensitive magnetic imaging techniques to biological systems, but this is not a realistic option for nanoSQUID sensors. NanoSQUIDs are quite fragile and can be easily destroyed by vibrations, necessitating vibration isolation systems, and the superconducting film on the apex of the micropipette is quite thin, typically between 15 and 20 nm, so superconducting materials that oxidize in air will quickly degrade. Thankfully indium and lead do not oxidize rapidly, but they do oxidize at a finite rate, so nanoSQUIDs composed of these materials last only a few days when left in air. Storage in high vacuum can improve their lifespan, but generally not indefinitely. In summary, scanning probe microscopes fitted with nanoSQUID sensors can function as magnetometry microscopes with 30-250 nm resolution. They are capable of operating at very low temperatures and at magnetic fields of up to several tesla. Their high sensitivities allow them to detect the minute magnetic fields emitted by electronic phases composed entirely of electrons forced into a two dimensional heterostructure with an electrostatic gate. We will discuss some of the properties of two dimensional heterostructures next.

Many crystalline compounds have cleavage planes; that is, planes along which cracks propagate most readily. When such compounds are stressed beyond their yield strength, they tend to break up into pieces with characteristic shapes that inherit the anisotropy of the chemical bonds forming the crystal out of which they are composed. Indeed, this observation was a compelling piece of early evidence for the existence of crystallinity, and even of atoms themselves. There exists a class of materials with covalent bonds between unit cells in a two dimensional plane and much weaker van der Waals bonds in the out-of-plane direction, producing extraordinarily strong chemical bond anisotropy. In these materials, known as 'van der Waals' or 'two dimensional' materials, this anisotropy produces cleavage planes that tend to break bulk crystals up into two dimensional planar pieces. Exfoliation is the process of preparing a thin piece of such a crystal through mechanical means. In some of these materials, the chemical bond anisotropy is so strong that it is possible to prepare large flakes that are atomically thin. These two dimensional crystals have properties quite different from their bulk counterparts. They do have a set of discrete translation symmetries, which makes them crystals, but they only have these symmetries along two axes; there is no sense in which a one-atom-thick crystal has any out-of-plane translation symmetries. For this reason they have band structures that differ markedly from their three dimensional counterparts. A variety of techniques have been developed for preparing atomically thin flakes from van der Waals materials, but by far the most successful has been scotch tape exfoliation.
In this process, a chunk of a van der Waals crystal is placed on a piece of scotch tape. The tape is then folded over onto itself and ripped apart, which separates the chunk of van der Waals crystal along its cleavage planes into two pieces of comparable size on opposite sides of the tape. This process is repeated several times, further dividing the number of atomic layers in each chunk with each successive repetition. If we assume the number of atomic layers in each piece is cut roughly in half with each round, then after N repetitions the resulting flakes should have thicknesses reduced by a factor of about 2^N.
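As a quick illustration of why this exponential thinning works so well, consider a hypothetical 0.1 mm thick starting crystal of graphite; the starting thickness and the use of the graphite interlayer spacing below are illustrative assumptions, not values from the text.

import math

layer_spacing = 0.335e-9      # m, interlayer spacing of graphite
initial_thickness = 0.1e-3    # m, hypothetical 0.1 mm starting crystal

n_layers = initial_thickness / layer_spacing
# Each fold-and-rip cycle roughly halves the layer count, so the number of
# repetitions needed to approach a single layer is about log2 of the layer count.
print(f"Starting layer count: ~{n_layers:.0e}")
print(f"Repetitions to reach one layer: ~{math.log2(n_layers):.0f}")  # about 18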

This is enough to reduce each flake to atomic dimensions after a small number of repetitions of the process. The process is, of course, self-limiting; once the flakes reach atomic dimensions, they cannot be further subdivided. Graphite can be exfoliated through this process into flakes one or a few atoms thick and dozens of microns in width, a width-to-thickness aspect ratio of 10^5. As previously discussed, this process cannot be executed on every material. It depends critically on scotch tape bonding more strongly to a layer of the crystal than that layer bonds to other layers within the crystal. It also depends on very strong in-plane bonds within the material, which must support the large stresses associated with reaching such high aspect ratios; materials with weaker in-plane bonds will rip or crumble.

In practice these materials are almost always processed further after they have been mechanically exfoliated, and the preparation process typically begins when they are pressed onto a silicon wafer to facilitate easy handling. Samples prepared in this way are called 'exfoliated heterostructures.' It is of course interesting that this process allows us to prepare atomically thin crystals, but another important advantage it provides is a way to produce monocrystalline samples without investing much effort in cleanly crystallizing the material; mechanical separation functions in these materials as a way to separate the domains of polycrystalline materials. Graphene was the first material to be more or less mastered in the context of mechanical exfoliation, but a variety of other van der Waals materials followed, adding substantial diversity to the kinds of material properties that can be integrated into devices composed of exfoliated heterostructures. Monolayer graphene is metallic at all available electron densities and displacement fields, but hexagonal boron nitride, or hBN, is a large bandgap insulator, making it useful as a dielectric in electronic devices. Exfoliatable semiconductors exist as well, in the form of a large class of materials known as transition metal dichalcogenides, or TMDs, including WSe2, WS2, WTe2, MoSe2, MoS2, and MoTe2. Exfoliatable superconductors, magnets, and other exotic phases are all now known, and the preparation and mechanical exfoliation of new classes of van der Waals materials remains an area of active research.

Once two dimensional crystals have been placed onto a silicon substrate, they can be picked up and manipulated by soft, sticky plastic stamps under an optical microscope. This allows researchers to prepare entire electronic devices composed only of two dimensional crystals; these are known as 'stacks.' These structures have projections onto the silicon surface that are reasonably large, but they remain atomically thin: capacitors have been demonstrated with gates a single atom thick and dielectrics a few atoms thick. Researchers have developed fabrication recipes for executing many of the operations with which an electrical engineer working with silicon integrated circuits would be familiar, including photolithography, etching, and metallization.

I think it is important to be clear about what the process of exfoliation is and what it isn't. It is true that mechanical exfoliation makes it possible to fabricate devices that are smaller than the current state of the art of silicon lithography in the out-of-plane direction.
However, these techniques hold few advantages for reducing the planar footprint of electronic devices, so there is no meaningful sense in which they themselves represent an important technological breakthrough in the miniaturization of commercial electronic devices. Furthermore, and perhaps more importantly, it has not yet been demonstrated that these techniques can be scaled to produce large numbers of devices, and there are plenty of reasons to believe that this will be uniquely challenging. What they do provide is a convenient way for us to produce two dimensional monocrystalline devices with exceptionally low disorder, for which electron density and band structure can be accessed as independent variables. That is valuable for furthering our understanding of condensed matter phenomena, independent of whether the fabrication procedures for making these material systems can ever be scaled up enough to be viable for use in technologies.
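One concrete way this independent control is realized in practice is dual gating: a gate above and a gate below the two dimensional crystal set the carrier density and the out-of-plane displacement field (which tunes the band structure) separately. The sketch below uses a hypothetical pair of hBN gate dielectrics with an approximate dielectric constant; the thicknesses and voltages are illustrative, not device parameters from the text.

from scipy.constants import e, epsilon_0

eps_hbn = 3.5                  # approximate out-of-plane dielectric constant of hBN
t_top, t_bot = 30e-9, 30e-9    # hypothetical hBN gate dielectric thicknesses, m

c_top = eps_hbn * epsilon_0 / t_top   # geometric gate capacitances per area, F/m^2
c_bot = eps_hbn * epsilon_0 / t_bot

def n_and_D(v_top, v_bot):
    """Carrier density (m^-2) and displacement field (V/m) set by the two gates."""
    n = (c_top * v_top + c_bot * v_bot) / e
    D = (c_top * v_top - c_bot * v_bot) / (2 * epsilon_0)
    return n, D

# Sweeping both gates together changes n at fixed D; sweeping them in opposition
# changes D at fixed n, so the two knobs can be tuned independently.
print(n_and_D(1.0, 1.0))   # finite density, zero displacement field
print(n_and_D(1.0, -1.0))  # zero density, finite displacement field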