Hormones can influence berry development and ripening

The BIA also established a GIS branch that encourages tribes and offers advice on tribal GIS development (although recent funding cutbacks have hampered this effort) and has stockpiled a considerable amount of tribal GIS data in its own library.68 Partly in response to these developments, and partly because many Indian communities are deeply suspicious of the BIA-backed tribal governments where GIS managers are housed, a consortium of tribes in the northern Plains and another in the Rio Grande corridor have limited BIA and other federal agencies’ access to some of their databases, declaring some of their GIS proprietary in an attempt to protect sensitive and sacred geographical information. These and other longstanding problems associated with the allocation of political authority in Indian Country caused one geographer to raise questions about the path of GIS development, especially its transformative powers, surveillant capabilities, and political uses.69 Through experience working with GIS, many have come to see it as a contradictory technology that can both empower and marginalize people and communities. Arguments about the social impacts of GIS have grown in recent years, and a debate has surfaced in geography under the heading “GIS and Society.” One of the more interesting proposals emerging from that debate, and worth considering in Indian Country, is the development of “community-integrated GIS” that focus on local empowerment through community, not government, control of and access to digital geographic information. We think this review signals the arrival of geography as a small but important participant in American Indian studies.

Geographers are helping to illuminate the complexity and refinement of environmental modifications made by early Indians and to perpetually revise our knowledge of these matters in virtually every region of the continent. They also are busy telling mainstream society that water is a vital cultural resource, not just a scarce but necessary physical commodity, with the intent of altering the allocation and cost-benefit models used in managing it. Geographers continue to document land fraud through dispossession research in both historical and contemporary periods, by sitting in courtrooms as expert witnesses, and by trying to educate other geographers still steeped in traditions of seeing Indian land claims as an insignificant “interest” competing against “higher” uses. Geographers also continue to assert the centrality of land and place in Indian identity and to explore how attachments to place are manipulated by both individuals and the institutions that would control them. They continue to deconstruct the imprint of European and Euro-North American colonization and to unpack the sounds and silences in historical and contemporary maps and GIS, in part to promote more culturally sensitive applications of technology. Geographers are working with planners and tribal leaders to develop models for cooperative planning for future economic development. Increasingly, they are reflecting on their own positions as privileged researchers, teachers, and consultants. Finally, they are teaching all this to their students. By no means are we implying that everything is just fine in geography. For example, there is a sense among many AISG members that we can and should become more active and involved in issues of importance to Native people throughout North America, to the point of adopting advocacy stances more frequently.

Some of the work cited here leads in that direction, especially the accomplishments of those working on resources and development. However, much of the other work often seems to hold Indians at arm’s length. This may be because many geographers still look askance at colleagues who take on advocacy roles, believing that the mask of apolitical objectivity so often donned in the past is still worth wearing. Perhaps some are justified in their aloofness, preferring the detachment afforded by theoretical questions, or the solitude available in archives and libraries. On the other hand, theoretical and empirical work on material and ideal landscapes, identities, and representations, and the research on historical and contemporary cartographies are among the fastest growing and most intellectually active areas of the field. It is also certain that much more is needed in the field: historical studies exploring continuities in land use and governance for land claims; land use and place-name mapping and GIS for preservation of cultural resources; examinations of the spatial basis for self-governance and self-determination to support sovereignty; critical approaches to the role of space and place in the social construction of “Indians” via public perceptions, legislative agendas, corporate intentions, and classroom teaching; continued work in deconstructing colonial legacies and postcolonial discourse in the effort to achieve genuine polyvocality; and analyses of the health care distribution system of the majority and its relationship to alternative medical systems available through local cultural practices. It is encouraging to see a diversity of topics and approaches being engaged with enthusiasm. And in all of it geographers increasingly realize that it is no longer possible to remain completely indifferent about the politics of their own research when studying North America’s Native communities, places where research, self-determination, and sovereignty now typically go hand in hand.

Grapevine berry ripening can be divided into three major stages. In Stage 1, berry size increases sigmoidally.

Stage 2 is known as a lag phase in which there is no increase in berry size. Stage 3 is considered the ripening stage. Veraison is at the beginning of the ripening stage and is characterized by the initiation of color development, softening of the berry, and rapid accumulation of the hexoses glucose and fructose. Berry growth is sigmoidal in Stage 3 and the berries double in size. Many of the flavor compounds and volatile aromas are derived from the skin and synthesized at the end of this stage. Many grape flavor compounds are produced as glycosylated, cysteinylated and glutathionylated precursors and phenolics, and many of these precursors are converted to various flavors by yeast during wine fermentation. Nevertheless, there are distinct fruit flavors and aromas that are produced and can be tasted in the fruit, many of which are derived from terpenoids, fatty acids and amino acids. Terpenes are important compounds for distinguishing cultivar fruit characteristics. There are 69 putatively functional genes, 20 partial genes and 63 pseudogenes in the terpene synthase family that have been identified in the Pinot Noir reference genome. Terpene synthases are multi-functional enzymes using multiple substrates and producing multiple products. More than half of the putatively functional terpene synthases in the Pinot Noir reference genome have been functionally annotated experimentally, and distinct differences have been found in some of these enzymes amongst three grape varieties: Pinot Noir, Cabernet Sauvignon and Gewürztraminer. Other aromatic compounds also contribute significant cultivar characteristics. C13-norisoprenoids are flavor compounds derived from carotenoids by the action of carotenoid cleavage dioxygenase enzymes. Cabernet Sauvignon, Sauvignon Blanc and Cabernet Franc are characterized by specific volatile thiols and methoxypyrazines. Enzymes involved in the production of these aromas have been recently characterized. Phenolic compounds play a central role in the physical mouthfeel properties of red wine; recent work relates quality to tannin levels. While the grape genotype has a tremendous impact on tannin content, the environment also plays a very large role in grape composition. The pathway for phenolic biosynthesis is well known, but the mechanisms of environmental influence are poorly understood. Ultimately, there is an interaction between molecular genetics and the environment. Flavor is influenced by climate, topography and viticultural practices. For example, water deficit alters gene expression of enzymes involved in aroma biosynthesis in grapes in a genotype-dependent manner, and may lead to increased levels of compounds, such as terpenes and hexyl acetate, that contribute to fruity volatile aromas. The grapevine berry can be subdivided into the skin, pulp and seeds. The skin includes the outer epidermis and inner hypodermis. A thick waxy cuticle covers the epidermis. The hypodermal cells contain chloroplasts, which lose their chlorophyll at veraison and become modified plastids; they are the sites of terpenoid biosynthesis and carotenoid catabolism. Anthocyanins and tannins accumulate in the vacuoles of hypodermal cells. Pulp cells are the main contributors to the sugar and organic acid content of the berries.
Pulp cells also have a much higher abundance of transcripts involved in carbohydrate metabolism, but a lower abundance of transcripts involved in lipid, amino acid, vitamin, nitrogen and sulfur metabolism, than do the skins. Concentrations of auxin, cytokinins and gibberellins tend to increase during early fruit development in Stage 1. By veraison, these hormone concentrations have declined, concomitant with a peak in abscisic acid (ABA) concentration just before veraison.

Auxin prolongs the Stage 2 lag phase and inhibits anthocyanin biosynthesis and color development in Stage 3. Grapevine, a non-climacteric fruit, is not very sensitive to ethylene; however, ethylene appears to be necessary for normal fruit ripening. Ethylene concentration is highest at anthesis, but declines to low levels upon fruit set; ethylene concentrations rise slightly thereafter and peak just before veraison, then decline to low levels by maturity. Ethylene also plays a role in the ripening of another non-climacteric fruit, strawberry. ABA also appears to be important in grape berry ripening during veraison, when ABA concentrations increase, resulting in increased expression of anthocyanin biosynthetic genes and anthocyanin accumulation in the skin. ABA induces ABF2, a transcription factor that affects berry ripening by stimulating berry softening and phenylpropanoid accumulation. In addition, ABA affects sugar accumulation in ripening berries by stimulating acid invertase activity and the induction of sugar transporters. It is not clear whether ABA directly affects flavor volatiles, but there could be indirect effects due to competition for common precursors in the carotenoid pathway. Many grape berry ripening studies have focused on targeted sampling over a broad range of berry development stages, but generally with an emphasis around veraison, when berry ripening is considered to begin. In this study, a narrower focus is taken on the late ripening stages, where many berry flavors are known to develop in the skin. We show that the abundance of transcripts involved in ethylene signaling is increased along with those associated with terpenoid and fatty acid metabolism, particularly in the skin.

Cabernet Sauvignon clusters were harvested in 2008 from a commercial vineyard in Paso Robles, California at various times after veraison, with a focus on targeting °Brix levels near maturity. Dates and metabolic details that establish the developmental state of the berries at each harvest are presented in Additional file 1. Berries advanced by harvest date with the typical developmental changes for Cabernet Sauvignon: decreases in titratable acidity and 2-isobutyl-3-methoxypyrazine concentrations and increases in sugar and color. Transcriptomic analysis focused on four harvest dates having average cluster °Brix levels of 22.6, 23.2, 25.0 and 36.7. Wines made in an earlier study from grapes harvested at comparable levels of sugars or total soluble solids to those in the present study showed clear sensory differences. Six biological replicates, comprising two clusters each, were separated into skins and pulp in preparation for RNA extraction and transcriptomic analysis using the NimbleGen Grape Whole-Genome Microarray.

A note of caution must be added here. There are high similarities amongst members of certain Vitis gene families, making it very likely that probes on the microarray with high similarity to other genes will cross-hybridize. We estimate that approximately 13,000 genes have the potential for cross-hybridization, with at least one probe of a set of four unique probes for that gene on the microarray potentially cross-hybridizing with probes for another gene on the microarray. Genes with the potential for cross-hybridization have been identified and are highlighted in light red in Additional file 2.
The rationale for including them is that although individual genes cannot be uniquely distinguished, the probe sets can identify a gene and its highly similar gene family members, thus providing some useful information about the biological responses of the plant. An additional approach was taken, removing cross-hybridizing probes before quantitative data analysis. Many of the significant genes were unaffected by this processing, but 3600 genes were completely removed from the analysis. Thus, it was felt that valuable information was lost using such a stringent approach. The less stringent approach, allowing for analysis of genes with potential cross-hybridization, was used in the rest of the analyses. To assess the main processes affected by these treatments, the gene ontologies of significantly affected transcripts were analyzed for statistical significance using BinGO. Based on transcripts that had significant changes in abundance with °Brix level, 230 biological processes were significantly over-represented in this group. The top three over-represented processes were response to abiotic stress, biosynthetic process, and response to chemical stimulus, a rather generic set of categories.
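For readers unfamiliar with this class of test, the sketch below shows the kind of over-representation calculation a tool like BinGO performs (a hypergeometric test is one of its options). All of the counts are invented placeholders; none correspond to the actual array annotation or gene lists analyzed here.

```python
from scipy.stats import hypergeom

# Hypothetical counts (placeholders only):
N_background = 30000   # genes on the array treated as the background set
K_annotated = 1200     # background genes annotated to the GO term of interest
n_selected = 2300      # genes whose transcript abundance changed significantly with °Brix
k_overlap = 180        # selected genes that also carry the GO annotation

# P(X >= k): chance of drawing at least k annotated genes in a random
# selection of n_selected genes from the background.
p_value = hypergeom.sf(k_overlap - 1, N_background, K_annotated, n_selected)
print(f"over-representation p-value: {p_value:.2e}")

# In practice, p-values for all tested GO terms are corrected for multiple
# testing (e.g., Benjamini-Hochberg FDR), a correction BinGO offers, before a
# process is called enriched.
```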

We observe hysteretic switching of the resistivity as a function of applied current

The closest imaginable analog of the tBLG/hBN Chern magnet in this system is one in which interactions favor the formation of a valley-polarized ferromagnet, at which point the finite Chern number of the valley subbands would produce a Chern magnet. This was widely assumed to be the case at the time of the system’s discovery. There is now substantial evidence that this system instead forms a valley coherent state stabilized by its spin order, which would require a new mechanism for generating the Berry curvature necessary to produce a Chern magnet. In general I think it is fair to say that the details of the microscopic mechanism responsible for producing the Chern magnet in this system are not yet well understood. In light of the differences between these two systems, there was no particular reason to expect the same phenomena in MoTe2/WSe2 as in tBLG/hBN. As will shortly be explained, current-switching of the magnetic order was indeed found in MoTe2/WSe2. The fact that we find current-switching of magnetic order in both the tBLG/hBN Chern magnet and the AB-MoTe2/WSe2 Chern magnet is interesting. It may suggest that the phenomenon is a simple consequence of the presence of a finite Chern number; i.e., that it is a consequence of a local torque exerted by the spin/valley Hall effect, which is itself a simple consequence of the spin Hall effect and finite Berry curvature. These ideas will be discussed in the following sections. In spin torque magnetic memories, electrically actuated spin currents are used to switch a magnetic bit. Typically, these require a multi-layer geometry including both a free ferromagnetic layer and a second layer providing spin injection.

For example, spin may be injected by a nonmagnetic layer exhibiting a large spin Hall effect, a phenomenon known as spin-orbit torque. Here, we demonstrate a spin-orbit torque magnetic bit in a single two-dimensional system with intrinsic magnetism and strong Berry curvature. We study AB-stacked MoTe2/WSe2, which hosts a magnetic Chern insulator at a carrier density of one hole per moiré superlattice site. Magnetic imaging reveals that current switches correspond to reversals of individual magnetic domains. The real space pattern of domain reversals aligns with spin accumulation measured near the high Berry curvature Hubbard band edges. This suggests that intrinsic spin or valley Hall torques drive the observed current-driven magnetic switching in both MoTe2/WSe2 and other moiré materials. The switching current density is significantly less than those reported in other platforms, suggesting moiré heterostructures are a suitable platform for efficient control of magnetic order. To support a magnetic Chern insulator and thus exhibit a quantized anomalous Hall effect, a two dimensional electron system must host both spontaneously broken time-reversal symmetry and bands with finite Chern numbers. This makes Chern magnets ideal substrates upon which to engineer low-current magnetic switches, because the same Berry curvature responsible for the finite Chern number also produces spin or valley Hall effects that may be used to effect magnetic switching. Recently, moiré heterostructures emerged as a versatile platform for realizing intrinsic Chern magnets. In these systems, two layers with mismatched lattices are combined, producing a long-wavelength moiré pattern that reconstructs the single particle band structure within a reduced superlattice Brillouin zone. In certain cases, moiré heterostructures host superlattice minibands with narrow bandwidth, placing them in a strongly interacting regime where Coulomb repulsion may lead to one or more broken symmetries.
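For reference, the standard relations tying Berry curvature to the Chern number and to the quantized anomalous Hall response invoked above are:

```latex
C \;=\; \frac{1}{2\pi}\int_{\mathrm{BZ}} \Omega(\mathbf{k})\, d^{2}k \;\in\; \mathbb{Z},
\qquad
\sigma_{xy} \;=\; C\,\frac{e^{2}}{h}.
```

When the two valleys carry equal and opposite Berry curvature, the net charge Hall response cancels but a valley Hall response survives, which is the sense in which the same band geometry can supply both the quantized transport and the torques discussed here.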

In several such systems, the underlying bands have finite Chern numbers, setting the stage for the appearance of anomalous Hall effects when combined with time-reversal symmetry breaking. Notably, in twisted bilayer graphene low current magnetic switching has been observed, though consensus does not exist on the underlying mechanism. Although these magnets occur in an atomic crystal, they are composed entirely of electrons we have forced into the system with an electrostatic gate, and as a result we can expect their magnetizations to be considerably smaller than fully spin-polarized atomic crystals. We will use the nanoSQUID microscope to image these magnetic phases. An optical image of the ABC trilayer graphene device used to produce data for the publications is presented in Fig. 7.5A. A black dashed line outlines the region we will be imaging using the nanoSQUID microscope. A nanoSQUID image of this region using AC bottom gate contrast is presented in Fig. 7.5B. This magnetic image was taken in the same phase in which we observe magnetic hysteresis, as presented in Fig. 7.4E. Clearly the system is quite magnetized; we also see evidence of internal disorder, likely corresponding to bubbles between layers of the heterostructure. We can park the SQUID over a corner of the device and extract a density- and displacement field-tuned phase diagram of the magnetic field generated by the magnetization of the device; this is presented in Fig. 7.5C. Electronic transport data of the same region is presented in Fig. 7.5D. The spin magnet has only a weak impact on electronic transport, but the valley ferromagnet couples extremely strongly to electrical resistance. The system also supports a pair of superconductors, including a spin-polarized one; these phases are subjects of continued study. Capacitance data over the same region of phase space is presented in Fig. 7.5E. The first systems with nonzero Chern numbers to be discovered were systems with quantum Hall effects. Quantum Hall insulators behave a lot like Chern magnets but are generally realized at much higher magnetic fields, and Berry curvature in these systems comes from the applied magnetic field, not from band structure. The fact that resistance in these materials is an intrinsic property and not an extrinsic one had implications for metrology that were immediately obvious to the earliest researchers who encountered the phenomenon. All of these devices have resistances that depend only on fundamental physical constants, so a resistance standard composed of these materials need not obey any particular geometric constraints, and can thus be easily replicated. The case for quantum Hall resistance standards was strong enough for the National Institute of Standards and Technology to rapidly adopt them, and today the ohm is realized at NIST using a graphene quantum Hall resistance standard. There are some downsides to the quantum Hall resistance standard.
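For reference in the discussion of voltage and current standards that follows, the relevant textbook relations are the quantized Hall resistance, the Josephson (Shapiro step) voltage, and the current obtained from their ratio; here ν is the quantum Hall filling factor, C the Chern number, n the Shapiro step index, and f the microwave drive frequency.

```latex
R_{xy} = \frac{h}{\nu e^{2}}\ (\text{quantum Hall})\ \ \text{or}\ \ \frac{h}{C e^{2}}\ (\text{Chern magnet}),
\qquad
V = n f\,\frac{h}{2e}\ (\text{Shapiro steps}),
\qquad
I = \frac{V}{R_{xy}} = \frac{n\,\nu\,e\,f}{2}.
```

The derived current depends only on the electron charge and a frequency reference, which is why the precision of the current standard hinges on how closely the resistance and voltage standards can be operated together.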

The modern voltage standard is a superconducting integrated circuit known as the Josephson voltage standard; it uses Shapiro steps to relate the absolute size of a set of voltage steps to a frequency standard. Because the voltage standard and resistance standard are independently fixed to physical phenomena, current standards are necessarily defined by the relationship between these two different standards. Unfortunately, the superconducting integrated circuits used as Josephson voltage standards must be operated in very low ambient magnetic field, because large magnetic fields destroy superconductivity. This makes them incompatible with the graphene quantum Hall resistance standard, which must operate in large magnetic fields, generally B > 5 T. This is a surmountable problem (in practice it is handled by storing the two standards in different cryostats, or with significant magnetic shielding between them), but the significant distance separating the standards reduces the precision with which the current standard can be defined with respect to our current resistance and voltage standards. One possible way to resolve this conflict is to replace the quantum Hall resistance standard with a Chern magnet resistance standard. Chern magnets show quantized anomalous Hall effects at low or zero magnetic field, meaning they can be installed in very close proximity to Josephson voltage standards in calibration cryostats. Unfortunately, doped topological insulators have such small band gaps that even at the base temperatures of dilution fridges, there is enough thermal activation of electrons into the bulk to limit the precision of quantization of the quantized anomalous Hall effect in these systems. This made the class of Chern magnets discovered in 2013 unsuitable as replacements for the graphene quantum Hall resistance standard. Since intrinsic Chern magnets have now been discovered, and are observed to have band gaps considerably exceeding those of doped topological insulators, it might make sense to replace the graphene quantum Hall resistance standard with an intrinsic Chern magnet resistance standard. The ease of replication of the fabrication process of MoTe2/WSe2 makes that material particularly intriguing as a candidate material for a new resistance standard, but over the past few years new intrinsic Chern magnets have been discovered almost every year, so we may soon be discussing much better materials for this application. In any case, it seems possible and perhaps even likely that Chern magnets will supplant quantum Hall systems as resistance standards in the near future.

Of course, that fact didn’t take away the many advantages of magnetic memories, and magnetic memories still persist in a variety of niche applications that depend particularly strongly on one of these advantages. Many computers destined to spend their lives in space still use hard drives, and sensors designed to operate over a wide range of temperatures and with intermittent access to power often use non-volatile magnetic memories as well. This has led researchers to search for phenomena and device architectures that allow magnetic order to be switched either with electrical currents or electrostatic gates.

Until recently, the best available technology for electronic switching of magnetism used spin-orbit torques. In a spin-orbit torque device, current through a system with a strong spin Hall effect pumps spin into a separate magnet, which is eventually inverted by the torque exerted by those spins. This technology has matured considerably over the past few years, producing a cascade of new records for low current density magnetic switching and even a few consumer products in the memory market. The discovery of the first intrinsic Chern magnets produced a fascinating surprise for this field. The exotic orbital magnet in twisted bilayer graphene was found to be switchable with extremely small pulses of current, and the resulting current-switchable magnetic bits displaced previously realized spin-orbit torque devices as the ultimate limit in low-current control of magnetism. A flurry of theoretical investigation of these systems followed, dedicated primarily to identifying and generalizing the mechanism underlying current control of magnetism in these systems. A few years later, AB-MoTe2/WSe2 joined twisted bilayer graphene, with a similarly small magnetic switching current. In the intervening time, a new phenomenon had been observed: switching of a Chern magnet with an electrostatic gate, in twisted monolayer/bilayer graphene. All of these phenomena represent newly discovered and now more or less well understood mechanisms for controlling magnetic bits electronically, and by the performance metrics used in the literature they reign supreme. Several electronic switching phenomena known in intrinsic Chern magnets are summarized in Fig. 8.3. Chern magnets differ from the magnetic materials used in more traditional magnetic memories in a wide variety of intriguing ways other than their electronic switchability. Chern magnets are not metals and thus don’t have the same limitations as metallic magnetic memories. For example, the resistance of a Chern magnet is independent of its size, depending only on fundamental physical constants. This makes the resistance of a Chern magnet completely insensitive to miniaturization. Dissipation does occur in Chern magnets, but it occurs only at the contacts to the Chern magnet, so once electrons enter the crystal they can undergo very long range transport completely free of dissipation. Chern magnets are atomically thin in the out-of-plane direction, and of course if they are separated by insulators they can easily be stacked to increase magnetic bit density. Chern magnets are two dimensional materials, and two dimensional materials already have small radiation cross-sections relative to three dimensional crystals like silicon, but the conduction path through a Chern magnet is both one dimensional and topologically protected, so it is overwhelmingly likely that Chern magnet memories would be even more radiation hard than the thin semiconducting films that form the current state of the art. All of these ideas make Chern magnets interesting candidates as substrates for magnetic memories of the distant future. Of course none of these ideas have been implemented in technologies yet, and that is because intrinsic Chern magnets have only been realized at fairly low temperatures. All of the magnetic memory applications we’ve discussed depend critically on the discovery of intrinsic Chern magnets at considerably higher temperatures, and ideally room temperature.

It is also not very useful for probing metastable states

It also depends on very strong in-plane bonds within the material, which must support the large stresses associated with reaching such high aspect ratios; materials with weaker in-plane bonds will rip or crumble. In practice these materials are almost always processed further after they have been mechanically exfoliated, and the preparation process typically begins when they are pressed onto a silicon wafer to facilitate easy handling. Samples prepared in this way are called ‘exfoliated heterostructures.’ It is of course interesting that this process allows us to prepare atomically thin crystals, but another important advantage it provides is a way to produce monocrystalline samples without investing much effort in cleanly crystallizing the material; mechanical separation functions in these materials as a way to separate the domains of polycrystalline materials. Graphene was the first material to be more or less mastered in the context of mechanical exfoliation, but a variety of other van der Waals materials followed, adding substantial diversity to the kinds of material properties that can be integrated into devices composed of exfoliated heterostructures. Monolayer graphene is metallic at all available electron densities and displacement fields, but hexagonal boron nitride, or hBN, is a large bandgap insulator, making it useful as a dielectric in electronic devices. Exfoliatable semiconductors exist as well, in the form of a large class of materials known as transition metal dichalcogenides, or TMDs, including WSe2, WS2, WTe2, MoSe2, MoS2, and MoTe2.

Exfoliatable superconductors, magnets, and other exotic phases are all now known, and the preparation and mechanical exfoliation of new classes of van der Waals materials remains an area of active research. Once two dimensional crystals have been placed onto a silicon substrate, they can be picked up and manipulated by soft, sticky plastic stamps under an optical microscope. This allows researchers to prepare entire electronic devices composed only of two dimensional crystals; these are known as ‘stacks.’ These structures have projections onto the silicon surface that are reasonably large, but remain atomically thin: capacitors have been demonstrated with gates a single atom thick, and dielectrics a few atoms thick. Researchers have developed fabrication recipes for executing many of the operations with which an electrical engineer working with silicon integrated circuits would be familiar, including photolithography, etching, and metallization. I think it is important to be clear about what the process of exfoliation is and what it isn’t. It is true that mechanical exfoliation makes it possible to fabricate devices that are smaller than the current state of the art of silicon lithography in the out-of-plane direction. However, these techniques hold few advantages for reducing the planar footprint of electronic devices, so there is no meaningful sense in which they themselves represent an important technological breakthrough in the process of miniaturization of commercial electronic devices. Furthermore, and perhaps more importantly, it has not yet been demonstrated that these techniques can be scaled to produce large numbers of devices, and there are plenty of reasons to believe that this will be uniquely challenging. What they do provide is a convenient way for us to produce two dimensional monocrystalline devices with exceptionally low disorder for which electron density and band structure can be conveniently accessed as independent variables.

That is valuable for furthering our understanding of condensed matter phenomena, independent of whether the fabrication procedures for making these material systems can ever be scaled up enough to be viable for use in technologies. Consider the following procedure: we obtain a pair of identical two dimensional atomic crystals. We slightly rotate one relative to the other, and then place the rotated crystal on top of the other. The resulting pattern brings the top layer atoms in alignment with the bottom layer atoms periodically, but with a lattice constant that is different from, and in practice often much larger than, the lattice constant of the original two atomic lattices. We call the resulting lattice a ‘moiré superlattice.’ The idea to do this with two dimensional materials is relatively new, but the notion of a moiré pattern is much older, and it applies to many situations outside of condensed matter physics. Pairs of incommensurate lattices will always produce moiré patterns, and there are many situations in daily life in which we are exposed to pairs of incommensurate lattices, like when we look out a window through two slightly misaligned screens, or try to take pictures of televisions or computer screens with our camera phones. Of course these ‘crystals’ differ pretty significantly from the vast majority of crystals with which we have practical experience, so we’ll have to tread carefully while working to understand their properties. To start with, if we attempt to proceed as we normally would (by assigning atomic orbitals to all of the atoms in the unit cell, computing overlap integrals, and then diagonalizing the resulting matrix to extract the hybridized eigenstates of the system) we would immediately run into problems, because the unit cell has far too many atoms for this calculation to be feasible. Some moiré superlattices that have been studied in experiment have thousands of atoms per unit cell. There exist clever approximations that allow us to sidestep this issue, and these have been developed into very powerful tools over the past few years, but they are mostly beyond the scope of this document. I’d like to instead focus on conclusions we can draw about these systems using much simpler arguments.
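One such conclusion follows from geometry alone. For two identical triangular lattices with lattice constant a twisted by a small angle θ, the moiré lattice constant, and the carrier density corresponding to one electron per moiré unit cell, are given below; the numerical values use twisted bilayer graphene purely as an assumed illustration, and the general argument in the following paragraphs does not depend on them.

```latex
\lambda_{m} = \frac{a}{2\sin(\theta/2)} \approx \frac{a}{\theta},
\qquad
n_{\mathrm{cell}} = \frac{2}{\sqrt{3}\,\lambda_{m}^{2}};
\qquad
a \approx 0.246\ \mathrm{nm},\ \theta = 1.1^{\circ}
\;\Rightarrow\;
\lambda_{m} \approx 12.8\ \mathrm{nm},\ \
n_{\mathrm{cell}} \approx 7\times 10^{11}\ \mathrm{cm^{-2}}
\ \ \text{versus}\ \ \frac{2}{\sqrt{3}\,a^{2}} \approx 1.9\times 10^{15}\ \mathrm{cm^{-2}}\ \text{for the atomic lattice}.
```

That factor of roughly (λ_m/a)² ≈ 2,700 is what makes the filling argument developed below work: gate-accessible charge densities can sweep an entire moiré band.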

The physical arguments justifying the existence of electronic bands apply wherever and whenever an electron is exposed to an electric potential that is periodic, and thus has a set of discrete translation symmetries. For this reason, even though the moiré superlattice is not an atomic crystal, we can always expect it to support electronic band structure for the same reason that we can always expect atomic crystals to support band structure. Two crystals with identical crystal symmetries will always produce moiré superlattices with the same crystal symmetry, so we don’t need to worry about putting two triangular lattices together and ending up with something else.

Another property we can immediately notice is that the electron density required to fill a moiré superlattice band is not very large. This can be made clear by simply comparing the original atomic lattice to a moiré superlattice in real space. Full depletion of a band in an atomic crystal requires removing an electron for every unit cell, and full filling of the band occurs when we have added an electron for every unit cell. We have already discussed how this is not possible for the vast majority of materials using only electrostatic gating, because the resulting charge densities are immense. Full depletion of the moiré band, on the other hand, requires removing one electron per moiré unit cell, and the moiré unit cell contains many atoms. So the difference in charge density between full filling and full depletion of an electronic band in a moiré superlattice is actually not so great, and indeed this is easily achievable with available technology. Before we go on, I want to make a few of the limitations of this argument clear. There are two things this argument does not necessarily imply: the moiré bands we produce might not be near the Fermi level of the system at charge neutrality, and the bandwidth of the moiré superlattice need not be small. In the first case, we won’t be able to modify the electron density enough to reach the moiré band, and in the latter, we won’t be able to fill the moiré band’s highest energy levels using our electrostatic gate. We know of examples of real systems with moiré superlattice bands that fail each of those criteria. But if these moiré superlattice bands are near charge neutrality, and if their bandwidths are small, then we should be able to easily fill and deplete them with an electrostatic gate.

A variety of scanning probe microscopy techniques have been developed for examining condensed matter systems. It’s easy to justify why magnetic imaging might be interesting in gate-tuned two dimensional crystals, but magnetic properties of materials form only a small subset of the properties in which we are interested. Scanning tunneling microscopy is capable of probing the atomic-scale topography of a crystal as well as its local density of states, and a variety of scanning probe electrometry techniques exist as well, mostly based on single electron transistors. It’s worth pointing out that if you’re interested specifically in performing a scanning probe microscopy experiment on a dual-gated device, then these techniques both struggle, because the top gate both blocks tunnel current and screens out the electric fields to which a single electron transistor would be sensitive.
Magnetic fields have an important advantage over electric fields: most materials have very low magnetic susceptibility, and thus magnetic fields pass unmodified through the vast majority of materials. This means that magnetic imaging is more than just one of many interesting things one can do with a dual-gated device; in these systems, magnetic imaging is a member of a very short list of usable scanning probe microscopy techniques.

The simplest way in which we can use our nanoSQUID magnetometry microscope is as a DC magnetometer, probing the static magnetic field at a particular position in space. There are situations in which this is a valuable tool, and we will look at some DC magnetometry data shortly, but in practice our nanoSQUID sensors often suffer from 1/f noise, spoiling our sensitivity for signals at low or zero frequency. One of the primary advantages of the technique is its sensitivity, and to make the best of it we must measure magnetic fields at finite frequencies. We have already discussed how we can use electrostatic gates to change the electron density and band structure of two dimensional crystals. We will discuss shortly a variety of gate-tunable phenomena with magnetic signatures that appear in these systems. It follows, of course, that we can modulate the magnetic fields emitted by these electronic phases and phenomena by modulating the voltages applied to the electrostatic gates we use to stabilize these phases. This is illustrated in Fig. 1.15C: an AC voltage is applied to the bottom gate relative to the two dimensional crystal, and the local magnetic field is sampled at the same frequency by the SQUID. We can use this technique to extract ∂B/∂V at an array of positions above the two dimensional crystal. This technique is very simple and powerful, but it has a few important drawbacks. It can only produce a quantitative measurement of B if the same scan is performed for a large set of gate voltages, so that ∂B/∂V can be integrated. Many ferromagnets, for example, can be locked into quantum states that aren’t their ground states using a ferromagnetic hysteresis loop, and rapidly tuning the electron density tends to relax these phases to their ground states. So whenever we are interested in probing metastable magnetic states, we need to be careful about using this measurement method. Of course, we can also modulate the magnetic field through the nanoSQUID by modulating the position of the nanoSQUID. Since the magnetic field varies rapidly in space, we can often expect to get strong signals when we probe ∂B/∂x this way. The position of the nanoSQUID is rapidly modulated using a piezoelectric tuning fork pressed against the side of the nanoSQUID sensor; the details of the tuning fork hardware and measurement are discussed further in the appendix. This measurement method allows us to use the nanoSQUID to probe metastable or even non-gate-tunable magnetic phenomena at finite frequency. It has a few drawbacks of its own, though. The nanoSQUID sensors have parasitic sensitivities to local temperature and electric potential, and if these vary in space the resulting signals will contaminate our magnetic field data. As a result, whenever we use this contrast mechanism we must try to extract differences between two different magnetic states if we want quantitatively precise information about the magnetic field. We can also apply an AC current in the plane of the two dimensional crystal. Large currents will emit detectable magnetic fields through the Biot-Savart law, and under those conditions we can use this contrast mechanism to reconstruct the current density through our two dimensional crystal.
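As a minimal sketch of the gate-modulation mode described above, the snippet below integrates ∂B/∂V over a gate sweep to recover the field at a single pixel up to a constant offset. The sweep range, units, and signal shape are placeholders, not measured values.

```python
import numpy as np

# Placeholder gate sweep and dB/dV signal for one pixel.
V_gate = np.linspace(-5.0, 5.0, 201)                # bottom-gate voltage (V), assumed range
dB_dV = 1e-6 * np.sin(2 * np.pi * V_gate / 4.0)     # measured dB/dV (T per V), invented signal

# The cumulative integral over gate voltage recovers B(V_gate) only up to an
# unknown constant, which is why a dense set of gate voltages (and a reference
# point) is needed for a quantitative field map.
dV = V_gate[1] - V_gate[0]
B_relative = np.cumsum(dB_dV) * dV
B_relative -= B_relative[0]    # reference the field to the first gate voltage
```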

Systems can be complicated but not complex and complex but not complicated

Our studies suggest that a combination of its transmission dynamics, the way those dynamics are affected by management decisions such as the quantity of shade and the density of planting, and a variety of control-from-above elements together represent a source of control, one that sometimes fails. Understanding the general structure of ecological communities has long been a central goal of ecology, from Haeckel to us. Empiricists commonly, and probably necessarily, focus on the community of X, which is to say an assemblage of species defined by some set of criteria: the fungal community of Lake Wobegon, the community of gall-forming insects of oak trees, the microbial community of the human gut, the community of four ciliate species, and so on. Theoreticians perhaps feel less constraint. In the present article, we have defined the community as the herbivores of the coffee plant and their associates, in which top-down control is the goal of management. The framing of regulation from above in theoretical ecology translates directly into biological control in agroecology. Indeed, in agroecology regulation from above is elementary, in that the top-down agents are frequently obvious. However, stopping at that level of understanding may obscure more than clarify, much as the simple phrase “controlled from above” may indeed obscure. Precisely how that control is effected may involve many complicated interactions and contingencies, making, we argue, the framing of complex systems a necessary one.

The fungus that attacks the scale is most efficient when the scale is hyperdense at a local level, something that cannot happen unless it is under the protection of a mutualistic ant, which deters the other predator, which, however, is able to take advantage of a spatial pattern that is self-organized through a Turing-like process, and so forth. Indeed, we argue that the understanding we claim to have of this system so far comes from detailed study, both empirical and theoretical, and, most importantly, is dramatically enriched through the application of some of the concepts newly developed in the distinct field of complex systems. Almost 10 years ago, some of us published a summary of this overall system, suggesting that understanding it required more than just an identification of who eats whom. This update emphasizes that point. Our narrative in the present article is perhaps a bit heterodox. We study a very complicated system, and we seek to understand it through theoretical ecology. To some, at least in the recent past, this might imply a large-scale computer model or sophisticated data manipulation. Our approach is distinct, recalling the wisdom of Levins’ paper on the strategy of model building. We seek to understand, at a deep level, how this system works, not necessarily for the purpose of predicting its future state. We offer theoretical propositions, many of which are stimulated by mathematical arguments, but we do not seek what postmodern thinkers would have called a “totalizing discourse” with a large-scale model. Rather, we seek to use recent advances in complex systems as a way of stimulating thought, with the mathematical models that go along with them as “educating our intuition,” as Levins urged frequently. The models themselves represent approximate metaphors for this complex reality, all fitting into a hierarchy of understanding, which is mainly qualitative even though originally formulated through mathematical reasoning. Furthermore, our claim that this is a complex reality is meant to imply something deeper than the obvious claim that it is complicated. It is a complex system.
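For readers unfamiliar with Turing-type self-organization, the sketch below integrates a generic two-species reaction-diffusion model (Schnakenberg kinetics) in Python. It is a toy illustration of the mechanism invoked above, not a calibrated model of the ant, scale, beetle, and fungus system; all parameters are generic choices in the Turing-unstable regime.

```python
import numpy as np

def laplacian(Z):
    # five-point Laplacian with periodic boundaries (grid spacing = 1)
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

n, steps, dt = 128, 12000, 0.02
Du, Dv = 0.4, 8.0          # the "inhibitor" must diffuse much faster than the "activator"
a, b = 0.1, 0.9            # Schnakenberg kinetic parameters (generic choice)

rng = np.random.default_rng(0)
u = 1.0 + 0.02 * rng.random((n, n))   # activator, perturbed around its steady state a + b
v = 0.9 + 0.02 * rng.random((n, n))   # inhibitor, perturbed around b / (a + b)**2

for _ in range(steps):
    uv2 = u * u * v
    u += dt * (Du * laplacian(u) + a - u + uv2)
    v += dt * (Dv * laplacian(v) + b - uv2)

# After enough steps, u develops stationary spots/stripes with a characteristic
# wavelength: spatial pattern arises with no externally imposed template.
```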

For example, if the only players in the system were Azteca, C. viridis, and A. orbigera, the system wouldn’t be exceptionally complicated, but it would be a complex system, because it would have a clear emergent property. Even adding the phorid would mean two predators and two prey, but the spatial pattern that emerges and the dependence of one system on a second system operative at a completely distinct time scale are essential structural components of the system as a whole. The emergence would defy understanding if only the separate component parts were studied, which is to say if it were approached from a purely reductionist perspective. If the only players were the ants and the coffee berry borer, but the ants did not exhibit trait-mediated indirect interactions, the system would be complicated but not necessarily complex. This distinction between complicated and complex is important for our narrative. Because it is a complex system, it requires a more holistic approach to understand and manage, and there is more potential for surprise. A merely complicated system would not have these characteristics. That our model system is coffee is significant in several ways. First, traditional coffee management, with its characteristic shade trees, helps to create landscapes that are friendly to biodiversity conservation. It is a classic high-quality matrix for all sorts of animals and plants. Second, it involves a commodity that is of extremely high value, sometimes the main source of wealth for entire countries. Third, it is the basis of livelihood for millions of small farmers the world over. Fourth, when properly cultivated with shade, it joins other agroforestry systems in the worldwide struggle against climate change. Given all that, understanding the details of its operation would seem worthwhile, and marshaling recent insights from complex systems to anchor that narrative brings one of the classical questions of community ecology into focus as a practical issue. Consequently, besides being of potential importance for ecology, it makes ecology important for practical aspects of this valuable crop.

It is, for example, evident from only a qualitative understanding of the control-from-above system that a key element is the species of ant that nests in the shade trees and that, if those shade trees are eliminated, the whole control structure will be dramatically interrupted. Questions also arise about generality. Does this model system reflect something more general about the structure of control from above, or does it simply reflect interactions of this one particular system? First, most terrestrial systems have a spatial component involved, and framing the spatial component as one in which a subsystem operates to effectively create a spatial pattern in which other subsystems may operate is likely to apply frequently. Indeed, the idea of a predator–prey system generating a Turing pattern may be increasingly appreciated as more research programs interrogate the idea. Second, population dynamics unfolding on this space are likely to be nonlinear, and this nonlinearity will frequently be of the form that critical transitions lead to an alternative equilibrium within hysteretic zones, which may be multiple and constrain the herbivores over which control is being exerted. Third, the idea that multiple herbivores have their own suite of controlling factors is almost certainly true, but the idea that there will be connections, even if weak, with other sub-components of the control from above is likely to be characteristic. These three generalities encompass the complex systems topics of Turing pattern formation, critical transitions, hysteresis, chaos, basin boundary collisions, trait-mediated indirect interactions, and scale-dependent spatial processes, all of which are exemplified in our model system, and certainly may be embedded in other systems of control from above. The message is not that these particular topics are essential but, rather, that control from above is not the one-dimensional process frequently imagined of a predator guild preying on a prey guild but, instead, a complex community of predators and parasites and diseases that interact with one another in complicated ways to eventually generate a self-organized system that exerts effective control over the herbivory. Much as one might say that the vertebrate circulatory system is responsible for bringing oxygen to each cell in the body, one might similarly simplify and say that natural enemies in the coffee agroecosystem are responsible for the regulation of potential pests. However, it is the heart, the veins, the arteries, exchanges across membranes, and so forth that tell the real story of how the delivery of oxygen to our tissues actually happens. It is a complex system, the details of which are certainly of interest to health and healing. Similarly, in our agroecosystem example, the subsystem that creates large-scale pattern sets the stage for a subsystem involving a predator and a disease that affect regulation of one pest, whereas the community structure of ants determines the efficiency of their predacious activities on a second pest, and the disease that helps regulate the first pest is an antagonist to the third pest. This is all to say that yes, it is control from above, but that control is delivered through the ecological complexity of the community of natural enemies. It is misleading to suggest that listing the natural enemies and merely identifying them as such is sufficient. It is only through the lens of its reality as a complex system that we may gain a full appreciation of the ecological principle of top-down control, which then can be fully exploited in attempts to aid the management of this important agroecosystem. There is something of a conundrum in this narrative.

Although it is clear that knowledge of all the ecological complexity could inform practical decisions that producers might want to make, is such detailed knowledge really necessary to provide useful advice to the farmer? If ecological knowledge of the particular system is primitive, could well-meaning agroecological advisors give advice that will have unintended negative consequences? Post-WWII industrial agriculture enthusiasts embraced DDT and other pesticides, creating the well-known pesticide treadmill that haunts us still today. Indeed, that is one of the issues that caused many environmentally conscious analysts to call for the science of ecology to be more actively embraced by agricultural planners. However, ecology is complicated. Secondary consequences cannot necessarily be predicted short of detailed study, and the normal rules of thumb extrapolated from a few experiments or extralocal traditions could backfire. Perhaps the famous medical practitioner’s oath primum non nocere makes sense in agriculture as well. As farmers seek solutions to perceived problems on their farms, agroecologists rightly wish to use the science of ecology to help. However, frequently, ecological knowledge of the particular system is limited, because it is only recently that agroecological advocates have begun to break into the mainstream, and the basic research required to understand some of the vexing problems the farmers face has yet to be done. It is therefore common to use a few rules of thumb: avoid monocultures, don’t poison your natural enemies, maintain healthy soil, and so on. Such rules of thumb, based on perceived ecological principles, for the most part make sense and probably conform well to the admonition primum non nocere. However, it is worth remembering the Dust Bowl, pest resurgence following pesticides, ocean dead zones, and other consequences that we live with today because a previous generation of farm advocates, equally sincere in their desires to help farmers, were prematurely confident in the ability of their tools to help the farmer.

There are a growing number of examples of a positive relationship between diversity and ecosystem service. As an ecosystem service, pollination can increase the fruit or seed quality or quantity of 39 of the world’s 57 major crops, and a more diverse pollinator community has been found to improve pollination service. For some crops, wild bees are more effective pollinators on a per-visit basis than honey bees and/or can functionally complement the dominant visitor. A less explored reason is that in diverse communities, interspecific interactions potentially alter behaviour in ways that increase pollination effectiveness. Little is known about how community composition affects pollinator behaviour and the role such species interactions play in determining diversity–ecosystem service relationships. Interspecific interactions can result in non-additive impacts of diversity on ecosystem functions. Examples include the facilitation of resource capture in diverse groups of aquatic arthropods, and non-additive increases in pest suppression and alfalfa production in enclosures with diverse natural enemy guilds. In diverse communities, one mechanism by which species interactions may augment function is the potential to modify the behaviour and the resulting effectiveness of the ecosystem service providers. Interactions with non-Apis bees cause Apis mellifera L. to move more often between rows of sunflower, increasing their pollination efficiency. Such changes in pollinator movement are particularly important in crop species with separate male and female flowers, and those with self-incompatibility.

Increases in education spending are one of the most notable aspects of the budget request

The BLS estimates that Colorado’s unemployment rate in December 2017 was 3.0 percent, while Denver’s unemployment rate was 2.9 percent. Earlier in 2017, the state unemployment rate reached a record low of 2.3 percent. This is among the lowest unemployment rates recorded by any state in recent decades. The other six metropolitan statistical areas tracked by the BLS averaged an unemployment rate of 3.4 percent. Fort Collins had the lowest unemployment rate at 2.5 percent. Only two cities, Grand Junction and Pueblo, had an unemployment rate greater than the national unemployment rate of 4.1 percent. Personal income growth among state residents reached nearly 8 percent in 2014, but growth slowed over the next two years. This trend reversed in 2017 as personal income growth increased to 5.4 percent. This growth rate exceeds the national rate by more than 2 percentage points. According to the OSPB, per capita income and wage growth in Colorado over the past year also outpaced the national figures. With regard to party registration in Colorado, voters were nearly evenly divided among Democratic, Republican, and unaffiliated categories at the time of the 2016 election. According to voter registration data from the Secretary of State’s office, the state had nearly 3.3 million active voters in November 2016. A plurality of these voters registered as unaffiliated, while the shares of Democratic and Republican registrants were nearly equivalent. In February 2018, the number of active voters decreased relative to November 2016 by about 1.7 percent.

This may be partially attributable to controversy surrounding President Donald Trump’s Commission on Voter Fraud, which made data privacy a concern after the commission requested, “voluminous information on voters, including names, addresses, dates of birth, political affiliations and the last four digits of Social Security numbers, along with voting history”. Fifteen months after the 2016 election, the proportion of unaffiliated voters in the state increased to 36.3 percent, while the share of Democrats and Republicans decreased to 31.1 percent and 30.8 percent, respectively. This change is likely driven by primary election reforms approved by voters in the 2016 election. Voters overwhelmingly approved Proposition 107, which adopted a presidential primary in lieu of the existing caucus system, and Proposition 108, which allowed unaffiliated voters to participate in the party primary of their choice. Previously, unaffiliated voters were prohibited from participating in any primary elections or caucus meetings. Because this reform allows unaffiliated voters to participate in the primary of their choosing, it appears that a substantial number of Coloradans changed their party registration status to take advantage of this new opportunity. Colorado’s economic trajectory remains generally positive. In late March, the Governor’s Office of State Planning and Budgeting released its revised economic forecast. The report summarized the condition of Colorado’s economy by stating, “Colorado’s economy is on solid footing with strong employment growth and expectations of an ongoing expansion. New business formation continues to grow, while Colorado oil production is at record levels. Although much of the state’s economic growth has occurred along the Front Range, stabilizing farmland values and increases in energy prices and production have recently supported rural areas as well. Looking forward, higher costs of living and tight labor market conditions are expected to constrain further growth through the forecast period”.

The OSPB characterized the 3.1 percent increase in General Fund revenue in the 2016–2017 fiscal year as “modest,” while projecting a larger revenue increase of 12.9 percent for the 2017–2018 fiscal year. According to the OSPB, revenue for the 2018–2019 fiscal year is projected to grow from $11.6 billion to $12.0 billion. The more substantial increase during the 2017–2018 fiscal year is attributable to “strong economic growth, a rebound in corporate income tax receipts, robust investment income gains, and federal tax changes.” Regarding the latter, the $1.5 trillion tax reform package signed into law by President Trump in December 2017, among other things, lowered individual income tax rates and nearly doubled the standard deduction for individuals and families. Increasing the standard deduction makes it likely that more tax returns will be filed using the standard deduction instead of itemized deductions. This, combined with reduced tax rates, means that individuals are likely paying less in federal taxes despite having a larger taxable income. Because Colorado state taxes are 4.63 percent of each individual’s federal taxable income, the state expects to receive greater tax revenue while most state residents can expect to pay less in federal taxes. This is particularly notable in Colorado, where TABOR requires any tax increase to go before the electorate for approval in a general election. Accordingly, while the state income tax rate remains unchanged, the tax cut at the federal level in effect imposes a tax increase at the state level because many tax returns will report a greater taxable income. Most of the state’s General Fund revenue comes from individual and corporate taxes. The OSPB reports that income tax withholdings increased by more than 9 percent over the past year. Partly because of the new federal tax law, the OSPB claims that there is a “high degree of uncertainty surrounding the forecast for individual income tax collections.” While individual income tax revenues increased by 3.6 percent in the prior fiscal year, the state projects further increases in excess of 13 percent in the current fiscal year. Income tax revenue is projected to grow by an additional 1.7 percent in the next fiscal year.
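
The interaction between the federal changes and Colorado’s flat 4.63 percent rate can be illustrated with a short sketch. All dollar amounts and federal rates below are hypothetical, chosen only to show the direction of the effect described above, not actual taxpayer data.

```python
# Hypothetical illustration of how the 2017 federal tax changes can raise
# Colorado income tax liability even while federal liability falls.
# All dollar amounts and federal rates are made-up examples, not actual data.

CO_RATE = 0.0463  # Colorado's flat income tax rate on federal taxable income

def colorado_tax(federal_taxable_income: float) -> float:
    """Colorado income tax is a flat percentage of federal taxable income."""
    return CO_RATE * federal_taxable_income

# Before reform: the filer itemizes, so taxable income is lower, but the
# (illustrative) effective federal rate is higher.
old_taxable = 60_000
old_federal_tax = 0.25 * old_taxable

# After reform: the filer takes the larger standard deduction, which in this
# hypothetical leaves a somewhat higher taxable income, taxed at a lower rate.
new_taxable = 65_000
new_federal_tax = 0.22 * new_taxable

print(f"Federal tax:  {old_federal_tax:,.0f} -> {new_federal_tax:,.0f}")
print(f"Colorado tax: {colorado_tax(old_taxable):,.0f} -> {colorado_tax(new_taxable):,.0f}")
# Federal tax falls (15,000 -> 14,300) while Colorado tax rises (2,778 -> 3,010),
# which is the effect described in the text.
```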

Further good news regarding state revenue collection is that corporate income taxes are projected to grow by 38.6 percent in the current fiscal year after a 21.9 percent decrease last year. This would be the first time in the last five years that corporate tax revenues increased. Likewise, sales tax revenues are projected to grow by 9.6 percent in the current fiscal year and 4.8 percent the following year. Part of this increase in sales tax revenue is attributable to the increased special sales tax on recreational marijuana purchases, which rose from 10 percent to 15 percent following the passage of Senate Bill 17-267 in 2017. According to the OSPB forecast, individual income taxes constitute $7.65 billion of the expected $11.6 billion in General Fund revenue for the 2017–2018 fiscal year. Sales and use tax revenue are projected at $3.5 billion. Corporate income tax revenue is expected to provide an additional $0.71 billion. These three revenue sources constitute 95 percent of total General Fund revenue. State revenues were relatively stable around the $10 billion mark over the past three years. The $11.6 billion revenue estimate for the upcoming fiscal year represents a 12.6 percent increase, which would be the largest growth in state revenue since 2005, when revenues increased 13.1 percent. Such an increase would be similar in magnitude to the percentage of revenue lost during the Great Recession. Since 2009, individual income tax revenue has increased each year, although not in a strictly linear pattern. Sales and use tax revenue has also increased each year over the past decade. Corporate tax revenue has exhibited greater volatility. After falling to less than $300 million during the Great Recession, corporate income tax revenue rebounded over the next five years to more than $700 million in the 2013–2014 fiscal year. The OSPB cites global economic factors, such as a strong dollar and decreases in oil, gas, and other commodity prices, as the catalysts for the three-year decline in corporate tax revenue beginning in 2014. After falling 21.9 percent in the third year of this recent decline, corporate tax revenue is estimated to increase by 38.6 percent in the current fiscal year. In addressing how federal tax reform may affect state corporate tax revenue, the OSPB projects continued growth in corporate tax revenue, but cautions that “future increases will be constrained by higher business costs, especially for employee compensation, which will reduce profit margins and result in lower tax liabilities.”

Taxes from the state’s legal marijuana market continue to grow. According to data released by the Colorado Department of Revenue, total marijuana sales surpassed $1.5 billion in 2017. Of this total, $1.09 billion in sales came from the retail market, while medical marijuana sales were approximately $0.42 billion. Table 2 reports annual marijuana sales and tax revenue data. Sales have increased each calendar year since the retail market began operation on January 1, 2014, but the growth rate has gradually declined each year. In 2014, marijuana sales totaled nearly $680 million. This figure increased by 45.7 percent in 2015 to $990 million. Sales increased by 31.3 percent and surpassed the $1 billion mark for the first time in 2016, reaching a total of $1.3 billion. Total sales in 2017 amounted to $1.5 billion, which represents a 15.3 percent increase.
If the trend continues, Colorado could expect sales to plateau, since the growth rate has decreased by an average of 15 percentage points each year. Sales data through September 2018 indicate sales of nearly $1.16 billion. The monthly sales average for 2018 puts the state on track for an annual marijuana sales total of about $1.54 billion, which would be the largest sales amount to date and constitute a slight increase of 2.4 percent from 2017. State revenue from the medical and retail marijuana markets in the form of taxes and fees reached nearly $250 million in 2017. This is a 27.8 percent increase from 2016, following a 48.5 percent increase in tax revenue from 2015 to 2016. During the first 10 months of 2018, the state reported tax and fee revenues of $223 million. A linear projection of tax revenue for the remainder of the year suggests that annual tax revenue would reach nearly $268 million, an increase of about 8 percent from the prior year. The largest annual increase in marijuana tax revenue occurred in the second year after retail sales became legal, when tax revenue nearly doubled. Increasing sales have driven greater tax revenue, and the state legislature modified the marijuana tax structure in the prior session with the passage of Senate Bill 17-267. From January 1, 2014 through June 30, 2017, retail marijuana sales were subject to the state’s 2.9 percent sales tax and a special 10 percent sales tax in addition to local sales taxes.
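
The growth rates and the 2018 projection quoted above follow from simple arithmetic on the reported totals. The sketch below uses the rounded sales figures cited in the text and assumes the projection is a straight extrapolation of the monthly average, so small differences from the quoted percentages reflect rounding of the inputs.

```python
# Year-over-year growth in Colorado marijuana sales, using the annual totals
# cited above (billions of dollars), plus a simple extrapolation of 2018 sales
# from the January-September figure. Inputs are rounded, so results differ
# slightly from the percentages quoted in the text.

annual_sales = {2014: 0.68, 2015: 0.99, 2016: 1.30, 2017: 1.50}

years = sorted(annual_sales)
for prev, curr in zip(years, years[1:]):
    growth = (annual_sales[curr] - annual_sales[prev]) / annual_sales[prev] * 100
    print(f"{prev} -> {curr}: {growth:.1f}% growth")

# Sales through September 2018 were roughly $1.16 billion over 9 months.
monthly_avg = 1.16 / 9
projected_2018 = monthly_avg * 12
print(f"Projected 2018 sales: ${projected_2018:.2f} billion "
      f"({(projected_2018 - 1.50) / 1.50 * 100:+.1f}% vs 2017)")
```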

Beginning on July 1, 2017, marijuana sales are exempt from the state sales tax and are instead taxed at a special rate of 15 percent. A 15 percent excise tax also applies to retail marijuana sales from cultivation facilities to dispensaries or manufacturers. According to the Joint Budget Committee, marijuana tax revenue from the most recent fiscal year was allocated as follows: 49.4 percent to the marijuana tax cash fund, 39 percent to several K-12 education funds, 6.7 percent to local governments, and 4.9 percent to the General Fund.

Relative to last year’s budget, Governor Hickenlooper’s budget request for the 2018–2019 fiscal year is much more optimistic. This positive tone is a product of a good economic climate and the successful passage of a major budgetary reform in the prior session. Transitioning hospital provider fees into a government enterprise fund made these funds TABOR exempt. In the 2016–2017 fiscal year, hospital provider fee revenue was $654.4 million. In future years, these funds will not count toward the TABOR revenue cap. The governor emphasized the importance of this reform by noting that “The passage of S.B. 17-267 has materially and positively changed the State’s financial outlook compared with one year ago, when the request had to close a $500 million funding gap in the General Fund.” Beyond this important budgetary reform, Governor Hickenlooper also lauded the strong upward trajectory of the state’s economy by claiming, “Colorado’s economy continues to outperform nearly every state and the national economy overall.” Statewide unemployment remains low and job creation numbers are strong, with approximately 53,000 new jobs projected for 2018. As the state’s population continues to grow, Hickenlooper’s budget request reflects spending priorities to address increased demand for certain state services. K-12 spending is proposed to increase by $84.6 million, which represents an increase of $343.38 per student. According to state estimates, the 178 school districts in Colorado currently serve the educational needs of more than 865,000 students.

The area under the fluorescence decay curve was calculated for each well

DNA quality was verified by pulsed-field gel electrophoresis. DNA fragments longer than 50 Kb were used to construct a 10X Gemcode library using the Chromium instrument and sequenced at the HudsonAlpha Institute for Biotechnology on a HiSeq X system with paired-end 150 bp reads. Approximately 95 Gb of 10X Chromium library data were sequenced. To increase sequence diversity and depth, three separate mate-pair libraries were constructed with 2–5 Kb, 5–7 Kb, and 7–10 Kb jumps using the Illumina Nextera Mate-Pair Sample Preparation Kit. In addition, two size-selected Illumina genomic libraries, ∼470 bp and ∼800 bp, were sequenced. The ∼470 bp and ∼800 bp libraries were made using the Illumina TruSeq DNA PCR-free Sample Preparation V2 kit. The ∼470 bp library was designed to produce “overlapping libraries” after sequencing with paired-end 265 bp reads on an Illumina HiSeq2500 system, producing “stitched” reads of approximately 265 bp to 520 bp in length. The ∼800 bp library was sequenced on an Illumina HiSeq2500 system with paired-end 160 bp reads, while the mate-pair libraries were sequenced on an Illumina HiSeq4000 system with paired-end 150 bp reads. A total of ∼433 Gb of additional Illumina sequencing data were generated. Illumina library construction and sequencing were conducted at the Roy J. Carver Biotechnology Center, University of Illinois at Urbana-Champaign. The genome of “Draper” was assembled using the DeNovoMAGIC software platform, a de Bruijn graph-based assembler designed for highly polyploid, heterozygous, and/or repetitive genomes.
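
For readers unfamiliar with sequencing depth, the relationship between library yield and nominal coverage is simple arithmetic. The sketch below uses the yields reported above together with a placeholder haploid genome size, which is an assumption for illustration only and should be replaced with the actual assembly size.

```python
# Approximate sequencing depth = total bases sequenced / genome size.
# The yields come from the text above; the genome size is a placeholder,
# so the printed depths are purely illustrative.

GENOME_SIZE_GB = 1.7        # hypothetical haploid genome size, in gigabases
chromium_yield_gb = 95      # 10X Chromium data (~95 Gb)
illumina_yield_gb = 433     # additional Illumina PE and mate-pair data (~433 Gb)

for name, yield_gb in [("10X Chromium", chromium_yield_gb),
                       ("Illumina PE + MP", illumina_yield_gb)]:
    depth = yield_gb / GENOME_SIZE_GB
    print(f"{name}: ~{depth:.0f}x nominal coverage")
```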

The Chromium 10X data were utilized to phase, elongate, and validate haplotype scaffolds. Four Dovetail Hi-C libraries were prepared as described previously and sequenced on an Illumina HiSeq X system with paired-end 150 bp reads to a total of 90.7X physical coverage of the genome. The de novo genome assembly, raw genomic reads, and Dovetail Hi-C library reads were used as input data for HiRise, a software pipeline designed specifically for using proximity ligation data to scaffold genome assemblies. Illumina genomic and Dovetail Hi-C library sequences were aligned to the draft input assembly using a modified SNAP read mapper. The separations of Dovetail Hi-C read pairs mapped within draft scaffolds were analyzed by HiRise to produce a likelihood model for genomic distance between read pairs, and the model was used to identify and break putative misjoins and to make joins to close gaps between contigs.

Plant tissue samples were collected from blueberry cv. Draper grown in a growth chamber. For the fruit developmental series, three biological replicates each of berries at seven developmental stages were collected from cv. Draper in a field at the Horticulture Teaching and Research Center, Michigan State University, in July 2017. All plant tissues were immediately flash frozen in liquid nitrogen, and total RNA isolation was performed using the KingFisher Pure RNA Plant kit. Isolated total RNA was quantified using a Qubit 3 fluorometer. RNA libraries were prepared according to the KAPA mRNA HyperPrep kit protocol. All samples were submitted to the Michigan State University Research Technology Support Facility Genomics core and sequenced with paired-end 150 bp reads on an Illumina HiSeq 4000 system.

The draft genome of V. corymbosum cv. Draper was annotated using the MAKER annotation pipeline. Transcript and protein evidence used in the annotation included protein sequences downloaded from the A. thaliana and UniProtKB plant databases, V. corymbosum ESTs from NCBI, and transcriptome data assembled with StringTie from different blueberry tissues.

A custom repeat library and Repbase were used to mask repetitive regions in the genome using RepeatMasker. Ab initio gene prediction was performed using the gene predictors SNAP and Augustus. The resulting MAKER max gene set was filtered to select gene models with a Pfam domain and an annotation edit distance <1.0. The filtered gene set was further scanned for transposase coding regions. The amino acid sequences of predicted genes were searched against a transposase database. Alignments between genes and transposases were further filtered to remove those caused by low-complexity sequences. The total length of each gene matching transposases was calculated from the search output. If more than 30% of a gene’s length aligned to transposases, the gene was removed from the gene set. Furthermore, to assess the completeness of the annotation, the V. corymbosum MAKER standard gene set was searched against the BUSCO v.3 plant dataset. Genes were annotated with Pfam domains using InterProScan v5.26–65.0.

To identify and classify repetitive elements in the genome, LTR retrotransposon candidates were identified using LTRharvest and LTR_FINDER and further validated and classified using LTR_retriever, which also produced a non-redundant LTR library. Miniature inverted-repeat transposable elements (MITEs) were identified using MITE-Hunter. MITEs were manually checked for target site duplications and terminal inverted repeats and classified into superfamilies. Those with ambiguous target site duplications and terminal inverted repeats were classified as “unknowns.” Using the MITE and LTR libraries, the V. corymbosum genome was masked using RepeatMasker. The masked genome was further mined for repetitive elements using RepeatModeler. The repeats were then categorized into two groups: sequences with and without identities. Those without identities were searched against the transposase database; if they had a match, they were considered transposons.
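
The transposase-screening rule described above (drop any gene model with more than 30% of its length covered by transposase alignments) can be expressed compactly. The sketch below assumes alignment coordinates have already been parsed from the search output; the function names and data layout are hypothetical, not part of the published pipeline.

```python
# Remove gene models whose cumulative transposase-aligned length exceeds 30%
# of the gene length. Alignment intervals are assumed to be 1-based, inclusive
# coordinates on the gene, already filtered for low-complexity matches.

def merge_intervals(intervals):
    """Merge overlapping alignment intervals so shared bases are counted once."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

def passes_transposase_filter(gene_length, transposase_hits, max_fraction=0.30):
    """Return True if the gene should be kept in the annotation."""
    covered = sum(end - start + 1 for start, end in merge_intervals(transposase_hits))
    return covered / gene_length <= max_fraction

# Example: a 3,000 bp gene whose merged transposase alignments cover ~1,300 bp
# (about 43% of its length) would be removed from the gene set.
print(passes_transposase_filter(3000, [(1, 800), (700, 1300)]))  # False -> removed
```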

The repeats were then filtered to exclude gene fragments using ProtExcluder and summarized using the ‘fam_coverage.pl’ script in the LTR_retriever package. The assembly continuity of the repeat space was assessed using the LTR Assembly Index (LAI) implemented in the LTR_retriever package. LAI was calculated based on either 3 Mb sliding windows or the whole assembly using LAI = intact LTR-RT length/total LTR-RT length. For the sliding window estimation, a step of 300 Kb was used. To account for the dynamics of LTR retrotransposons, LAI was adjusted by the mean identity of LTR sequences in the genome based on an all-versus-all blastn search, which was also performed by the LAI program.

Illumina adapters were removed from the raw reads using Trimmomatic v0.33, and trimmed reads were filtered using the FASTX-Toolkit. After quality assessment using FastQC, the filtered reads were aligned to the V. corymbosum genome using STAR. For the samples that were used for annotation, transcript assembly was performed de novo using StringTie. Counts of uniquely mapping reads were generated with HTSeq for all 35 RNA-seq datasets. Multi-mapping reads were excluded from the analysis except for the tandem gene expression analysis. Differential gene expression analysis was performed using the DESeq2 pipeline across fruit developmental stages with three biological replicates per developmental stage. Gene expression values were derived by calculating fragments per kilobase per million reads mapped (FPKM) using the standard formula: FPKM = fragments per million mapped reads/gene length in kilobases [Kb]. To construct the gene co-expression network, genes that were not expressed or were very weakly expressed in 30 or more conditions were first excluded from the analysis. The count data were then transformed into variance-stabilized values using the variance stabilizing transformation function in DESeq2. Pairwise correlations of gene expression were calculated using the Pearson correlation coefficient (PCC) and mutual rank (MR) using scripts available for download from the project’s data repository. MR scores were transformed to network edge weights using a geometric decay function; five different co-expression networks were constructed with x set to 5, 10, 25, 50, and 100, respectively. Edges with PCC <0.6 or edge weight <0.01 were excluded. For each network, modules of coexpressed genes were detected using ClusterONE v1.0 with default parameters, and modules with P value >0.1 or quality score <0.2 were excluded. The results from all co-expression networks were then combined by collapsing modules into metamodules of nonoverlapping gene sets (a worked sketch of the FPKM and edge-weight calculations appears below).

Total antioxidant capacity of tissues from the fruit developmental panel was analyzed using the ORAC assay. Briefly, ∼20–30 mg of frozen ground fruit tissue was weighed for each sample prior to extraction. Sample extractions were performed on ground tissue using 1.8 mL of ice-cold 50% acetone. Samples were vortexed and then placed on a shaker for 5 minutes at room temperature. Samples were then centrifuged at 4°C for 15 minutes. The ORAC assay was performed in a 96-well black microplate using the FLUOstar OPTIMA microplate reader. Each reaction well contained 150 μL of 0.08 μM fluorescein and 25 μL of 75 mM phosphate buffer, Trolox standards, or diluted sample extracts. For blueberry tissue samples, 1:80–1:20 dilutions were used. Upon loading all appropriate wells, the 96-well microplate was placed into the microplate reader and incubated for 10 minutes at 37°C. Following incubation, 25 μL of 150 mM AAPH was added to each well, and fluorescence measurements began immediately. Fluorescence measurements were taken every 90 seconds per cycle for 70 cycles, until the fluorescent probe signal was completely quenched.
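
A minimal sketch of the expression and network calculations referenced above is given below. The FPKM function follows the standard definition; the mutual-rank decay uses the common exponential form exp(-(MR - 1)/x), which is an assumption, since the text names only “a geometric decay function” with parameter x.

```python
import math

def fpkm(fragment_count, total_mapped_fragments, gene_length_bp):
    """Fragments per kilobase of gene per million mapped fragments."""
    fragments_per_million = fragment_count / (total_mapped_fragments / 1e6)
    return fragments_per_million / (gene_length_bp / 1e3)

def mr_to_edge_weight(mutual_rank, x):
    """Transform a mutual rank (MR) into a network edge weight.

    The exponential-decay form used here is a common choice and is assumed;
    the text specifies only a geometric decay with parameter x
    (x = 5, 10, 25, 50, or 100 for the five networks)."""
    return math.exp(-(mutual_rank - 1) / x)

# Example: 500 fragments on a 2 kb gene in a library of 20 million mapped fragments.
print(round(fpkm(500, 20_000_000, 2000), 2))   # 12.5 FPKM
# Edges with weight < 0.01 were excluded, so with x = 10 an MR of 48 or more is dropped.
print(mr_to_edge_weight(48, 10) < 0.01)        # True
```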

The total antioxidant capacity of a sample was calculated by subtracting the AUC of the blank curve from the AUC of the sample curve to obtain the net AUC. Using Trolox of known concentrations, a standard curve was generated, and the total antioxidant capacity of each sample was calculated as Trolox equivalents (see the sketch below). Each sample was run twice for two technical replicates. The coefficient of variation between technical replicates was required to be less than 0.20. Biological replicates were run for all tissues in the fruit developmental series.

Berries from “Draper” were collected as described above. Approximately 100 mg of each frozen ground sample was resuspended in extraction solvent in a 2 mL tube. Ground tissue was immediately mixed thoroughly to prevent thawing during extraction and to prevent metabolism of analytes by enzymes in the samples. All tubes were spun down for 10 minutes at 13,000 × g to pellet protein and other insoluble material. Then, 1 mL of supernatant was transferred to an autosampler vial. Anthocyanin content was evaluated by LC-MS as follows: 5 μL of sample extract were separated using a 10 minute gradient on a Waters Acquity HSS-T3 UPLC column on a Waters Acquity UPLC system interfaced with a Waters Xevo G2-XS quadrupole time-of-flight mass spectrometer. Column temperature was maintained at 40°C, and the flow rate was 0.3 mL/min with starting conditions of 100% solvent A and 0% solvent B. The gradient was as follows: hold at 100% A for 0.5 minutes, ramp to 50% B at 6 minutes, then ramp to 99% B at 6.5 minutes, hold at 99% B to 8.5 minutes, return to 100% A at 8.51 minutes, and hold at 100% A until 10 minutes. Mass spectra were acquired in positive ion mode electrospray ionization over m/z 50–1500 in continuum mode using a data-independent MSE method that acquires data under both low and high collision energy conditions, with the high collision energy setting using a ramp from 20–80 V. Capillary voltage was 3 kV, desolvation temperature was 350°C, source temperature was 100°C, cone gas flow was 25 L/hr, and desolvation gas flow was 600 L/hr. Correction for mass drift was performed using continuous infusion of the lock mass compound leucine enkephalin. Anthocyanins and other related flavonoids were identified based on accurate mass and fragmentation pattern. Peak areas were determined using QuanLynx within the MassLynx software package. Relative anthocyanin content was calculated for each sample as: peak area of the compound/peak area of the internal standard/weight of extracted tissue.

Cardiovascular disease (CVD) remains the leading cause of mortality in the United States and, after decades of decline, is rising coincident with the increases in obesity, insulin resistance, and diabetes that characterize cardiometabolic risk (CMR). Notwithstanding hereditary predisposition, reduction in identified, modifiable lifestyle risk factors can reverse CMR and CVD. It is estimated that 45.4% of all cardiometabolic deaths in the United States due to heart disease, stroke, and diabetes are associated with sub-optimal intakes of 10 dietary factors. Fewer than 1% of American children and adolescents meet the full recommended metrics of heart-healthy nutrition, falling especially short of recommended intakes of fruits, vegetables, fiber, and essential fatty acids. Intensive pediatric lifestyle interventions for obesity are effective in achieving significant reductions in body mass index but do not elicit stable changes in nutrition habits in children and adolescents. These studies suggest a critical need for developing innovative tools to improve diet quality in youth. We have previously shown that twice-daily consumption for two weeks of a whole-food-based nutrient bar, composed of a blueberry, dark chocolate, red grape, and walnut matrix, soluble and insoluble fiber, and supplemental vitamins, minerals, and essential long-chain fatty acids, significantly increased high-density lipoprotein (HDL) cholesterol, due primarily to a 28% increase in large HDL particles, in generally healthy and insulin-sensitive lean and overweight adults.
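
Returning to the ORAC quantification described earlier, the calculation can be sketched compactly. The trapezoidal AUC and the linear standard-curve fit are standard choices assumed here, and the decay curves and Trolox values are placeholders rather than measured data.

```python
import numpy as np

def auc(fluorescence, cycle_minutes=1.5):
    """Area under a fluorescence decay curve (trapezoidal rule), with
    readings taken every 90 seconds (1.5 minutes) per cycle."""
    return np.trapz(fluorescence, dx=cycle_minutes)

def trolox_equivalents(sample_curve, blank_curve, trolox_conc, trolox_net_auc):
    """Net AUC relative to blank, converted via a linear Trolox standard curve."""
    net_auc = auc(sample_curve) - auc(blank_curve)
    slope, intercept = np.polyfit(trolox_net_auc, trolox_conc, 1)
    return slope * net_auc + intercept

# Placeholder decay curves and a placeholder 4-point Trolox standard curve.
blank = np.exp(-0.15 * np.arange(70)) * 100
sample = np.exp(-0.05 * np.arange(70)) * 100
trolox_concs = np.array([6.25, 12.5, 25.0, 50.0])          # micromolar standards
trolox_net_aucs = np.array([150.0, 320.0, 650.0, 1300.0])  # hypothetical net AUCs

print(round(trolox_equivalents(sample, blank, trolox_concs, trolox_net_aucs), 1))
```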

The content of bio-active compounds in plant foods is highly influenced by genetics

This condition can cause a feedback loop in which contact between bacteria and epithelial cells leads to dysregulation of the mucosal immune response. This contact can lead to a bacterial biofilm, formed when bacteria attach themselves to surfaces in the aqueous environment of the gut and begin to secrete substances that allow them to affix onto the epithelium. The interaction between bacteria and epithelial cells elevates inflammation, leading to increased thinning of the mucus and direct host-bacteria interaction. The thali approach, however, combats this cycle in two different ways: by suppressing bacterial growth with anti-microbial phytochemicals, and by reducing the opportunity for inflammation to occur. One molecular pathway involved in such a cycle involves interleukin 6 (IL6). This cytokine is normally expressed during acute inflammatory responses and, among other effects, upregulates the transcription factor STAT3. In the nucleus, STAT3 promotes cell proliferation and differentiation and upregulates anti-apoptosis genes. When IL6 is chronically elevated, it can lead to an apoptosis-resistant, constantly expanding T-cell population in the intestinal mucosa. These cells can further contribute to chronic inflammation. Just as a certain diet may promote chronic inflammation, a change in diet can help to restore health. Various bio-active compounds, including anthocyanins, have demonstrated antioxidant activity, reducing local amounts of reactive oxygen species. Low levels of reactive oxygen species can lower the expression of some inflammatory genes, including IL6, and relieve the stresses on both the intestinal microbiota and epithelial cells caused by chronic inflammation.

In a study of pigs, we found that supplementing a high-calorie diet with purple potatoes that contain anthocyanins led to a six-fold reduction in levels of interleukin-6 compared to a high-fat diet control. Colorectal cancer (CRC) killed nearly 774,000 people worldwide in 2015 and caused an estimated 50,630 deaths in the United States in 2018, making it the third leading cause of cancer-related deaths in the United States in women and the second in men. Virtually all cases of CRC are considered to result from an interplay of exogenous and endogenous factors, with variable contributions from each factor. Some non-modifiable risk factors include old age and a family history of CRC. Other risk factors, however, are associated with lifestyle or behaviors and thus can be changed. These modifiable risk factors include smoking, obesity, low physical activity, deficiency of dietary fiber, deficiency of vitamin D, deficiency of folate, high intake of red and processed meat, and alcohol consumption. Some of these risk factors are closely related: inadequate fiber intake and excessive fat intake are dietary risk factors that tend to accompany a lack of exercise, and together these may ultimately contribute to obesity. In the US, 40 percent of adults are obese, and these risk factors are common mainly because of the modern Western lifestyle. It is therefore no surprise that nearly half of CRC cases arise in developed nations. The Western diet in its current form carries more risk factors than its calorie and fat content alone. Foods that contain heterocyclic amines (HCAs), polycyclic aromatic hydrocarbons (PAHs), and emulsifiers can also contribute to carcinogenesis. HCAs and PAHs are produced in meats when they are fried or grilled over an open flame. These substances have been shown to damage the DNA of colonocytes and potentially promote the risk of colon cancer. Emulsifiers are used in foods like ice cream to ensure an even distribution of fat molecules. Recent evidence suggests, however, that emulsifiers promote intestinal inflammation, creating an environment that favors colon carcinogenesis in mice. Colon cancer, however, has a long development period, which gives ample time for lifestyle changes to take place, including diet-based intervention.

Chronic inflammation, a condition that is promoted by dietary risk factors, also contributes to the development of cancer, including in humans. Patients with inflammatory bowel disease have a significantly increased risk of developing CRC, while long-term aspirin treatment is associated with a significantly decreased risk of CRC. The mechanisms by which chronic inflammation promotes tumor development often involve the immune system. For example, the IL6/STAT3 pathway discussed earlier is also implicated in cancer formation. Overexpression of IL6 leads to excess STAT3 transcription, causing unwanted cell proliferation not only in T cells but also in the intestinal epithelium. Another inflammatory cytokine of note is TNF-α. While the intestinal bacteria can promote inflammation, they may also affect the likelihood of CRC more directly. Once the intestinal mucus layer is thinned and direct bacterial-epithelial cell interactions occur, certain bacterial strains promote tumor development. E. coli strains bearing the pks island are of particular interest. This genetic locus codes for the secondary metabolite colibactin, along with the enzymes necessary for its production. Colibactin has been shown to crosslink DNA, producing double-stranded breaks. Furthermore, pks+ E. coli strains have been shown to be prevalent in CRC patients. In one study, nearly two-thirds of CRC patients had pks+ E. coli strains in their intestinal bacteria; in the same study, pks+ E. coli were also present in about 20 percent of healthy individuals. Colibactin, however, is a reactive and short-lived compound, requiring close contact with epithelial cells to cause DNA damage. A healthy mucosal barrier keeps colibactin at a distance and reduces the chance that it affects the intestinal epithelium. Evidence for a pathogenic relationship between diet, Fusobacterium nucleatum, and CRC has been emerging. F. nucleatum levels have been shown to be higher in CRC than in adjacent normal mucosa. Utilizing the molecular pathological epidemiology paradigm and methods, a recent study has shown an association of fiber-rich diets with decreased risk of F. nucleatum-detectable CRC, but not of F. nucleatum-undetectable CRC.

Experimental evidence supports a carcinogenic role of F. nucleatum, as well as its role in modifying therapeutic outcomes. The amount of F. nucleatum in CRC tissue has been associated with proximal tumor location, the CpG island methylator phenotype, microsatellite instability, low-level CD3+ T cell infiltrate, high-level macrophage infiltration, and unfavorable patient survival. The amount of F. nucleatum, on average, increases in CRC from the rectum to the cecum, supporting the colorectal continuum model. Future studies should examine the role of diets, microbiota, and CRC in detailed tumor locations. Dietary prevention of CRC, then, has two intertwined aims: to reduce inflammation and to promote a healthy intestinal microbiota. As already discussed, preclinical evidence implies that dietary bio-active compounds, particularly anthocyanins, can reduce symptoms of low-grade chronic inflammation as well as oxidative stress. They can also aid in balancing the intestinal microbiota by promoting the growth of beneficial bacteria and by reducing the populations of pro-inflammatory bacteria. Clinical trials have had mixed results, but anthocyanins and some polyphenols have been shown to actively counteract CRC. More research, however, is necessary for conclusive results. How, then, are individuals to consume enough bio-active compounds to have an effect on health? Some answers may be found in the food consumption practices of cultures with historically low CRC incidence. Parts of India, for example, have had some of the lowest CRC incidence rates in the world; however, this status has been changing. In recent decades, increasing urbanization and similar factors have led to progressively Westernized diet patterns and lifestyles. CRC incidence rates are similarly rising, lending weight to the hypothesis that the traditional Indian diet may help prevent CRC. Furthermore, Indian immigrants to Western countries have a much higher incidence of CRC compared to Indians in India. Typical components of traditional Indian meals include a broad variety of flavors, as promoted in Ayurvedic medicine, and a variety of other foods. Both are facilitated by using a thali platter to serve the meal. The traditional American main meal includes an entree, one or more carbohydrates, and one or more vegetables. This basic structure can potentially be adapted with inspiration from thali meals by reducing the size of the main dish and serving more vegetables, legumes, pulses, herbs, and spices to accompany it. A unique component of thali is the combination of many tastes and colors. The inclusion of multiple colors in a meal is desirable because certain bio-active compounds, particularly anthocyanins, are also pigments. Blue, purple, and red-purple colors in plant foods indicate high anthocyanin content. Purple-pigmented potatoes can be prepared in the same way as traditional white potatoes, but the anthocyanin content is significantly higher in the pigmented varieties. Purple sweet potatoes also contain more anthocyanins than the more common orange varieties and can easily be substituted for them. Other vegetables with red or purple cultivars include carrots, cauliflower, and cabbage. Different colors can indicate the presence of other bio-active compounds, such as orange, yellow, and red/pink. Thus, healthy bio-active compound consumption may be increased by selecting colorful vegetables. Another way to increase consumption of bio-active compounds is to increase their presence in available foods.
The agricultural industry could greatly impact health by adopting food plant cultivars that produce bio-active compounds in larger amounts than is currently common.

New cultivars may need to be developed that retain desirable characteristics such as large size, pest resistance, reduced spoilage, etc., but that also have high bio-active content at the time of consumption. Bio-active compounds, with some exceptions, tend to deteriorate during storage. Even when compounds have not deteriorated, storage may reduce the anti-inflammatory/antioxidant activity through which bio-active compounds affect health. A second systemic change that would promote increased bio-active compound consumption involves reworking how fruits and vegetables are currently stored and processed, reducing the average storage time, and adapting processing to optimize the amount of bio-active compounds. Presently, “nutritional adequacy” does not consider many of the bio-active compounds discussed in this paper. Further clinical studies are needed to support and elucidate the role of bio-active compounds in the prevention and treatment of disease.

More than three quarters of all plant viruses are transmitted by insects, and information regarding key biological traits of vector-borne pathogens is needed to inform effective control strategies. For example, knowledge of transmission efficiency can aid in predicting rates of pathogen spread. Another key parameter in estimating the rate of appearance of newly diseased hosts is the pathogen incubation period, the time between initial infection and when symptoms become evident. Despite the importance of transmission efficiency and incubation period for the development of disease management strategies, data are often not available and, when available, are usually derived from research performed under artificial conditions such as greenhouse environments. Grapevine leafroll-associated virus 3 (GLRaV-3), in the genus Ampelovirus, family Closteroviridae, is the primary virus species associated with grapevine leafroll disease in vineyards of wine-growing regions worldwide. GLRaV-3 can cause interveinal reddening and downward rolling of leaves in red-berried grape varieties, inhibits photosynthesis, decreases vine lifespan, and reduces fruit yield and quality. GLRaV-3 is one of the most common and detrimental viruses of grapevines and has led to economic losses of 25% or more. Spread of GLRaV-3 in vineyards and vector-borne transmission in controlled laboratory studies were first documented in South Africa, and since then GLRaV-3 spread in vineyards and transmission by several mealybug species have been documented in wine-growing regions worldwide. Although multiple grape-colonizing mealybug species transmit GLRaV-3, estimates of vector transmission efficiency vary both among and within mealybug species. GLRaV-3 is transmitted in a semi-persistent manner with no latent period required between acquisition and inoculation by vectors; transmission can occur after access periods of as little as one hour and reaches a maximum after access periods of 24 hours. First instar mealybugs are the most efficient vectors, and mealybugs lose the ability to transmit GLRaV-3 four days after being removed from an infected source.

It is unclear how many samples would be needed to accurately determine infestation levels

Each experimenter conducted testing on half of the days. We used high-quality rewards that would be easily visible to the subjects. Blueberries were not part of their daily diet but were sometimes presented as enrichment in puzzle feeders and were highly desirable for all gibbons housed at the GCC. The apparatus was composed of a plastic folding table with a square wooden plank clamped to the top. At one end of the plank a transparent plastic bin was taped so that it could be lifted up or hang down. The bin, at rest, would hang down and remain unmoved on top of a wooden ramp. A hole big enough to fit blueberries was drilled in the back side of the bin so that, when the bin was at rest on the ramp, the experimenter could place five blueberries into it. A thin purple rope was tied to the far end of the plastic bin and was routed back to the opposite end of the wooden plank. This was set up so that pulling on the purple rope would reliably lift the plastic bin, so blueberries could fall down the wooden ramp and be easily accessible for subjects to obtain. The extreme end of the rope was attached to the mesh of the enclosure. To allow reaching and pulling the rope, we attached a small, handheld, opaque white handle. At the right tension, pulling on the handle would reliably lift the plastic bin. The handle could contain a single blueberry inside, depending on the condition presented. We used two handles of the same dimensions and appearance to avoid contamination by blueberry leftovers after a trial. The table with the wooden plank was set up at a distance so that it could not be grabbed by subjects, and the ramp was placed underneath so that blueberries would roll down and land in front of the enclosure gate.

E2 would then distract the two subjects to an opposite or adjacent side of the subjects’ enclosure with a handful of cereal pieces while E1 tied the end of the purple rope with the handle onto the mesh gate of the enclosure, roughly at experimenter height and approximately 2 m to the right or left. The distance and location of the rope were kept constant for all trials of each dyad; however, because the enclosures differed in layout, the rope would go to the most convenient side. This way, we ensured that the rope had the proper tension to be pulled by gibbons and lift the plastic bin, and that it was distant enough from the ramp that a subject could not easily pull on the rope and obtain food from the ramp at the same time. Individual solo pre-testing of the mechanism of the apparatus was not possible because separation of the dyads was prohibited. However, gibbons had had experience with ropes before as part of their enrichment, and several individuals had participated in pilot sessions where they had to pull on different ropes and handles. Three conditions were tested: a direct food test condition, an indirect food test condition, and a no food control condition. In the direct food test condition, the following procedure was performed. E1 would place five blueberries in the plastic bin on the apparatus. To gain the attention of the subjects, E1 would call the subjects’ names and show the food if they were not already focused on the food/experimenter. Once both subjects had observed the five blueberries placed in the plastic bin, E1 would squeeze a single blueberry onto the top of the handle, so that the blueberry would be clearly visible. The rope and handle would be set up so that the handle was just far enough from the enclosure that subjects needed to pull on the rope to obtain access to the handle and blueberry. Consequently, pulling the rope would also lift the plastic bin and drop five blueberries down the ramp, accessible to the subjects. The experimenter would also call the names of the subjects when placing the single blueberry in the handle.

A choice was recorded when one of the subjects pulled the rope. If no subject pulled the rope within 90 s, the trial ended and was recorded as no pull. If an experimenter error was made, up to 3 repetitions of the trial would be completed. Environmental conditions such as rain could also end a test session, which would then be continued the next day. In the indirect food test condition, there was no single blueberry placed in the handle. To keep conditions comparable, we followed the same procedure as in the direct food test condition: instead of inserting a blueberry inside the handle, we approached it with a closed fist and then touched it with the fingers. In the no food control condition, no blueberries were used in the trial. In order to control for time and actions, we used the same procedure of calling the subjects and touching both the box and the handle.

Two cameras on tripods recorded footage concurrently. One was placed to the side of the experimenter in order to capture a wide view of the trials, specifically to show the positions of the subjects, their choices, and whether they obtained blueberries. The other was placed close to the ramp to accurately count the quantity of blueberries obtained by each subject. For all trials we coded the act of pulling or not pulling and the identities of the puller and non-puller. We also coded the number of blueberries each subject ate and whether the actor ate the blueberry from the handle. Next, we coded whether a passive subject was present in front of the ramp, or within one meter of it, at the moment the plastic bin was lifted and at the moment the actor arrived at the release location. Additionally, we coded instances of cofeeding and displacements. Cofeeding was coded when individuals fed within a distance of 1 m of one another. Displacements occurred when an individual left her spot due to the partner’s arrival. Additionally, we calculated the latency to pull, from the start of the trial until the individual released the blueberries by pulling the rope. All analyses were conducted in R. We used Generalized Linear Mixed Models (GLMMs) to investigate gibbons’ choices. Covariates were z-transformed. Every full model was compared to a null model excluding the test variables. We controlled for session and trial number in all our models. We controlled for the length of the dyad in models 1 to 3, given the larger dataset compared to models 4 to 6. In addition, in model 3 we included individuals’ age and sex as control predictors.
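
As a simplified illustration of the full-versus-null comparison described above, the sketch below z-transforms a covariate and compares two ordinary logistic models with a likelihood-ratio test. This is a fixed-effects stand-in for the study’s R-based GLMMs (lme4), and the data and column names are made up for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical trial-level data: whether a subject pulled the rope (0/1),
# a condition dummy (test predictor), and trial number (control predictor).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pulled": rng.integers(0, 2, 120),
    "direct_food": rng.integers(0, 2, 120),
    "trial": np.tile(np.arange(1, 13), 10),
})
df["trial_z"] = (df["trial"] - df["trial"].mean()) / df["trial"].std()

# Full model (test predictor + control) vs null model (control only).
full = sm.Logit(df["pulled"], sm.add_constant(df[["direct_food", "trial_z"]])).fit(disp=0)
null = sm.Logit(df["pulled"], sm.add_constant(df[["trial_z"]])).fit(disp=0)

# Likelihood-ratio test: 2 * (logLik_full - logLik_null) ~ chi-square(df difference).
lr_stat = 2 * (full.llf - null.llf)
p_value = stats.chi2.sf(lr_stat, df=1)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.3f}")
```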

When the comparison between the full and the null model was significant, we further investigated the significance of the test variables and/or their interactions. We used the “drop1” function of the lme4 package to test the significance of each variable, including interactions between test predictors. Non-significant interactions were removed and a new reduced model was produced when necessary. A likelihood ratio test with significance set at p < 0.05 was used to compare models and to test the significance of the individual fixed effects. We ruled out collinearity by checking Variance Inflation Factors (VIFs). All VIF values were close to 1 except for age and length of dyad in model 3; these two variables were slightly collinear. For every model, we assessed stability by comparing the estimates derived from a model based on all data with those obtained from models with the levels of the random effects excluded one at a time. All models were stable. We also fitted a mixed-effects Cox proportional hazards model to analyze gibbons’ latencies to act. For this purpose, we used the “coxme” function from the coxme package. The results of Model 2 are reported as hazard ratios (HRs). An HR greater than 1 indicates an increased likelihood of acting, and an HR smaller than 1 indicates a decreased likelihood of acting (see the sketch below). In addition, to obtain the p-values for the individual fixed effects we conducted likelihood-ratio tests.

Drosophila suzukii Matsumura is an economic pest of small and stone fruit in major production areas including North America, Asia, and Europe. Female D. suzukii oviposit into suitable ripening fruits using a serrated ovipositor. This is unique compared to other drosophilids, including the common fruit fly, D. melanogaster, which oviposit into overripe or previously damaged fruit. Developing fruit fly larvae render infested fruit unmarketable for fresh consumption and may reduce processed fruit quality, causing downgrading or rejection at processing facilities. In Western US production areas, D. suzukii damage may cause up to $500 million in annual losses assuming 30% damage levels, and $207 million in Eastern US production regions [9]. Worldwide, the potential economic impacts of this pest are staggering. Pesticide applications have been the primary control tactic against D. suzukii both in North America and in Europe. The most effective materials are those that target gravid females, including pyrethroids, carbamates, and spinosyns. These applications are timed to prevent oviposition in susceptible ripening host crops. In the Pacific Northwest, many growers have adopted scheduled spray intervals of 4–7 days. This prophylactic use of insecticide is unsustainable, as growers have a limited selection of products and modes of action. This practice could ultimately lead to D. suzukii becoming resistant and may cause secondary pest problems because of negative effects on beneficial organisms. Furthermore, production costs have increased substantially in crops where D. suzukii must be managed. Effective sampling methodology for D. suzukii is lacking despite extensive efforts to improve trap technology and to determine effective fruit infestation sampling protocols. Theoretically, traps to capture adult flies should aid growers in the timing of spray applications so that insecticides could be used more judiciously. Traps baited with apple cider vinegar or a combination of sugar-water and yeast are currently used to monitor adult D. suzukii flight patterns.
However, without standard methods for trapping or management thresholds based on trap count data, it is questionable how much is gained by establishing and monitoring traps in crops.
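
The hazard-ratio interpretation given earlier can be illustrated with a minimal Cox proportional-hazards fit. The sketch below uses the lifelines package and omits the random effects of the study’s mixed-effects (coxme) model; the data and column names are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical latency data: time to pull (seconds), whether a pull occurred
# within the 90 s trial (event = 1) or the trial timed out (event = 0), and a
# condition dummy. Random effects for subject/dyad are omitted in this sketch.
rng = np.random.default_rng(1)
n = 120
condition = rng.integers(0, 2, n)
latency = np.minimum(rng.exponential(scale=np.where(condition == 1, 25, 45)), 90)
event = (latency < 90).astype(int)
df = pd.DataFrame({"latency": latency, "event": event, "direct_food": condition})

cph = CoxPHFitter()
cph.fit(df, duration_col="latency", event_col="event")
hr = np.exp(cph.params_["direct_food"])
print(f"Hazard ratio for direct_food: {hr:.2f}")
# HR > 1 means subjects acted sooner (higher 'hazard' of pulling) in that
# condition; HR < 1 means they were slower to act.
```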

Establishing, monitoring, and maintaining traps is very labor intensive, and for many growers the costs outweigh the benefits. Historically, trap data have not provided a reliable warning of D. suzukii attack, especially for susceptible crops in areas with high-density populations, where considerable oviposition can occur in short time periods. Currently, no trap design has proven consistently superior for monitoring D. suzukii across the crops and environments where traps have been tested. Monitoring fruit infestation levels to guide management may also be impractical; by the time larvae are detected in the fruit, it is too late for management action and damage has already occurred. No detailed studies of fruit infestation monitoring for this pest could be found, and estimates of sampling precision are currently unavailable. Degree-day, or phenology, models are standard tools for integrated pest management in temperate regions and are used to predict the life stages of pests in order to time management activities and increase the effectiveness of control measures. Degree-day models work best for pests with a high level of synchronicity and few generations. Our data suggest that D. suzukii has short generation times, high reproductive levels, and high generational overlap compared to other dipteran fruit pests. Given this life history, stage-specific population models represent an alternative and potentially more applicable tool for modeling pest pressure. Pest population estimates may be greatly improved by employing additional tools such as mark-recapture and analytical or individual-based models. The ability to describe and forecast damaging pest populations is highly advantageous for fruit producers, policy makers, and stakeholder groups. Many such studies have been directed at forecasting populations of medically important insect species. The major factors affecting survival, fecundity, and population dynamics of drosophilids include temperature, humidity, and the availability of essential food resources. Therefore, an improved understanding of the role of temperature on D. suzukii may provide a better understanding of its seasonal population dynamics. In this paper, we present a population model for D. suzukii that represents a novel modification of the classic Leslie projection matrix, which has proven to be one of the most useful age-structured population models in ecology, with applications for diverse organisms including plants, animals, and diseases.
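
To make the projection-matrix idea concrete, the sketch below iterates a simple stage-structured population with a classic Leslie-style matrix. The stage classes, fecundities, and survival values are invented placeholders for illustration, not the fitted parameters of the model presented in this paper.

```python
import numpy as np

# A toy stage-structured projection with three classes: egg/larva, pupa, adult.
# Top row = per-stage fecundity (offspring per individual per time step);
# sub-diagonal = probability of surviving and advancing to the next stage.
# All values are illustrative placeholders, not D. suzukii estimates.
L = np.array([
    [0.0, 0.0, 8.0],   # only adults reproduce in this toy example
    [0.4, 0.0, 0.0],   # egg/larva -> pupa survival
    [0.0, 0.6, 0.2],   # pupa -> adult survival; some adults persist
])

n = np.array([100.0, 20.0, 5.0])  # initial abundance in each stage
for step in range(5):
    n = L @ n                      # project the population one time step
    print(f"step {step + 1}: {np.round(n, 1)}")

# The dominant eigenvalue of L gives the asymptotic per-step growth rate.
lam = max(np.linalg.eigvals(L).real)
print(f"asymptotic growth rate: {lam:.2f}")
```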

The workers on every level of the ladder worry about factors over which they lack control

The ethnic-labor hierarchy seen here—white and Asian American US citizen, Latino US citizen or resident, undocumented mestizo Mexican, undocumented indigenous Mexican—is common in North American farming. The relative status of Triqui people below Mixtecos can be understood via a pecking order of perceived indigeneity. For example, many farm workers and managers told me the Triqui are more “purely indigenous” than other groups, Triqui is still their primary language, and “they are more simple.” Ethnicity functions as a camouflage for perceived indigeneity versus civilization. The Anglo and Japanese Americans inhabit the pole of civilization, modernity. The Triqui are positioned as the opposite: indigenous peasants, savages, simpletons. The more modern one is perceived to be, the better one’s job. As illustrated in Figure 3, this hierarchy of modernity also correlates roughly with citizenship, from US citizen to US resident/Mexican citizen to undocumented immigrant/Mexican citizen. Yet, this diagram shows only a small piece of the global hierarchy. The continuum of structural vulnerability can be understood as a zoom lens, moving through many such diagrams. When the continuum is seen from furthest away, it becomes clear that the farm owners are near the bottom of the global corporate agribusiness hierarchy. When looked at more closely, we see the hierarchy on this particular farm. Responsibilities, stressors, and privileges differ from the top to the bottom of this hierarchy.

Everyone on the farm is structurally vulnerable, although the characteristics and depth of vulnerability change depending on one’s position within the labor structure. Control decreases and anxieties accumulate as one moves down the pecking order. Those at the top worry about market competition and the weather. The middle managers worry about these factors as well as about how they are treated by their bosses. The pickers also worry about picking the minimum weight in order to avoid losing their job and their housing. The higher one is positioned in the structure, the more control over time one has. The executives and managers can take breaks as their workload and discretion dictate. The administrative assistants and checkers can take short breaks, given their supervisor’s consent or absence. The field workers can take infrequent breaks if they are willing to sacrifice pay, and even then they may be reprimanded. The higher one is located in the hierarchy, the more one is paid. The executives and managers are financially secure with comfortable homes. The administrative staff and checkers are paid minimum wage and live as members of the rural working class in relatively comfortable housing. The pickers are paid piecemeal and live in labor camp shacks. They are constantly aware of the risk of losing even this poor housing. Among pickers, those in strawberries make less money and are more likely to miss the minimum and be fired than those in apples. This segregation is not conscious or willed on the part of the executives or managers. Rather, inequalities and the anxieties they produce are driven by larger structural forces. While farm executives are vulnerable to macro-social structures, vulnerability is further conjugated through ethnicity and citizenship, changing character from the top to the bottom of the labor hierarchy. Bodies are organized according to the social categories of ethnicity and citizenship into superimposed hierarchies of labor possibilities and housing conditions.

The overdetermination of the adverse lot of indigenous Mexican migrant berry pickers tracks along the health disparities seen throughout the public health literature on migrant workers. The focus on risk and risk behaviors in public health and medicine carries with it a subtle assumption that the genesis of vulnerability and suffering is the individual and his or her choices. This focus often leads to inadvertently blaming the individual victim or their “culture” for their structurally produced suffering. Public health and medical interventions are planned with the goal of changing individual choices, behaviors, and values. The concept of structural vulnerability, on the other hand, refocuses our analysis onto the social structure as the locus of danger, damage, and suffering. Without such a concept, diagnoses and interventions rarely correspond with the context of suffering and may instead comply with the very structures of inequality producing the suffering in the first place. The concept of structural vulnerability is crucial not only to refine anthropological analyses of the social production of suffering but also to reorient medical and public health attention away from individual behaviors and toward social structures.

Farmland covers more than 35% of Earth’s ice-free terrestrial area, and agriculture is expanding and intensifying in many regions to meet the growing demands of human populations. This trend threatens biodiversity and the ecosystem services on which agriculture depends, including crop pollination. Indeed, recent reviews have highlighted how multiple anthropogenic pressures lead to a decline in wild pollinators such as bees, flies, beetles, and butterflies. However, practices to enhance wild pollinators in agroecosystems are still in development, and considerable uncertainty remains regarding their effects on crop yield and farmers’ profits.

Here we review recent research on the topic, including the impacts of certain practices on wild pollinators, crop pollination, yield, and profits. We focus on practices that enhance the carrying capacity of habitats for wild insect assemblages that may then provide crop pollination services; practices to conserve or manage a particular pollinator species are outside our scope, although they have received attention elsewhere. We offer general science-based advice to land managers and policy makers and highlight knowledge gaps. Throughout, we emphasize the need to consider population-level processes, rather than just short-term behavioral responses of pollinators to floral resources.

Plant–pollinator interactions are typically very general, with many pollinators being rewarded with pollen, nectar, or other resources from several plant species, and with most angiosperms being pollinated by multiple insect species. Humans benefit from this generalized nature of pollination systems, as exotic crops brought far from their ancestral ranges can find effective pollinators within native insect assemblages. Accordingly, a synthesis of 600 fields from 41 crop systems showed that only two of the 68 most frequent pollinators globally were specialist species: the weevil Elaeidobius kamerunicus pollinating oil palm and the squash bee Peponapis pruinosa pollinating pumpkin. Because of differences in species functional traits, greater pollinator richness can lead to foraging complementarity or synergy, improving the quantity and quality of pollination and therefore increasing both the proportion of flowers setting fruits and product quality. Across crop species, insects with contrasting mouthpart lengths may be needed for the pollination of flowers not only with easily accessible rewards but also with rewards hidden at the bottom of a tubular corolla. Within a crop species, social and solitary bees visited flowering radish plants at different times of day, suggesting temporal complementarity among these pollinator groups. Flower-visiting behavior also differs among pollinators of different body sizes, and visits by a range of differently sized pollinator species increase pumpkin pollination. In addition to functional traits, interspecific differences in response traits to climate and land-use change can increase the resilience of pollination services. The role of diverse assemblages of wild insects in crop pollination is also evident from recent global analyses. Worldwide, incomplete and variable animal pollen delivery decreases the growth and stability of yields for pollinator-dependent crops. This lower yield growth has been compensated for by greater land cultivation to sustain production growth. The consequent reduction in natural areas within agricultural landscapes decreases the richness and abundance of wild pollinators, including bees, syrphid flies, and butterflies, further diminishing crop pollination. A possible solution to this “vicious cycle” is to increase pollinator abundance through single-species management, most commonly of European honey bees, which are not greatly affected by isolation from natural areas. However, increasing the abundance of one species may complement but not replace the pollination services provided by diverse assemblages of wild insects, and wild insects pollinate some crops more efficiently than honey bees.
Moreover, during the past 50 years, the fraction of agriculture that depends on animal pollinators and the number of managed honey bee hives have increased by 300% and 45%, respectively, and honey bees have suffered major health problems such as colony collapse disorder. All of these factors point to the potential benefit of practices that boost the species richness and abundance of wild pollinators. Indeed, richness and visitation rate of wild pollinators are strongly correlated across agricultural fields globally. Therefore, practices that enhance habitats to promote species richness are also expected to improve the aggregate abundance of pollinators, and vice versa.

Below we describe practices that diversify and improve the abundance of resources for wild insects outside the crop field, without affecting crop management. Practices are ordered from those requiring the least area to those requiring the most, with practices covering less area likely to be less costly. Nesting resources – such as reed internodes and muddy spots for cavity nesters, and bare ground for soil nesters – can be enhanced at crop field edges without affecting much of the crop area. Although providing such resources can promote the recruitment of certain bee species, evidence of their effects on crop yield is lacking.

Hedgerows and flower strips are woody or herbaceous vegetation, respectively, planted at the edge of a crop field and generally covering only a small area. If appropriate plant species are chosen and adequately managed through time, hedgerows and flower strips can provide suitable food and nesting resources for, and enhance the species richness and abundance of, bees and syrphid flies. These practices also enhance pollinators in adjacent fields – rather than simply concentrating pollinators in dense, flower-rich patches – and therefore increase crop yield. Regional programs that augment the quality and availability of seeds from native flowering plants are important for the success of these practices. Conserving or restoring natural areas within landscapes dominated by crops often provides habitat for wild pollinator populations. In addition, pollinators depend on various types of resources, which are difficult to provide in ways other than by enhancing natural areas. Consequently, these areas also enhance pollination services for nearby crops. Enhancing farmland heterogeneity increases pollinator richness because plant species provide complementary resources over time and space, and insect species use different resource combinations. Also, insects usually require resources for periods longer than crop flowering. In fact, a synthesis of 605 fields from 39 crop systems in different biomes found that diversity of habitats within 4 ha enhanced bee abundance by 76% compared with bee abundance in monoculture fields. Smaller crop fields increase land-use heterogeneity and also benefit pollinators because most species forage at distances of less than 1 km from their nests. Thus, crops in small fields are more likely to benefit from pollinator enhancements such as nearby field margins and hedgerows. Indeed, pollinator richness, visitation rate, and the proportion of flowers setting fruits decreased by 34%, 27%, and 16%, respectively, at 1 km from natural areas across 29 studies worldwide (an illustrative distance-decay calculation based on these figures is sketched below).

In contrast to off-field methods, which can be ordered from smaller to larger scale, on-field practices are all applied at a similar spatial scale, i.e., that of the crop field. Here we discuss practices that reduce the use of insecticides and machinery, enhance the richness of flowering plants, and require greater effort because of changes in the crop species or system. Reducing the use of synthetic insecticides that are toxic to pollinating insects should provide an important benefit. For example, in South Africa, insecticides adversely affected pollinators, impairing rather than enhancing mango yield. Insecticides with low toxicity to pollinators, with non-dust formulations, applied locally through integrated pest management practices, and applied during the non-flowering season are less likely to be detrimental to pollinators than highly toxic, systemic insecticides that are broadly sprayed from airplanes. No-tillage farming may enhance populations of ground-nesting bees, given that many species place their brood cells <30 cm below the surface. Tillage timing, depth, and method probably have differential impacts on pollinators and pollination, but further studies are required to verify this expectation.
Similarly, flood irrigation may be more detrimental than drip irrigation because of the increased likelihood of flooding pollinator nests; however, particularly in arid systems, irrigation in general can promote wild-insect abundance through higher productivity of flowering plants or by making the soil easier to excavate.
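The distance-decay figures quoted above can be made concrete with a small, purely illustrative calculation. The sketch below assumes a simple exponential decline in visitation rate with distance from natural habitat, which is one common way such relationships are modeled; only the 27% decline at 1 km is taken from the synthesis cited above, and the function and variable names are ours, not those of any cited study.

```python
import math

# Illustrative only: assume visitation rate declines exponentially with
# distance from natural habitat, V(d) = V0 * exp(-k * d). The decay
# constant k is calibrated so that visitation is 27% lower at 1 km,
# the figure quoted from the 29-study synthesis above. The exponential
# form itself is an assumption for illustration, not the model used in
# that synthesis.

decline_at_1km = 0.27                        # 27% drop at 1 km
k = -math.log(1.0 - decline_at_1km)          # decay constant, ~0.31 per km

def relative_visitation(distance_km: float) -> float:
    """Visitation rate relative to a field adjacent to natural habitat."""
    return math.exp(-k * distance_km)

for d in (0.0, 0.5, 1.0, 2.0):
    print(f"{d:4.1f} km: {relative_visitation(d):.0%} of adjacent-field visitation")
```

Under this assumed form, visitation would fall to roughly half of its near-habitat value by about 2 km, which is consistent with the qualitative point that small fields close to natural areas, field margins, and hedgerows benefit most.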


According to Kucel et al., severe infestation of up to 80% of berries occurs in Uganda and Ivory Coast, and up to 96% in Congo and Tanzania. In Kenya, Jaramillo et al. reported infestations ranging between 60% and 91% of berries on the plant and 44–84% of shed berries on the ground. Given the lack of control of CBB by parasitoids, cultural controls need to be developed to reduce CBB infestation levels, as explored in a forthcoming paper.

Over the last few decades, the field of nutrition has grown and evolved. Although we continue to define the critical roles that nutrients play as fuel sources, enzyme cofactors, signaling molecules, and vital infrastructure for our bodies, the cutting edge of nutrition research is pushing beyond simply meeting our bodies' basic needs. Indeed, as the population is living longer, an emerging focus for nutrition has been on obtaining and maintaining optimal health over the life course. On 10 October 2022, the Council for Responsible Nutrition (CRN) held its annual Science in Session conference, entitled Optimizing Health through Nutrition – Opportunities and Challenges. The audience consisted of scientists and executives from dietary supplement and functional food companies as well as nutrition graduate student awardees of a CRN and ASN Foundation educational scholarship to attend the symposium. CRN is a trade association representing dietary supplement and functional food companies. The goals for this meeting were to propose a definition for optimal nutrition and to identify strategies and tools for evaluating optimal health and nutrition outcomes while highlighting the gaps in this emerging space.

Now more than ever in history, our population's health has emerged as a global priority. Currently, 6 in 10 adults in the United States have a chronic disease, and 4 in 10 have 2 or more. In <10 y, the number of older adults is projected to increase by ~18 million. This means that by 2030, 1 in 5 Americans is projected to be 65 y of age or older. As the major risk factor for many chronic illnesses is age, it is anticipated that the rates of all age-related diseases, especially chronic diseases, will skyrocket, potentially overwhelming the health care system. We need to enable the health care system, and the population, to be more proactive rather than reactive toward health outcomes. There is a critical need to find solutions that optimize health across the lifespan to support living better longer, i.e., health span. Ensuring optimal nutrition is a significant and easily modifiable variable in the solution for maintaining and improving health span. We need to advance concepts beyond essential nutrition and consider meeting the nutritional needs for optimal health. Although the nutrition science community is moving toward the vision of nutrition to support optimal health, many challenges and gaps still exist, but there are also recent advances and exciting opportunities. The goal of the CRN "Science in Session" workshop was to discuss these challenges, gaps, and opportunities in order to advance the concept of nutrition for optimal health. This review summarizes those findings and discussions.

The DRIs for individual nutrients, including the Estimated Average Requirement (EAR) and the RDA, are life stage- and sex-specific recommendations for Americans and Canadians. These reference intakes were established in the 1990s by the Food and Nutrition Board of the National Academies of Sciences, Engineering, and Medicine to prevent deficiency disease and to reduce the risk of chronic diseases.
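To make the relationship between these reference values concrete, the short sketch below walks through the standard arithmetic linking the EAR to the RDA, assuming requirements are roughly normally distributed; the numeric EAR used is a placeholder for illustration, not an actual DRI value.

```python
from statistics import NormalDist

# Illustrative arithmetic for the EAR-to-RDA relationship. Under the
# standard assumption that individual requirements are roughly normally
# distributed, the RDA is set about 2 standard deviations above the EAR.
# When the SD is unknown, a coefficient of variation (CV) of 10% is
# commonly assumed, giving RDA ~= 1.2 x EAR. The EAR value below is a
# placeholder, not an actual DRI.

def rda_from_ear(ear: float, cv: float = 0.10) -> float:
    """RDA = EAR + 2 * SD, with SD approximated as CV * EAR."""
    return ear + 2 * cv * ear

ear_example = 100.0                      # hypothetical EAR, mg/day
rda_example = rda_from_ear(ear_example)  # 120.0 mg/day

# Fraction of individuals whose requirement falls at or below the RDA,
# assuming a normal distribution of requirements centered on the EAR:
coverage = NormalDist(mu=ear_example, sigma=0.10 * ear_example).cdf(rda_example)
print(f"RDA = {rda_example:.0f} mg/day covers ~{coverage:.1%} of requirements")
```

This 2-SD convention is why the RDA is described as covering the nutrient needs of roughly 97–98% of healthy individuals within a life stage and sex group.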

However, incorporating chronic disease endpoints has been extremely challenging, primarily because data are largely lacking. Such endpoints were used to set the DRIs for only a handful of nutrients. Thus, the current DRIs, including the RDAs that aim to cover the nutrient needs of ~98% of the population, do not account for the amount of a nutrient that one needs in order to achieve and maintain "optimal" health.

The science of resilience is not a new concept: it was documented in the literature as early as the 1800s, the terminology entered the biomedical sciences in the mid-1900s, and it emerged in the early 2000s as a concept interconnected across multiple health domains. The questions dominating its broad use and applicability tend to focus on how to define resilience. In 2019, the Trans-NIH Resilience Working Group was formed with the goal of developing an NIH-wide definition of resilience and achieving consistency and harmony in the design and reporting of resilience research studies. In 1993, an introductory manuscript to a special issue published on the science of resilience included a quote stating that "resilience is at risk for being viewed as a popularized trend that has not been verified through research and is in danger of losing credibility within the scientific community". The authors of that manuscript also warned against definitional diversity with respect to measures of resilience and urged researchers to clearly operationalize the definition of resilience in all research reports. Remarkably, this call to action became a primary aim of the Trans-NIH Resilience Working Group when it was organized >25 y after the 1993 special issue on resilience. One of the first activities of the Trans-NIH Resilience Working Group was to host a workshop in March 2020, which led to the development of a definition of resilience and a conceptual infographic. The definition was intended to be applicable and useful across multiple domains, and it states that resilience encompasses "A system's capacity to resist, recover, grow, or adapt in response to a challenge or stressor".
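As a purely hypothetical illustration of what operationalizing this definition could look like in a study report, the sketch below summarizes an outcome's drop below baseline and its recovery after a challenge; the class, function, and example values are invented for illustration and are not part of the Trans-NIH framework or its tools.

```python
from dataclasses import dataclass

# Purely hypothetical sketch of ONE way a resilience outcome could be
# summarized under the definition above: track an outcome relative to its
# pre-challenge baseline and report how far it dropped and how fully it
# recovered. Names and values are invented for illustration.

@dataclass
class ResilienceSummary:
    max_deviation: float  # worst proportional drop below baseline
    recovery: float       # final level as a fraction of baseline

def summarize_response(baseline: float, trajectory: list[float]) -> ResilienceSummary:
    """Summarize an outcome (e.g., a cognitive or mobility score) measured
    at successive time points after a challenge or stressor."""
    relative = [value / baseline for value in trajectory]
    return ResilienceSummary(max_deviation=1.0 - min(relative),
                             recovery=relative[-1])

# Example: a score of 50 at baseline dips to 35 after a stressor and
# recovers to 48 by the final follow-up.
print(summarize_response(50.0, [42.0, 35.0, 44.0, 48.0]))
```

Any real study would, of course, need to specify the system, the challenge, the outcome, and the observation window in advance, which is exactly the consistency that the working group's reporting tools aim to encourage.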

A system can represent different domains, levels, and/or processes. Over time, a system's response to a challenge might show varied degrees of reaction that likely fluctuate with the severity of the challenge, the length of exposure to the challenge, and/or innate/intrinsic factors. To show the applicability of the definition in resilience research studies, the Resilience Research Design Tool was later developed to help improve consistency in resilience research reports and to facilitate harmony with respect to measures of resilience outcomes. One of the goals of the resilience framework is to reframe the way we ask research questions, particularly about nutritional interventions such as dietary supplements, so that we can better understand health outcomes that are not based solely on disease endpoints. Going forward, as researchers across various scientific domains and sectors come closer to a unified definition of resilience and perhaps agree to the use of a standard checklist for designing and reporting on resilience studies, there is greater opportunity to harmonize the science and develop more empirical evidence of resilience outcomes.

Optimizing performance also includes building resilience in order to enhance the ability to perform tasks and ensuring resilience in order to prevent illness, injury, and disease. Within the US Department of Defense, researchers are able to study different models of physical and psychological stress and the application of different nutritional interventions with Service Members throughout their careers. Various models of stress are introduced, including initial military training, advanced military training courses, service academies, and extreme environments, along with examples of various interventions and outcome measures collected to date. The importance of nutrition for readiness and resilience was identified in military populations more than a decade ago and continues to be of interest. Two specific examples are provided to further explore nutrition interventions aimed at optimizing performance in the Department of Defense. The first, a completed double-blind, randomized, placebo-controlled trial, used a calcium- and vitamin D-fortified food product to optimize bone health during initial military training of Marine Corps recruits. Using a supplement or food intervention for calcium and vitamin D, participants received 2000 mg of calcium and 1000 IU of vitamin D per day. The primary outcomes of the study showed that bone markers and vitamin D status improved, but the supplementation did not affect skeletal parameters. Vitamin D also augmented markers of innate mucosal immunity. A second, forthcoming study aims to evaluate the effectiveness of adding spices and herbs to increase vegetable intake among junior enlisted Service Members. Using a cycle of basic science/discovery that advances to clinical trials, with various review steps, helps move the field of nutrition science forward in a "total force fitness" approach. Total force fitness was introduced as a framework to help Service Members, their families, and military units reach and sustain optimal, holistic health and performance in a way that aligns with their mission, culture, and identity.
Other examples of frameworks focused on a holistic approach to research include Whole Person Health, proposed by the National Center for Complementary and Integrative Health; Whole Health, developed by the Department of Veterans Affairs; and a recent consensus study report by the National Academies entitled Achieving Whole Health. A focus on improving resilience as a model outcome highlights the opportunities and complexities of conducting optimal health and nutrition research in this space.

As the number and proportion of older adults in the population increase, the prevalence of age-related deficits in mobility and cognition also increases. Such deficits may be due to normal aging or to pathologic processes. For instance, cognitive impairments such as declines in memory and speed of processing may result from normal brain aging or from neurodegenerative diseases such as dementia. When considering hallmarks of optimal nutrition and health, improving resilience against cognitive decline holds strong promise and potential impact. Although the etiology of age-related mobility and cognitive changes is multifactorial, it is well established that vulnerability to oxidative stress and inflammation increases as we age. Strategies that target oxidative stress and inflammation may therefore improve resilience to the processes that lead to cognitive decline.

For example, a healthy diet may help combat both oxidative stress and inflammation in the body, and a diet rich in bioactive polyphenolics from fruit, vegetables, walnuts, and coffee may be especially important in improving resilience and health outcomes. Polyphenols have antioxidant and anti-inflammatory activities, so consuming them could slow or prevent age-related changes. As previously shown, foods high in polyphenols, e.g., dark-colored berry fruits, prevent age-related neuronal and behavioral deficits in animal models of aging. In particular, studies in animal models of aging have found that polyphenolic compounds from walnuts and berries hold promise in slowing, and perhaps even reversing, age-related motor and cognitive declines. These polyphenolics possess antioxidant and anti-inflammatory properties and may also influence the brain directly through various mechanisms, including altered cell signaling and increased neurogenesis, dendritic arborization, and autophagy in the brain. In recent randomized, double-blind, placebo-controlled pilot studies in healthy older adults, blueberry or strawberry supplementation improved some aspects of cognitive performance but not gait or balance. In a randomized, double-blind, placebo-controlled trial in 44 healthy older adults, supplementation with freeze-dried blueberry powder for 3 months improved 1 measure of executive function and 1 measure of learning and memory. In a similarly designed trial, supplementation with freeze-dried strawberry powder in 39 healthy older adults for 3 months improved 2 measures of learning and memory compared with placebo but had no effect on executive function. Both trials found that berry powder supplementation did not affect mobility, including measures of balance and gait, likely because the study subjects had no mobility deficits at baseline. Berry supplementation did not decrease serum levels of inflammatory biomarkers compared with placebo, but when serum from berry-supplemented subjects was applied directly to cultured microglial cells, there was a reduction in LPS-induced inflammatory markers relative to serum from placebo-treated subjects. Interestingly, the serum was protective whether collected in the fasting or the postprandial state. Although these studies are preliminary, they add to the evidence that berry supplementation may help protect against age-related cognitive declines.

In addition to single nutrients, healthy dietary patterns have been shown to slow the rate of cognitive decline. In particular, the Mediterranean-DASH Intervention for Neurodegenerative Delay (MIND) diet, which emphasizes increased intake of plant-based foods such as berries and green leafy vegetables, is associated with a lower risk of cognitive impairment in older adults. Further investigations have examined mechanisms and other factors involved in the beneficial effects of berry fruits. For example, changes in circulating levels of specific phenolic compounds were correlated with changes in cognition. Furthermore, cognitive performance and inflammation were related: serum collected from berry-supplemented animals reduced LPS-induced inflammatory stress-mediated signals in stressed, highly aggressively proliferating immortalized microglia in vitro relative to serum from placebo-fed controls, and nitrite levels following supplementation were positively correlated with cognitive performance.
Therefore, including additional servings of polyphenol-rich foods, such as nuts and berries, in the diet may be one strategy to forestall age-related neuronal deficits, perhaps via decreased inflammation and suppression of microglial activation, thereby helping to increase cognitive resilience and preserve cognitive function.