We also used other techniques to complement the structural information thus gained

Our study reveals the induction of channels allowing for the nuclear egress of progeny viruses across the host chromatin. Moreover, this work used a new combination of methods in the study of virus-cell interactions.

Language production appears to be a largely incremental process: speakers plan an utterance as they are producing it, simultaneously integrating multiple sources of information. One apparent effect of this incrementality is availability effects in language production: the fact that speakers will often choose to produce words which are easily accessible or available to them earlier in an utterance, to include such words when they are optional, or even to use a highly available word in place of a more communicatively accurate but less available word. The fact that available words tend to go earlier has been attributed to a greedy, ‘easy-first’ language production strategy. Here I present an account of availability effects within a computational-level model of language production based on an emerging theory of the complexity of action selection from the fields of computational neuroscience and information theory. This theory, the rate–distortion theory of control (RDC), holds that actions are selected to maximize value subject to constraints on the use of information. The theory originates in the economics literature, where it operationalizes bounded rationality, and it has been developed and applied in the literature on physics, robotics, optimal control, computational neuroscience, reinforcement learning, cognitive psychology, and linguistics.
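To make the informational constraint concrete, the following is a minimal, illustrative sketch of a capacity-limited policy in the spirit of the rate–distortion theory of control. The Blahut–Arimoto-style iteration, the variable names, and the toy value matrix are assumptions chosen for illustration; this is not the specific production model developed here.

```python
import numpy as np

def capacity_limited_policy(Q, p_s, beta, n_iter=500):
    """Blahut-Arimoto-style iteration for a policy pi(a|s) trading off
    expected value E[Q] against the information rate I(S;A)."""
    n_s, n_a = Q.shape
    p_a = np.full(n_a, 1.0 / n_a)             # marginal action distribution (the "default")
    for _ in range(n_iter):
        pi = p_a[None, :] * np.exp(beta * Q)  # optimal conditional policy given the marginal
        pi /= pi.sum(axis=1, keepdims=True)
        p_a = p_s @ pi                        # update marginal to the induced action frequencies
    return pi

# Toy setting: two intended messages (states), two candidate forms (actions).
Q = np.array([[1.0, 0.8],      # hypothetical communicative value of each form per message
              [0.7, 1.0]])
p_s = np.array([0.8, 0.2])     # message frequencies

# Low beta (tight information constraint): the policy is nearly state-independent and
# defaults to the form with the higher average value, an "availability"-like bias.
print(capacity_limited_policy(Q, p_s, beta=0.1))
# High beta (loose constraint): the policy tracks the state-specific optimum.
print(capacity_limited_policy(Q, p_s, beta=10.0))
```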

RDC uses the mathematical theory of lossy compression to impose informational constraints on the perception–action loop. It has also been termed rational inattention and policy compression. I develop a proof-of-concept model of language production within the RDC framework, based on an informational constraint identifiable as a channel capacity limit on cognitive control. I show that this model provides an account of availability effects in language production, and I validate this account by examining experimental data from two previous sets of experiments: Levy & Jaeger on relative clause complementizers in English, and Zhan & Levy on noun classifier choice in Mandarin Chinese. In contrast with existing models of language production, which are primarily situated at Marr’s algorithmic level of analysis or at more concrete levels, the RDC model is at the computational level: it directly describes the inputs, outputs, and goals of the language production system, without committing to an algorithmic implementation. The high level of abstraction makes it possible to see how simple underlying computational constraints give rise to a variety of different behaviors.

Hard disks consume a significant amount of power. In general-purpose computing, hard disks can be responsible for as much as 30% of a system’s power consumption. This percentage will only increase as CPU designs favor more cores over higher single-core clock rates, hard disks adopt faster rotational speeds, and multiple hard disks per system become more prevalent. In large storage systems, hard disks can dominate system power consumption, accounting for 86% and 71% of the total power consumption in EMC and Dell storage servers, respectively.

As a result, there are several motivations to decrease the power consumed by hard disks, from increasing battery lifetime in mobile systems to reducing the financial costs associated with powering and cooling large storage systems. To reduce hard disk power consumption, spin-down algorithms are used, which put a disk in a low-power mode while it is idle. In a low-power mode such as standby, the platter is not spinning and the heads are parked, reducing power consumption. Researchers have proposed several spin-down algorithms, which are very efficient at reducing hard disk power consumption. These algorithms are typically time-out driven, spinning down the disk if the time-out expires before a request occurs. Adaptive spin-down algorithms vary the time-out value relative to request inter-arrival times. They are very effective and approach the performance of an optimal offline algorithm that knows the inter-arrival times of disk requests a priori. Although spin-down algorithms are effective at reducing hard disk power consumption, pathological workloads can completely negate a spin-down algorithm’s power-saving benefit, prematurely cause a disk to exceed its duty cycle rating, and significantly increase aggregate spin-up latency. Such pathological workloads, which periodically write to disk, are not uncommon. Both Windows and UNIX systems exhibit such behavior. For example, Figure 1 shows the periodic disk request pattern of an idle Windows XP system. In UNIX systems, applications such as task schedulers, mail clients, and CUPS periodically write to disk. Upcoming hybrid disks will place a small amount of flash memory logically next to the rotating media, as shown in Figure 2. The first hybrid disks will have either 128MB or 256MB of NVCache in a 2.5-inch form factor.
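As an illustration of the time-out-driven approach just described, here is a minimal sketch of an adaptive spin-down policy. The multiplicative update rule and the break-even constant are assumptions for illustration only; they are not the specific algorithms evaluated in this work.

```python
class AdaptiveSpinDown:
    """Toy time-out-driven spin-down policy that adapts to observed idle periods."""

    def __init__(self, timeout=2.0, min_t=0.5, max_t=30.0, grow=2.0, shrink=0.5):
        self.timeout = timeout            # seconds of idleness before spinning down
        self.min_t, self.max_t = min_t, max_t
        self.grow, self.shrink = grow, shrink

    def observe_idle_period(self, idle_s, breakeven_s=10.0):
        """Called after each idle gap between disk requests.

        breakeven_s is the (assumed) idle time at which the energy saved in
        standby equals the energy cost of the subsequent spin-up.
        Returns True if the disk would have been spun down during this gap.
        """
        spun_down = idle_s > self.timeout
        if idle_s >= breakeven_s:
            # Spinning down paid off (or would have): be more aggressive next time.
            self.timeout = max(self.min_t, self.timeout * self.shrink)
        else:
            # The gap was too short to recoup a spin-up: back off.
            self.timeout = min(self.max_t, self.timeout * self.grow)
        return spun_down
```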

A host can exploit the NVCache to achieve faster random access and boot time because it has a constant access time throughout its block address space, as shown in Figure 3, while rotating media suffers from rotational and seek latency. Access time for this particular device is roughly c + bs ÷ bs_off, where c is a 2.2ms constant overhead, bs is the requested block size, and bs_off is 4KB. In addition to the potential performance increase, hybrid disks can potentially yield longer spin-down durations: the NVCache can service I/O while the disk platter and arm are at rest, such as the write requests from Figure 1. Note that because flash memory is non-volatile, NVCache-stored data is persistent across power loss. To exploit the underlying media characteristics of hybrid hard disks for improved power management, we present four enhancements that increase power savings and reliability and reduce observed spin-up latency: Artificial Idle Periods, which extend idle periods relative to the observed I/O type; a Read-Miss Cache, which stores the content of NVCache read misses in the NVCache itself; Anticipatory Spin-Up, which spins up the rotating media in anticipation of an I/O operation not serviceable by the NVCache; and NVCache Write-Throttling, which limits the reliability impact imposed on the NVCache by I/O redirection.

We now present an overview of a hybrid disk and how its NVCache can be managed by a host operating system using a modified set of ATA commands, according to the T13 specification for hybrid disks. The four enhancements are presented in Section 3. Sectors stored in the NVCache are either pinned or unpinned; collectively these are known as the pinned set and the unpinned set, respectively. The host manages the pinned set, while the disk manages the unpinned set. Hybrid disks will also have a new power mode, NV Cache Power Mode, which can be set and unset by the host. In this mode, I/O is directed to the NVCache unpinned set while the disk “aggressively” tries to keep the rotating media spun down. Defining and implementing “aggressive” is left to the drive vendor’s discretion. Although the hybrid disk controls the spin-down policy, the host controls the minimum time the rotating media must remain spinning after a spin-up, providing the host with some control over the underlying spin-down algorithm. The host controls I/O to the NVCache pinned set. Sectors can be pinned in the NVCache, and pinned sectors can be removed or queried.
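To illustrate how a host might combine these commands for power management (the full flow is detailed in the following paragraphs), here is a schematic sketch of host-side I/O redirection while the rotating media is spun down. The HybridDisk methods used below are a hypothetical abstraction over the T13 command set, not a real driver API.

```python
class HostRedirector:
    """Schematic host-side redirection of I/O to the NVCache while the rotating
    media is spun down. `disk` is a hypothetical wrapper exposing NVCache and
    power-management operations."""

    def __init__(self, disk):
        self.disk = disk

    def read(self, lba, count):
        if self.disk.spun_down() and self.disk.nvcache_query(lba, count):
            return self.disk.nvcache_read(lba, count)   # hit: serviced from flash
        self.disk.spin_up()                             # miss: rotating media needed
        return self.disk.media_read(lba, count)

    def write(self, lba, data):
        if self.disk.spun_down() and self.disk.nvcache_free_sectors() >= len(data):
            self.disk.nvcache_store(lba, data)          # absorb the write in flash
            return
        self.disk.spin_up()                             # NVCache full: spin up,
        self.disk.flush_nvcache_to_media()              # drain redirected writes,
        self.disk.media_write(lba, data)                # then write to rotating media
```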

The pinned attribute feature is intended to increase random access performance, although it can also facilitate better power management. The host can flush a specific amount of unpinned content to rotating media to make room for more pinned sectors. However, pinned sectors cannot be evicted to create unpinned space. Addressing multiple sectors at a time is possible using an extents-based mechanism called LBA Range Entries. A host can also specify the source when adding pinned sectors to the NVCache, host or rotating media, by setting a Populate Immediate bit. This capability gives the host control over NVCache functionality: better random access or better spin-down performance. A host has additional control over a hybrid disk: it can query the disk for spin-up time, read/write NVCache throughput, and the maximum number of pinnable sectors.

We now discuss the mechanism by which a host can leverage a hybrid disk to provide power management functionality. The host controls the rotating media’s power state with traditional power management commands and uses NVCache commands to manage the pinned set. Fine-grain spin-down algorithms can be implemented because the host is informed when rotating media power state changes occur. While the rotating media is spun down, the host should use the NVCache store, query, and read commands to redirect I/O to the NVCache. If the NVCache does not have the requested read data, or if it is full before a write, the rotating media must be spun up, the request satisfied, and the NVCache content flushed to disk using both pinned set removal and traditional disk I/O commands. The host can put the disk in standby mode again when the spin-down algorithm deems it desirable to do so. We assume this method because it provides us with complete control over a hybrid disk, allowing us to implement a fine-grain adaptive spin-down algorithm and I/O subsystem enhancements that exploit a hybrid disk’s media characteristics. Alternatively, a host could rely on the NV Cache Power Mode to provide all aspects of power management. However, there are several limitations to this approach: the minimum high-power time is not dynamic, it assumes the disk controller implements the correct spin-down policy, and the NVCache may not be a suitable I/O destination for certain workloads. A host could implement its own coarse-grain spin-down algorithm by repeatedly entering and exiting the NV Cache Power Mode, recording I/O response times to implicitly infer when the rotating media is spun up. In this way, a host can utilize its own spin-down algorithm, but it still has no control over NVCache management. Note that we omit pinned and unpinned references for the remainder of this work, as we no longer refer to the unpinned set.

Mean Time To Failure (MTTF) and Mean Time Between Failures (MTBF) are widely used metrics to express disk reliability. However, disk manufacturers also provide a duty cycle rating: the number of times the rotating media can be spun down before the chance of failure on drive spin-up increases to more than 50%. When controlling a disk’s power state with a spin-down algorithm, the duty cycle metric is potentially more important than either MTBF or MTTF because a spin-down algorithm results in accelerated consumption of duty cycles. In addition to duty cycles, hybrid disks also have flash memory reliability to consider: flash memory blocks have a rated number of erase cycles they can endure before errors are expected. Today’s hard disks generally consume 5–10 times more energy in active mode than in standby mode.
As a result, adaptive spin-down algorithms are very aggressive: it is more efficient to spin down after only a few seconds of idle time. With such short idle times, the number of duty cycles consumed increases dramatically. Duty cycle terminology varies depending on drive class and technology. Typically, 3.5-inch drives refer to duty cycles as Contact Start/Stop (CSS) Cycles, where the head comes to rest on a landing zone on the platter during a power-down. An alternative technology, ramp load/unload, is typically used in notebook drives, where the head comes to rest off the side of the platter. Drives using CSS technology have duty cycle ratings in the range of 50,000, while drives with ramp load/unload technology are in the range of 500,000, mostly due to reduced stiction effects. With current compact flash specifications, the number of erase operations per block is typically rated at 100,000, with 256KB erase blocks. A hybrid disk containing a 256MB NVCache can keep its rotating media spun down while up to 256MB of data is written to it. With optimal wear-leveling and a write-before-erase architecture, a 256MB device can endure over 100 million erase operations before becoming unerasable. An optimal wear-leveling algorithm spreads all writes across the entire device’s physical address space, while a write-before-erase architecture always writes data corresponding to the same LBA to an empty physical location to ensure that data corruption does not occur on a bad overwrite. When the block erase rating is exceeded, flash memory blocks may become unerasable, but they are still readable. To a host, a hybrid disk with unerasable NVCache blocks should appear as a traditional disk.

Spin-down algorithms that control the power state of traditional hard disks are efficient at reducing disk power consumption. There is little room for improvement of such algorithms, which dynamically adjust to the most power-efficient time-out using machine learning techniques.
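As a quick check of the endurance figure cited above, the arithmetic under the stated assumptions (optimal wear-leveling, write-before-erase, 256KB erase blocks rated for 100,000 erases) works out as follows.

```python
nvcache_bytes = 256 * 1024 * 1024        # 256MB NVCache
erase_block_bytes = 256 * 1024           # 256KB erase blocks
rated_erases_per_block = 100_000         # rated erase cycles per block

blocks = nvcache_bytes // erase_block_bytes          # 1,024 erase blocks
total_erases = blocks * rated_erases_per_block       # 102,400,000 erase operations
print(blocks, total_erases)                          # -> 1024 102400000 (over 100 million)
```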

All of these works touch on the topic of Chinatown without making it the primary object of focus

Tracing the development of Chinese American Orientalism from the Chinese Village at the World’s Columbian Exposition in 1893 through its presentation in China City and New Chinatown on the eve of the Second World War, I demonstrate how this counter-hegemonic discourse was eventually incorporated back into mainstream Orientalism and used to justify the needs of a diversifying nation-state. This dissertation makes important contributions to a number of areas of study, including racial representations in Hollywood film, Asian American participation in the film industry, the history of California and the American West, and the sociology of race. As an interdisciplinary project produced in the Ethnic Studies Department at U.C. Berkeley, the dissertation remains in conversation with disciplines including film and media studies, U.S. history, and urban sociology. First and foremost, though, this project is grounded in the political and epistemological imperatives of Asian American studies. While the field of film studies has had a robust and wide-ranging engagement with Asian cinema, film studies work on Asian Americans’ relationship to the Hollywood film industry has remained much more limited. Due in part to the paradigm of national cinema, it seems at times as if the field of film studies has difficulty comprehending an Asian American subjectivity outside the lens of diaspora. That is to say that film studies scholars are often more comfortable seeing film and media representations produced by people of Chinese descent in North America as part of a cultural diaspora grounded in East Asia than they are seeing these works alongside those of African American, Native American, and Latinx cultural producers engaged with concepts of race, difference, and social power.

Because of this, the limited scholarship on race and cinema in film studies has developed primarily through a focus on African American engagement with film, leaving work on Asian American, Native American, and Latinx film participation much less developed. Given this paradigm, it should not be surprising that the earliest scholarship on Asian Americans and film developed not out of film studies but rather out of the field of Ethnic Studies in the 1970s. At a moment when film studies was dominated by questions of psychoanalytic film theory, with its focus on the cinematic apparatus and the effects of film on the subjectivity of the spectator, Asian American activists, media makers, and academics were forging the foundations of the scholarship on Asian Americans and film. While there were no essays on film or video included in the earliest Asian American studies reader, Roots, published by the UCLA Asian American Studies Center in 1971, the follow-up reader Counterpoint, published in 1976, contains a section on “Communication and Mass Media” with an essay by Judy Chu on Anna May Wong. Around the same time, the author Frank Chin, along with members of the Combined Asian American Research Project, began the process of interviewing Asian American actors and others associated with the film industry. The decade also witnessed the publication of the first monograph devoted to the topic, Eugene Wong’s On Visual Media Racism. Developing out of this earliest scholarship, Asian American studies has advanced its own academic narrative on Asian American engagement with film. This scholarship begins by focusing primarily on issues of Asian American representation on screen during the silent film and classical Hollywood periods. This scholarship on Asian American representation during the silent and classical periods is supplemented by work on well-known Asian American performers such as Anna May Wong, Philip Ahn, and Sessue Hayakawa. The focus of the field then shifts to examine Asian Americans as media producers beginning in the 1970s, with the advent of Asian American media collectives such as Visual Communications and Asian CineVision.

In this way, the scholarship in Asian American studies on film moves broadly from a focus on Asian Americans as objects of the cinematic gaze in the period before 1970 to a focus on film as a medium for Asian American self-representation in the period after 1970. Work on Chinatown in the silent and classical film periods follows this trend by focusing on Chinese Americans as objects of representation. There are a number of essays on the D.W. Griffith film Broken Blossoms, studies of Fu Manchu and Charlie Chan films, and essays and books about Anna May Wong. There are a handful of exceptions to this, most notably work by Ruth Mayer and Bjorn A. Schmidt. Schmidt’s book examines cinematic depictions of Chinese Americans as productive forces that shaped immigration laws and policies in the period between the 1910s and the 1930s. In two chapters devoted to Chinatown films, Schmidt first shows the way that Chinatown films constructed dominant conceptions of an old Chinatown as an underground site of violent crime against representations of a new Chinatown as modern and built for tourists. Schmidt then moves on to discuss the ways that many silent Chinatown films replicated the tourist gaze of the Chinatown tour. Mayer, in her essay on Chinatown films, demonstrates the importance of the curio store to silent cinematic representations of Chinatown during a moment when consumer culture in the United States was both consolidating and diversifying. This dissertation contributes to and departs from this recent scholarship in that it shifts the focus away from the ways that film represented Chinatowns and instead focuses on Chinatown residents as cultural producers. While drawing heavily on scholarship within film studies on Asian American representations and stars, this project foregrounds the way members of the ethnic enclave utilized Chinatown as a medium of cultural production. Los Angeles Chinatown’s proximity to the film industry magnified the opportunities for local Chinese Americans to utilize Chinatown to mediate dominant ideas of race, gender, and nation, but the film industry did not create these opportunities. Chinese American merchants in New York and San Francisco began using Chinatown as a medium of cultural production in the late nineteenth and early twentieth centuries to advance their own depictions of Chinese people.

The rising popularity of film as one of the most popular forms of leisure ensured that by the 1930s, Chinese Americans in Los Angeles possessed a greater ability to shape the national idea of Chinatown than Chinese Americans in New York and San Francisco. Given this focus on the development of race and gender as social categories within the United States, this dissertation is also in conversation with the literature within the field of sociology on Chinatowns as ethnic enclaves. Whereas the topic of Asian American engagement with film has remained somewhat marginal to film studies, the topic of Chinatown was central not only to the development of sociological theories of ethnic enclaves in the first half of the twentieth century but more broadly to the development of the entire field of urban sociology in the same period. Many early sociological studies of Chinese Americans were influenced by the work of Robert Park and his Chicago School of urban sociology. Park argued in his race relations cycle that when two ethnic or racial groups come into contact with one another, these groups go through a four-stage cycle of contact, conflict, accommodation, and eventually assimilation. This and other ideas within the Chicago School of sociology were deeply rooted in notions of human ecology, the study of the ways humans relate to one another and to their environment. Park believed that human life was divided into two levels, the biotic and the cultural, and that the social organization of cities was a direct result of the competition for resources. Focusing on human biology as a basis for difference, scholars in the Chicago School largely rejected earlier continental thinkers like Max Weber, Karl Marx, and Georg Simmel, who saw the larger social and economic forces of capitalism as fundamental to understanding human interaction. As such, these early sociologists were not interested in offering a systemic critique of American nationalism, racism, or empire, nor were they concerned in any but the most marginal ways with determining how these and other forms of power structured the lives of Chinese Americans. Rather, sociologists studying Chinese Americans under the influence of the Chicago School asked a much less critical set of questions about the extent to which Chinatowns facilitated the assimilation of Chinese Americans into US society. The earliest scholarship on Los Angeles Chinatown developed out of this framework and was produced by a handful of Chinese American graduate students in the Sociology Department at the University of Southern California between the 1930s and 1950s. Master’s theses by Kit King Louis, Mabel Sam Lee, and Kim Fong Tom, as well as a doctoral dissertation by Wen-hui Chen, all addressed issues of Chinese American assimilation and generational differences in the Chinese American ethnic enclave in Los Angeles. In addition to these studies in sociology, Master’s theses by Charles Ferguson in Political Science at UCLA, Edwin Bingham in History at Occidental College, and Shan Wu in the business school at USC from this same period represent some of the earliest scholarship on Chinese Americans in Los Angeles.

While the model advanced by Park is no longer the central lens used by urban sociologists, the way that scholars in this tradition define Chinatown has remained surprisingly similar to that of this earlier generation of ethnic enclave scholars. Scholars in sociology continue to use the term Chinatown to mean Chinese American ethnic enclave, and in the process these sociologists foreground ties of ethnicity and culture over ties of place and geography. For example, in 1992 the sociologist Min Zhou wrote, “I treat Chinatown as an economic enclave embedded in the very nature of the community’s social structure offering a positive alternative to immigrant incorporation.” She goes on to explain that this enclave “is not so much a geographical concept as an organizational one.” Zhou is clear that this economic enclave must be distinguished from an ethnic neighborhood. While most of the businesses in Zhou’s enclave are concentrated in Manhattan’s Chinatown, many are situated elsewhere. Using this definition, she further excludes from her study non-Chinese-owned businesses that are based in Chinatown. Peter Kwong was a scholar who was openly critical of many of the arguments advanced by Min Zhou, and yet he nonetheless worked from a similar definition of Chinatown as an ethnic enclave tied together by social and economic relationships. Thus one of the Chicago School’s most long-lasting influences on the study of Chinese Americans may be a definition of Chinatown as an ethnic enclave, loosely connected by place and bound primarily by social and ethnic ties. At its best, this ethnic enclave literature reminds us that Chinatowns are not homogeneous but rather socially stratified collections of individuals, institutions, and organizations. Works in this ethnic enclave tradition, like Judy Yung’s Unbound Feet along with the Chinese Historical Society of Southern California’s Linking Our Lives, focus on how gender stratifies and influences the lives of women in San Francisco and Los Angeles Chinatowns, respectively. Other scholars have taken a more global approach. Work by Peter Kwong foregrounds nationality as opposed to race while discussing divisions of class in New York Chinatown. Jan Lin’s Reconstructing Chinatown shows how global capital interacted with national and local forces to shape the nature of Chinatown. Regardless of whether these scholars focus primarily on stratification within the ethnic community in a way that is US-centric or in a way that links the global, national, and local, these and other works in the ethnic enclave tradition remind us that power structures Chinese ethnic enclaves just as it structures the rest of society. While there have not been many recent academic studies of Los Angeles Chinatown in the first half of the twentieth century, key historical studies have focused on the Los Angeles Plaza and the other areas that make up the core of Los Angeles. This project builds on this growing literature on the multiethnic history of Los Angeles. As part of his broader exploration of the Los Angeles Plaza, William Estrada looks at the development of China City in relation to Olvera Street, contrasting Christine Sterling’s roles in the two projects. Mark Wild looks at the Chinese as one group that lived in what he calls the central districts of Los Angeles in the first three decades of the twentieth century.

The levels and numbers of supervisors varied by institution and by clinical division

An unsuccessful account can provide valuable lessons, as can a tale of success. Whether the research provides understanding of successful implementation or of catastrophic failure, a strong analytical exposition of the details germane to the outcome provides opportunities to learn and to establish best practices. This research seeks to understand what is required for successful implementation of a program in a public setting undergoing substantial organizational change. Chapter 2 assessed the program implementation in terms of changes made to the program to fit the environment. It focused, in essence, on the technical details of the implementation, which Pressman and Wildavsky and Hupe suggest is the most difficult aspect of implementation. This chapter will provide insight into what is suggested here as the other major technical detail of implementation: the managerial capacity to perform. The capacity to change is important to the implementation literature at two primary levels: managerial and organizational. The organizational context represents a macro viewpoint, whereas the managerial level is a more micro, detailed look at an entity’s capabilities. These studies review adaptation to internal or external changes with respect to enterprise-level resources or the ability to flex and perform around those changes. Klarner, Probst, and Soparnot examined the World Health Organization and its organizational-level change capacity. Theirs was a unique look into a public-sector example of change capacity because it examined the organizational context, the change process, and how the organization mobilized around lessons learned from change experiences. The authors concluded that analysis of an organization’s capacity for change better equips that entity to deal with planned change, thereby increasing its chances for success.

Building this type of capacity generally requires a focus on three organizational processes: a learning-based culture, general support for change activities across the enterprise, and the change effort itself. An organization’s capacity for change is a direct function of its available resources and its managerial adaptability. The managerial capacity for change relates to the ability of the administrative layer to perform and produce successful outcomes. Managerial capacity is the focus of this chapter because it helps to explain the overarching question guiding this work: how can public-sector management overcome institutional-level forces and implement a complex program successfully? When implementing major reform efforts around health care delivery in the public sector, a manager’s capability to act represents a lever for success. A more detailed treatment of what managerial capacity means in this dissertation’s framework is presented later in this chapter. Within the literature, however, a general association is drawn between managerial capacity and administrative flexibility, autonomy, and choice in actions. This association supports the argument that health care reform efforts, such as the one studied in this paper, require an administrative layer that is able to act in a manner not typically associated with a staid, bureaucratic internal environment. Correctional organizations are very bureaucratic, and the California prison system is no exception. This became a significant roadblock for the managers in the receivership under examination. An interesting organizational feature of the California prison health system is that it employs its clinical staff and owns its primary care facilities. This model of owning resources rather than contracting out has implications for the nature of management behavior within the organization and for how intervention programs can be planned and executed within this setting. For prisons located in geographical areas where it is difficult to recruit clinical professionals, the California Department of Corrections and Rehabilitation (CDCR) contracts for outside specialty services and acute care on a fee-for-service basis.

As both a purchaser and a provider of health care services, the state’s prison system has complex organizational processes that require the coordination of activities and multiple types of personnel. Prior to the receivership, the breakdown of and lack of attention to the coordination of health care activities in the correctional setting led to a degradation of services and negative health outcomes for prisoners. Managers lacked administrative flexibility in their actions and additionally lacked the ability to staff positions over the long term in geographically undesirable areas. The managers in this setting were assigned the task of implementing a series of projects that were distinct parts of a central program of health care delivery reform. Implementation programs themselves serve as the change vehicles for organizations in that they adapt to situations or environmental challenges. The catalysts for change were discussed in the previous chapter, and these catalysts are the starting points for reform. The vision for change is then memorialized as tasks within a project plan, and typically it is the aggregation of related projects that constitutes a program. Put another way, a program to be implemented may be dissected into its distinct parts, which are called projects. This chapter seeks to provide an understanding of how managerial capacity is controlled by an organizational structure that is guided by project-level structure. The previous chapter used program-level analysis to focus on implementation theory. It provided a methodology that relied on developing program elements in a way that integrates with prevailing institutionalized processes. The underlying theory was that this type of approach would lead to successful program implementation. It relied on looking at program-level variables rather than at the organizational level of analysis. This chapter continues the theme of focusing on program-level variables, this time looking at management and managerial behavior. It provides an understanding of what managers can be taught to focus on during change-inducing processes.

Adapting models that originate in the not-for-profit sector is not typically done in the public correctional setting. The application of private-sector tools is a more familiar strategy, and even these require significant adaptation to maximize their usefulness for understanding a given situation. Diffusion of innovation tends to be a more successful method for applying private-sector operational strategies within the public agency setting. Similar large-scale attempts at adopting non-public-sector program models for deployment within the vastly different structure of public works have resulted in failure. The public sector is characterized as having a highly bureaucratic organizational structure, being inflexible to change, and behaving in an extremely routinized manner. Within the private-sector setting there exists a different set of rules, a different structure of accountability, and different goals to achieve, as compared with the traditional public sector. As such, difficulty in adapting innovations established in one sector to another is to be expected. Both the internal and external environments were diametrically opposed to the studied health care reform program, and only an external regulator insisted on its use and success. Adding to the complexity of program and environment were the challenges related to the strategy and technical details of the implementation. Previously, those details had been addressed only in private-sector settings, and therefore the nuances related to the public sector were not known. Outside the private sector, government-level political support is an important factor for public managers, especially under reform programs. The extent to which administrators perceive support has a significant influence on managerial and employee behavior. These external concerns differ significantly from those typically faced in nonpublic sectors. Due to changes in political administrations, many decisions faced by public-sector managers related to organizational structure are questioned, in order to maintain or bolster performance. The comparative difference between sectors in how often, or to what degree, these environmental challenges make administration difficult is not well described in the literature. What is clear, however, is that differences in routine administrative life exist between sectors, just as the types of obstacles faced tend to differ. Program implementations that require the establishment of collaborative, cross-functional work groups develop their own policies and rules to guide individual and administrative behavior. These rules are defined within project-group-level cultures that form to define the norms of behavior, enabling the groups to work efficiently. According to Schein, this is expressed through the development of proprietary languages and parameters of acceptable group behavior. The internal environment of the projects established by the receivership was not exempt from the development of new cultures within the various project groups. The agency under receivership, CDCR, had its own highly institutionalized processes and well-defined set of cultures that had been established at the agency’s inception and evolved over decades. Its structure and operational framework defined both the ends and the means under which administrative actions were determined and undertaken. The receivership organization was a much younger entity, with staff at both the management and worker levels less cohesive and less culturally structured than at CDCR. As a whole, the staff from CDCR was longer tenured within that agency and therefore had well-defined social network channels and routinized behavior, in sharp contrast to the newly established receivership organization.

The programs the receivership implemented, and specifically the CCM program, involved both CDCR and the receivership in terms of personnel, resources, time, and communications. This required integration involved establishing cross-functional teams from both organizations to carry out the work. Studying the administrative behavior of both entities at the organizational level, as suggested by the institutional-school approach, may therefore be overly complex and likely entirely inaccurate. Workers had their home organizations in either CDCR or the receivership. Managers were connected across the receivership enterprise through program-level work that integrated departments. These managers had their performance evaluated at the program level, not at the organizational level. This meant that accolades or retribution followed from the performance of the manager’s unit on each project in which it was involved. Their performance was tightly integrated with the output and deliverables produced by the sister departments to which they were tied on a particular project. The headquarters structure and its nature of accountability differed from management at the prison-facility level. Within the prisons, managers were evaluated based on their areas’ performance, not on overall statewide performance. Whereas successful delivery of project tasks was the headquarters’ focus, inmate-patient health care outcomes and the passing of regulatory audits were the prison manager’s focus. The receiver-level projects were designed to ultimately lead to the improvement of inmate-patient health and regulatory audit results. Indirectly, the managers at both levels had their missions tied together, but they were separated by a temporal gap. This difference in focus changes the method by which we can understand administrative behavior at both organizations and how work was viewed and approached by these managers. The managers participating in this study primarily held clinical professional managerial posts, such as chief of pharmacy, because they were clinical specialists. Within the receivership, the administrators were bureaucrats with statewide responsibility, holding the highest-level positions in the department. Job titles for the highest-level administrators within CPHCS often mirrored the titles of their direct reports at the prison level. For example, the highest administrator of the nursing division within CPHCS was titled the statewide chief nurse executive. Each prison also had a classification for its head of nursing, titled a chief nurse executive. The use of the designation “statewide” showed the difference in authority level and represented the matrixed, or indirect, relationship of the CPHCS administrator to CDCR’s highest-level managers. The prison-level clinicians at the non-management level in the departments of nursing, mental health, dental, pharmacy, medical, and ancillary services all reported to a chief- or director-level individual within the prison. For example, a staff psychiatrist reported to a chief psychiatrist. Below the chief/director level was an intermediate layer of supervisory staff. Nursing and mental health, for example, required far more labor resources than did dental, and therefore the levels of supervisors were greater in those two divisions. For example, an institution may have had 100 nurses on staff and therefore required three levels of supervisory staff.
Each clinical area, with the exception of pharmacy, required significant nursing staff, and therefore the nursing division ultimately had the greatest number of staff throughout the prisons. Division of labor in this group was great and, administratively speaking, the layers of supervisory staff that developed over time within CDCR were commensurate with the highly specialized and large workload carried by the division. Staff-level workers were licensed vocational nurses or registered nurses who were managed by supervisor registered nurses (SRNs). The SRNs had three levels of successive importance: I, II, and III. Each step up in supervisory level within nursing represented a significant advance within the administrative hierarchy, with both salary grade and workload accountability increasing accordingly.

Is there epidemiological evidence that BCG vaccination could be neuroprotective?

Usually, MPTP treatment causes an increase in the number of nigral microglia, which may be due to resident microglia replication or to the influx of bone-marrow-derived cells from the periphery. The increased number of activated microglia in the SNc is thought to contribute to MPTP-induced nigrostriatal system damage. Consistent with those reports, we observed that MPTP-treated mice displayed a greater than 2-fold increase in the number of Iba1+ cells in their SNc. In contrast, mice that were treated with BCG prior to MPTP treatment had a number of SNc Iba1+ cells similar to that in saline-treated control mice. In addition, in BCG-treated mice, the nigral microglia had small cell bodies and long ramified processes, indicating a resting state. Such microglia are thought to exert neurosupportive functions through their ability to produce neurotrophins and eliminate excitotoxins. These observations parallel previous assessments of microglia in MPTP-treated mice that received an adoptive transfer of spleen cells from Copaxone/CFA-vaccinated mice. However, our results show that peripheral BCG-induced immune responses are sufficient to almost completely inhibit the MPTP-induced increase in activated microglia number in the SNc. Conceivably, by circumventing the MPTP-induced increase in activated microglia and the accompanying proinflammatory milieu, the surviving dopaminergic neurons were better able to recover function in BCG-treated mice. Further studies will be necessary to establish how the marked alterations in microglia morphology and activation affect long-term nigrostriatal dopamine system integrity. Proposed mechanisms for neuroprotective vaccines have been contradictory with regard to whether Th1, Th2, Th3, and/or Treg cells play beneficial or pathogenic roles. Some of these differences may be due to the different disease models studied.

Among studies of immune-mediated protection in the MPTP mouse PD model, recent work has pointed to CD4+ T cells as playing a key role in neurodegeneration. Th17 cells recognizing nitrated α-synuclein can exacerbate MPTP-induced neuronal cell loss, but can be held in check by Tregs. The immune responses elicited by CFA and BCG have been extensively studied; both are potent inducers of IFNγ-secreting Th1-type CD4+ T cells and activators of antigen-presenting cells. IFNγ is known to antagonize the development of Th17 cells and can induce apoptosis of self-reactive T cells. Additionally, BCG or Mycobacterium tuberculosis infection induces Tregs that proliferate and accumulate at sites of infection, which contribute to limiting inflammatory responses and tissue damage during infection. Accordingly, the Th1 and Treg responses may have suppressed the priming and expansion of T effector cells following MPTP treatment. It is possible that the robust T cell responses to BCG also created greater T cell competition for APCs that reduced the priming of T effector cells in the periphery. Another possible protective mechanism is that the active BCG infection in the periphery diverted T effectors, macrophages, and BMDC microglia precursors from entering the CNS after MPTP treatment. Previous studies in the experimental autoimmune encephalomyelitis (EAE) model have shown that infection with BCG 6 weeks before the induction of EAE diverts activated myelin-reactive CD4+ T cells from the CNS to granulomas in the spleen and liver. This diversion was not due to cross-reactivity between BCG antigens and encephalitogenic proteins. Evidently, the peripheral inflammatory lesions non-specifically attracted T effectors, which blunted the development of EAE. Interestingly, in clinical trials, multiple sclerosis (MS) patients immunized with BCG had a 57% reduction in lesions as measured by MRI. Thus, there is some clinical evidence that BCG treatment can suppress a neurodegenerative autoimmune response.

Based on our observations that MPTP did not increase microglia number and that microglia were in a resting state in the nigra of BCG-vaccinated mice, it is possible that the BCG treatment circumvented the activation and replication of resident microglia, diverted macrophage or BMDC microglia precursors from entering the CNS, and/or induced some efflux of macrophage-type cells to the periphery. Another possible protective mechanism is that the long period during which the attenuated BCG slowly replicates in the host causes a long-term increase in the levels of circulating immune factors, many of which can enter the CNS. These immune factors may have limited microglia activation and proliferation, limited the influx of peripheral macrophages or microglia precursor cells, or had a supportive effect on neurons in the area of injury. Further testing is required to distinguish among these possibilities. There are additional lines of evidence that peripheral immune responses can modulate the CNS milieu. Many studies have shown that treatment of pregnant rodents with immunostimulants such as lipopolysaccharide, polyinosinic:polycytidylic acid, turpentine, or viral infection causes the offspring to have behavioral abnormalities. It is thought that the maternal immune responses to these treatments can alter neurodevelopment in the fetus. These studies provide further evidence that peripheral immune responses can modulate the CNS milieu independently of CNS-reactive T cells. While the exact mechanisms of BCG neuroprotection in the MPTP mouse model remain to be elucidated, our results suggest that peripheral BCG-induced immune responses can exert neuroprotective effects independent of CNS antigen specificity. This represents a paradigm shift from the current notion that neuroprotective vaccines work by inducing protective T cell autoimmunity that acts locally in damaged areas of the CNS. It will be of interest to transfuse GFP-marked BMDCs and T cells into mice prior to BCG and MPTP treatments in order to further study BCG’s protective mechanisms.

BCG vaccinations were discontinued in the USA in the 1950s, largely because of the low incidence of TB and the vaccine’s incomplete protection. However, BCG vaccination is still given to infants and children in many countries. Adults who were BCG vaccinated as children have little or no protection from TB. Because BCG vaccine effects have greatly diminished by middle age, we would not expect to find a relationship between childhood BCG vaccination and PD incidence. Moreover, the BCG vaccine-mediated protection from TB relies on a small population of memory T cells that is quiescent and that only expands after re-exposure to TB. Since PD patients are not normally exposed to active TB, their few BCG-reactive memory T cells should be quiescent and would not be a source of neuroprotective factors. While neuroprotective vaccines cannot correct basic intrinsic neuronal deficits, they may alter the CNS environment to be more neurosupportive, so that neurodegeneration and secondary damage to neurons progress at a slower rate. Conceivably, BCG-induced neuroprotective immune responses will be more beneficial in a slowly progressing disease, as in human PD, than in the acutely neurotoxic MPTP model we have studied. In summary, our data show that BCG vaccination, which is safe for human use, can preserve striatal dopaminergic markers. This strongly supports the notion that peripheral immune responses can be beneficial in neuropathological conditions. Second-generation recombinant BCG vaccines, which have greater immunogenicity and are expected to elicit enhanced immunity against TB, are now being tested in clinical trials. Some new recombinant BCG strains express a human cytokine to boost desired immune responses. It will be of interest to test whether different recombinant BCG strains can enhance the vaccination’s neuroprotective effects. Further studies of how peripheral immune system responses can modulate neurons and glia in the CNS may provide new therapeutic strategies to safely slow neurodegenerative disease processes.

This study focuses on the California Department of Corrections and Rehabilitation, which is responsible for the care and custody of incarcerated individuals in the state of California who have been sentenced to terms greater than one year. Individuals sentenced to terms of less than a year, or those awaiting sentencing, are under the care of different entities: the county or regional jails. In contrast to the state’s other custodial systems, CDCR is distinguished by its long-term focus on the care and control of individuals. This shapes the development of policy that promotes structural permanence. A policy focus of this type presents a significant obstacle for change management and process management, two key elements required for the program-implementation success sought within CDCR. The agency was brought under federal receivership to improve health care outcomes for prisoners, a change that enabled the federal courts to demand the implementation of health programs aimed at improving health care outcomes.

The prison system in California consists of 33 separate facilities serving over 175,000 inmates in a system designed for no more than 100,000 inmates. Due to the “three-strikes rule,” a law that requires third-time felons to be sentenced to prison terms, many of the state’s 33 correctional facilities were operating at over 200% of designed capacity. Additionally, by the end of calendar year 2008, the average age of prisoners was 37 years. This represents an increase of 37% in the average age over a 28-year span: in 1980 the average age of the incarcerated was 27. Overcrowding in the system, combined with upwardly spiraling costs, led to organizational failure. Inmates typically have more health issues than does the non-incarcerated population. An examination of de-identified CDCR data reveals that approximately 70% of the inmate population was taking at least one medication in the year 2009. The average for the U.S. population is closer to 47%. Aging inmates cost two to three times as much to incarcerate as younger prisoners, on average $98,000 to $138,000 a year. When inmates are paroled, they do not receive the same access to health care as they do while imprisoned. In the state of California, inmates had a 63.7 percent three-year recidivism rate as measured in fiscal year 2012. While inmates are under custodial care, their health care is free. Individuals reentering the prison system with medical conditions that were not treated while on parole may exhibit exacerbated medical conditions. The costs related to the treatment of individuals with more severe conditions are higher than they would have been if the individuals had received continuous care. In the absence of proactive treatments, and with an aging population in an overcrowded and unsafe environment, the costs associated with health care are likely to continue to rise among these wards of the state. The public health concerns go beyond the cost-of-care issue and are related to high recidivism rates and community health issues, including the spread of communicable diseases such as AIDS. Some of the most prominent failures within the CDCR system were avoidable inmate-patient deaths, believed to have resulted from poor systems and controls related to the delivery of health care. A receivership was established as the result of a federal class-action suit, Plata v. Schwarzenegger, under which it was found that CDCR was deficient in providing constitutionally acceptable levels of medical care to prison inmates. Several federal court cases concerning unconstitutional conditions within the system preceded the institution of this receivership. Under Plata v. Schwarzenegger, it was found that, on average, one inmate-patient died every six to seven days as the result of deficiencies in the state prison’s health care system. The receiver was given all powers vested by law in the Secretary of the California Department of Corrections and Rehabilitation, including the administration, control, management, operation, and financing of the California prisons’ medical health care system. Thus the court placed full accountability for inmate health care in the hands of the receiver, giving the receiver the ability and responsibility to change the system according to court requirements.
The receiver recruited a diverse team of industry experts consisting of medical, nursing, clinical quality, information technology, and facility construction professionals to assist with the prison health care reform efforts. CDCR is presently the second-largest law enforcement department in the nation and the single largest state-run prison system in the United States. Over the past decade, this corrections agency has grown from the state of California’s third-largest employer to its second-largest, behind only the state’s University of California system. For fiscal year 2011, CDCR budgeted $9.5 billion to supervise and oversee more than 300,000 of the state’s convicted criminals. This size and structure relate to the common perception of big-government bureaucracy. Large, bureaucratic organizations are unwieldy and difficult to change. Max Weber pointed out that “once it is fully established, bureaucracy is among those social structures which are the hardest to destroy”. This is true due to bureaucracy’s cohesiveness and discipline, its control of the facts, and its single-minded concentration on the maintenance of power.

Utilizing TLCs may result in greater clinical flexibility and effectiveness and less role strain

The kinase activity of both CK1δ and CK1ε is inhibited by autophosphorylation of an intrinsically disordered inhibitory tail that follows the kinase domain, a feature that sets these isoforms apart from other members of the CK1 family. Because the full-length kinase autophosphorylates and slowly inactivates itself in vitro, most biochemical studies exploring the activity of CK1δ/ε on clock proteins utilize the truncated, constitutively active protein, although new studies are finally beginning to explore the consequences of autophosphorylation in more detail. However, not much is yet known about how the phosphorylated tail interacts with the kinase domain to inhibit its activity; several autophosphorylation sites were previously identified on CK1ε at S323, T325, T334, T337, S368, S405, S407, and S408 using limited proteolysis and phosphatase treatment or through Ser/Thr-to-Ala substitutions in vitro, although it is currently not known which of these sites are important for kinase regulation of the clock. One potential interface between the kinase domain and the autoinhibitory tail has been mapped through cross-linking and mass spectrometry, suggesting that the tail might dock some phosphorylated Ser/Thr residues close to the anion binding sites near the active site. This study also provided evidence that the tail may be able to regulate substrate binding, and therefore control the specificity of the kinase, by comparing the activity of CK1α, a tailless kinase, with that of CK1ε on two substrates, PER2 and Disheveled. Understanding the role of tail autophosphorylation and its regulation of kinase activity is sure to shed light on the control of circadian rhythms by CK1δ/ε. Some sites within the C-terminal tail of CK1δ and/or CK1ε are known to be phosphorylated by other kinases, such as AMPK, PKA, Chk1, PKCα, and cyclin-dependent kinases.

PKA phosphorylates S370 in CK1δ to reduce its kinase activity; consistent with this, mutation of S370 to alanine increases CK1-dependent ectopic dorsal axis formation in Xenopus laevis. Chk1 and PKCα also reduce CK1δ kinase activity through phosphorylation of overlapping sites at S328, T329, S331, S370, and T397 in the tail of rat CK1δ. Phosphorylation of CK1δ T347 influences its activity on PER2 in cells; this site was found to be phosphorylated by proline-directed cyclin-dependent kinases rather than by autophosphorylation. CDK2 was also found to reduce the activity of rat CK1δ in vitro through phosphorylation of additional sites at T329, S331, T344, S356, S361, and T397. Unlike the other kinases listed here, phosphorylation of S389 on CK1ε by AMPK increases the apparent kinase activity on the PER2 phosphodegron in cells; consequently, activation of AMPK with metformin increased the degradation of PER2. The phosphorylation of CK1δ and/or CK1ε tails by these other kinases therefore has the potential to link their regulation of PER2 and the circadian clock to metabolism, the DNA damage response, and the cell cycle. There is now strong evidence that the C-terminus of CK1δ plays a direct role in the regulation of circadian period. Recently, tissue-specific methylation of CK1δ was shown to regulate alternative splicing of the kinase into two unique isoforms, δ1 and δ2, that differ only in the extreme C-terminal 15 residues. Remarkably, expression of the canonical δ1 isoform decreases PER2 half-life and circadian period, while the slightly shorter δ2 isoform increases PER2 half-life and circadian period. Further biochemical studies revealed that these two variants exhibit differential activity on the stabilizing priming site of the PER2 FASP region: the δ1 isoform has lower activity than δ2, which also closely resembles the C-terminus of the ε isoform.

These data suggest that a very short region at the C-terminal end of the tail could play a major role in regulation of CK1δ and the PER2 phosphoswitch to control circadian period. This is bolstered by the discovery of a missense mutation in the same region of the CK1ε tail at S408N in humans that has been associated with protection from Delayed Sleep Phase Syndrome and Non-24-hr Sleep-Wake Syndrome. Further studies will help to reveal biochemical mechanisms behind regulation of kinase activity and substrate selectivity by the C-terminal tail of CK1δ and CK1ε to determine how they play into regulation of circadian rhythms. The central thesis of this article is very simple: Health professionals have significantly underestimated the importance of lifestyle for mental health. More specifically, mental health professionals have underestimated the importance of unhealthy lifestyle factors in contributing to multiple psychopathologies, as well as the importance of healthy lifestyles for treating multiple psychopathologies, for fostering psychological and social well-being, and for preserving and optimizing cognitive capacities and neural functions. Greater awareness of lifestyle factors offers major advantages, yet few health professionals are likely to master the multiple burgeoning literatures. This article therefore reviews research on the effects and effectiveness of eight major therapeutic lifestyle changes ; the principles, advantages, and challenges involved in implementing them; the factors hindering their use; and the many implications of contemporary lifestyles for both individuals and society. Lifestyle factors can be potent in determining both physical and mental health. In modern affluent societies, the diseases exacting the greatest mortality and morbidity— such as cardiovascular disorders, obesity, diabetes, and cancer—are now strongly determined by lifestyle. Differences in just four lifestyle factors—smoking, physical activity, alcohol intake, and diet— exert a major impact on mortality, and “even small differences in lifestyle can make a major difference in health status” .

TLCs can be potent. They can ameliorate prostate cancer, reverse coronary arteriosclerosis, and be as effective as psychotherapy or medication for treating some depressive disorders . Consequently, there is growing awareness that contemporary medicine needs to focus on lifestyle changes for primary prevention, for secondary intervention, and to empower patients' self-management of their own health. Mental health professionals and their patients have much to gain from similar shifts. Yet TLCs are insufficiently appreciated, taught, or utilized. In fact, in some ways, mental health professionals have moved away from effective lifestyle interventions. Economic and institutional pressures are pushing therapists of all persuasions toward briefer, more stylized interventions. Psychiatrists in particular are being pressured to offer less psychotherapy, prescribe more drugs, and focus on 15-minute "med checks," a pressure that psychologists who obtain prescription privileges will doubtless also face . As a result, patients suffer from inattention to complex psychodynamic and social factors, and therapists can suffer painful cognitive dissonance and role strain when they shortchange patients who need more than what is allowed by mandated brief treatments . A further cost of current therapeutic trends is the underestimation and underutilization of lifestyle treatments despite considerable evidence of their effectiveness. In fact, the need for lifestyle treatments is growing, because unhealthy behaviors such as overeating and lack of exercise are increasing to such an extent that the World Health Organization warned that "an escalating global epidemic of overweight and obesity—'globesity'—is taking over many parts of the world" and exacting enormous medical, psychological, social, and economic costs. Lifestyle changes can offer significant therapeutic advantages for patients, therapists, and societies. First, TLCs can be both effective and cost-effective, and some—such as exercise for depression and the use of fish oils to prevent psychosis in high-risk youth—may be as effective as pharmacotherapy or psychotherapy . TLCs can be used alone or adjunctively and are often accessible and affordable; many can be introduced quickly, sometimes even in the first session . TLCs have few negatives. Unlike both psychotherapy and pharmacotherapy, they are free of stigma and can even confer social benefits and social esteem . In addition, they have fewer side effects and complications than medications .

TLCs offer significant secondary benefits to patients, such as improvements in physical health, self-esteem, and quality of life . Furthermore, some TLCs—for example, exercise, diet, and meditation—may also be neuroprotective and reduce the risk of subsequent age-related cognitive losses and corresponding neural shrinkage . Many TLCs—such as meditation, relaxation, recreation, and time in nature—are enjoyable and may therefore become healthy self-sustaining habits . Many TLCs not only reduce psychopathology but can also enhance health and well-being. For example, meditation can be therapeutic for multiple psychological and psychosomatic disorders . Yet it can also enhance psychological well-being and maturity in normal populations and can be used to cultivate qualities that are of particular value to clinicians, such as calmness, empathy, and self-actualization . Knowledge of TLCs can benefit clinicians in several ways. It will be particularly interesting to see the extent to which clinicians exposed to information about TLCs adopt healthier lifestyles themselves and, if so, how adopting them affects their professional practice, because there is already evidence that therapists with healthy lifestyles are more likely to suggest lifestyle changes to their patients . There are also entrepreneurial opportunities. Clinics are needed that offer systematic lifestyle programs for mental health that are similar to current programs for reversing coronary artery disease . For societies, TLCs may offer significant community and economic advantages. Economic benefits can accrue from reducing the costs of lifestyle-related disorders such as obesity, which alone accounts for over $100 billion in costs in the United States each year . Community benefits can occur both directly through enhanced personal relationships and service and indirectly through social networks. Recent research demonstrates that healthy behaviors and happiness can spread extensively through social networks, even through three degrees of separation to, for example, the friends of one's friends' friends . Encouraging TLCs in patients may therefore inspire similar healthy behaviors and greater well-being in their families, friends, and co-workers and thereby have far-reaching multiplier effects . These effects offer novel evidence for the public health benefits of mental health interventions in general and of TLCs in particular. So what lifestyle changes warrant consideration? Considerable research and clinical evidence support the following eight TLCs: exercise, nutrition and diet, time in nature, relationships, recreation, relaxation and stress management, religious and spiritual involvement, and contribution and service to others. Exercise offers physical benefits that extend over multiple body systems. It reduces the risk of multiple disorders, including cancer, and is therapeutic for physical disorders ranging from cardiovascular diseases to diabetes to prostate cancer . Exercise is also, as the Harvard Mental Health Letter concluded, "a healthful, inexpensive, and insufficiently used treatment for a variety of psychiatric disorders." As with physical effects, exercise offers both preventive and therapeutic psychological benefits. In terms of prevention, both cross-sectional and prospective studies show that exercise can reduce the risk of depression as well as neurodegenerative disorders such as age-related cognitive decline, Alzheimer's disease, and Parkinson's disease .
In terms of therapeutic benefits, responsive disorders include depression, anxiety, eating, addictive, and body dysmorphic disorders. Exercise also reduces chronic pain, age-related cognitive decline, the severity of Alzheimer’s disease, and some symptoms of schizophrenia . The most studied disorder in relation to exercise to date is mild to moderate depression. Cross-sectional, prospective, and meta-analytic studies suggest that exercise is both preventive and therapeutic, and in terms of therapeutic benefits it compares favorably with pharmacotherapy and psychotherapy . Both aerobic exercise and nonaerobic weight training are effective for both short-term interventions and long-term maintenance, and there appears to be a dose–response relationship, with higher intensity workouts being more effective. Exercise is a valuable adjunct to pharmacotherapy, and special populations such as postpartum mothers, the elderly, and perhaps children appear to benefit . Possible mediating factors that contribute to these antidepressant effects span physiological, psychological, and neural domains. Proposed physiological mediators include changes in serotonin metabolism, improved sleep, as well as endorphin release and consequent “runner’s high” . Psychological factors include enhanced self-efficacy and self esteem, interruption of negative thoughts and rumination , and perhaps the breakdown of muscular armor, the chronic psychosomatic muscle tension patterns that express emotional conflicts and are a focus of somatic therapies . Neural factors are especially intriguing. Exercise increases brain volume , vascularization, blood flow, and functional measures . Animal studies suggest that exercise-induced changes in the hippocampus include increased neuronogenesis, synaptogenesis, neuronal preservation, interneuronal connections, and BDNF . Given these neural effects, it is not surprising that exercise can also confer significant cognitive benefits . These range from enhancing academic performance in youth, to aiding stroke recovery, to reducing age-related memory loss and the risk of both Alzheimer’s and non-Alzheimer’s dementia in the elderly . Multiple studies show that exercise is a valuable therapy for Alzheimer’s patients that can improve intellectual capacities, social functions, emotional states, and caregiver distress .

The digest was considered semi-specific and up to 3 missed cleavages were allowed

Similar results were observed for EGFR degradation, with no major proteome-wide changes occurring and EGFR being virtually the only protein significantly downregulated in CXCL12-Ctx treatment compared to control in both the surface-enriched and whole cell proteomics . Interestingly, a previously published proteomics dataset of LYTAC-mediated EGFR degradation identified additional proteins significantly up- or down-regulated following LYTAC treatment. Comparison with our experiment in the same cell line suggests that KineTACs are more selective in degrading EGFR. As there is large overlap in peptide IDs observed between the two datasets, the observed greater selectivity is not due to a lack of sensitivity of the KineTAC proteomics experiment . CXCR4 and CXCR7 peptide IDs were not altered in the surface-enriched sample, and CXCR4 IDs were also unchanged in the whole cell sample, indicating that treatment with KineTAC does not significantly impact CXCR4 or CXCR7 levels. Furthermore, protein levels of GRB2 and SHC1, which are known interacting partners of EGFR , were also not significantly changed. Together, these data demonstrate the exquisite selectivity of KineTACs for degrading only the target protein without inducing unwanted, off-target proteome-wide changes. To elucidate whether KineTAC-mediated degradation could impart functional cellular consequences, the cell viability of HER2-expressing cells was measured following treatment with CXCL12-Tras. MDA-MB-175VII breast cancer cells are reported to be sensitive to trastuzumab treatment, and as such serve as an ideal model to test the functional consequence of degrading HER2 compared to inhibition with trastuzumab IgG. To this end, cells were treated with either CXCL12-Tras or trastuzumab IgG for 5 days, after which cell viability was determined using a modified MTT assay. A reduction in cell viability was observed at higher concentrations of CXCL12-Tras and was significantly greater than with trastuzumab IgG alone .
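As a minimal sketch of the kind of fold-change and significance filter used to call proteins "significantly changed" in comparisons like the one above, the snippet below applies the thresholds stated later in the Methods (>2-fold change versus PBS control, P < 0.01); the data frame, column names, and values are hypothetical and purely illustrative.

```python
# Hypothetical sketch of a fold-change / p-value filter for SILAC ratios.
# Thresholds mirror those stated in the Methods (>2-fold vs. PBS control,
# P < 0.01); the data below are made up for illustration.
import numpy as np
import pandas as pd

def flag_significant(df, fc_col="ratio_vs_pbs", p_col="p_value",
                     min_fold_change=2.0, alpha=0.01):
    """Return proteins passing both the fold-change and p-value cutoffs."""
    log2fc = np.log2(df[fc_col])
    mask = (log2fc.abs() >= np.log2(min_fold_change)) & (df[p_col] < alpha)
    return df.loc[mask]

proteins = pd.DataFrame({
    "gene":         ["EGFR", "GRB2", "SHC1", "CXCR4"],
    "ratio_vs_pbs": [0.18,    0.95,   1.10,   1.02],   # treated / control, illustrative
    "p_value":      [1e-5,    0.40,   0.55,   0.70],
})
print(flag_significant(proteins))   # only EGFR passes both cutoffs here
```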

These viability data demonstrate that KineTAC-mediated degradation has functional consequences in reducing cancer cell viability in vitro and highlight that KineTACs could provide advantages over traditional antibody therapeutics, which bind but do not degrade. Finally, we asked whether KineTACs would have similar antibody clearance to IgGs in vivo. To this end, male nude mice were injected intravenously with 5, 10, or 15 mg/kg CXCL12-Tras, which is a typical dose range for antibody xenograft studies. Western blotting analysis of plasma antibody levels revealed that the KineTAC remained in plasma up to 10 days post-injection with a half-life of 8.7 days, which is comparable to the reported half-life of IgGs in mice . Given the high homology between human and mouse CXCL12, we tested whether the human CXCL12 isotype could be cross-reactive. Human CXCL12 isotype binding to the mouse cell lines MC38 and CT26, which endogenously express mouse CXCR7, was confirmed . Together, these results demonstrate that KineTACs have favorable stability and are not rapidly cleared despite cross-reactivity with mouse CXCR7 receptors. Since atezolizumab is also known to be cross-reactive, the ability of CXCL12-Atz to degrade mouse PD-L1 was tested in both MC38 and CT26. Indeed, CXCL12-Atz mediated near-complete degradation of mouse PD-L1 in both cell lines . Thus, PD-L1 degradation may serve as an ideal mouse model to assay the efficacy of KineTACs in vivo. Having demonstrated the ability of KineTACs to mediate cell surface protein degradation, we next asked whether KineTACs could also be applied towards the degradation of soluble extracellular proteins. Soluble ligands, such as inflammatory cytokines and growth factors, have been recognized as an increasingly important therapeutic class.
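As a rough illustration of how a plasma half-life such as the one reported above can be estimated from serial band intensities, the sketch below fits a first-order (log-linear) decay. The sampling days mirror the schedule given in the Methods, while the intensity values, and therefore the fitted numbers, are hypothetical rather than the study's data.

```python
# Hedged sketch: first-order (log-linear) fit of plasma band intensities to
# estimate an antibody half-life. The intensities are made-up values chosen
# only to illustrate the arithmetic, not measured data.
import numpy as np

days      = np.array([0.0, 3.0, 5.0, 7.0, 10.0])        # sampling days
intensity = np.array([1.00, 0.79, 0.67, 0.57, 0.45])    # normalized band intensity (illustrative)

slope, intercept = np.polyfit(days, np.log(intensity), 1)   # ln(I) = ln(I0) - k*t
k = -slope                                                   # elimination rate constant (1/day)
print(f"estimated half-life ~ {np.log(2) / k:.1f} days")
```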

Of these soluble ligands, vascular endothelial growth factor and tumor necrosis factor alpha are the most frequently targeted by antibody and small-molecule drug candidates, highlighting their importance in disease. Thus, we chose VEGF and TNFa as ideal proof-of-concept targets to determine whether KineTACs could be expanded to degrading extracellular soluble ligands . First, we targeted VEGF by incorporating bevacizumab , an FDA-approved VEGF inhibitor, into the KineTAC scaffold . Next, HeLa cells were incubated with VEGF-647 or VEGF-647 and CXCL12-Beva for 24 hr. Following treatment, flow cytometry analysis showed a robust increase in cellular fluorescence when VEGF-647 was co-incubated with CXCL12-Beva, but not with the bevacizumab isotype, which lacks the CXCL12 arm . To ensure that the increased cellular fluorescence was due to intracellular uptake of VEGF-647 and not surface binding, we determined the effect of an acid wash, which removes any cell surface binding, after the 24 hr incubation . We found that there was no significant difference in cellular fluorescence levels between acid-washed and normally washed cells. These data suggest that KineTACs successfully mediate the intracellular uptake of extracellular VEGF. Similar to membrane protein degradation, KineTAC-mediated uptake of VEGF occurs in a time-dependent manner, with robust internalization occurring before 6 hrs and reaching steady state by 24 hrs . Furthermore, the levels of VEGF uptake are dependent on the KineTAC:ligand ratio and saturate at ratios greater than 1:1 . We next tested the ability of CXCL12-Beva to promote uptake in other cell lines and found that these cells also significantly take up VEGF . Moreover, the extent of uptake is correlated with the transcript levels of CXCR7 in these cells . These data suggest that KineTACs directed against soluble ligands can promote broad tissue clearance of these targets as compared to glycan- or Fc-mediated clearance mechanisms. To demonstrate the generalizable nature of the KineTAC platform for targeting soluble ligands, we next targeted TNFa by incorporating adalimumab , an FDA-approved TNFa inhibitor, into the KineTAC scaffold . Following 24 hr treatment of HeLa cells, a significant increase in cellular fluorescence was observed when TNFa-647 was co-incubated with CXCL12-Ada compared to the adalimumab isotype .

Consistent with the VEGF uptake experiments, acid wash did not alter the level of cellular fluorescence increase observed, and uptake was dependent on the KineTAC:ligand ratio . Thus, KineTACs are generalizable in mediating the intracellular uptake of soluble ligands, significantly expanding the target scope of KineTAC-mediated targeted degradation. In summary, our data suggest that KineTACs are a versatile and modular targeted degradation platform that enables robust lysosomal degradation of both cell surface and extracellular proteins. We find that KineTAC-mediated degradation is driven by recruitment of both CXCR7 and the target protein, and that factors such as binding affinity, epitope, and construct design can affect efficiency. Other factors, such as signaling competence and pH dependency for the protein of interest, did not impact degradation for CXCL12-bearing KineTACs. These results provide valuable insights into how to engineer effective KineTACs going forward. Furthermore, we show that KineTACs operate in a time-, lysosome-, and CXCR7-dependent manner and are exquisitely selective in degrading target proteins with minimal off-target effects. Initial experiments with an alternative cytokine, CXCL11, highlight the versatility of the KineTAC platform and the exciting possibility of using various cytokines and cytokine receptors for targeted lysosomal degradation. KineTACs are built from simple genetically encoded parts that are readily accessible from the genome and published human antibody sequences. Given the differences in selectivity and target scope that we and others have observed between degradation pathways, there is an ongoing need to co-opt novel receptors for lysosomal degradation, such as CXCR7, that may offer advantages in terms of tissue selectivity or degradation efficiency. Thus, we anticipate ongoing work on the KineTAC platform to offer new insights into which receptors can be hijacked and to greatly expand targeted protein degradation to the extracellular proteome for both therapeutic and research applications. SILAC proteomics data were analyzed using PEAKS Online . For all samples, searches were performed with a precursor mass error tolerance of 20 ppm and a fragment mass error tolerance of 0.03 Da. For whole cell proteome data, the reviewed SwissProt database for the human proteome was used. For surface-enriched samples, a database composed of SwissProt proteins annotated "membrane" but not "nuclear" or "mitochondrial" was used to ensure accurate unique peptide identification for surface proteins, as previously described. Carbamidomethylation of cysteine was used as a fixed modification, whereas the isotopic labels for arginine and lysine, acetylation of the N-terminus, oxidation of methionine, and deamidation of asparagine and glutamine were set as variable modifications. Only PSMs and protein groups with an FDR of less than 1% were considered for downstream analysis. SILAC analysis was performed using the forward and reverse samples, and at least 2 labels for the ID and features were required. Proteins showing a >2-fold change from PBS control with a significance of P<0.01 were considered to be significantly changed. Cell viability assays were performed using a modified MTT assay. In brief, on day 0, 15,000 MDA-MB-175VII cells were plated in each well of a 96-well plate. On day 1, bispecifics or control antibodies were added in a dilution series. Cells were incubated at 37ºC under 5% CO2 for 5 days.
On day 6, 40 µL of 2.5 mg/mL thiazolyl blue tetrazolium bromide was added to each well and incubated at 37ºC under 5% CO2 for 4 hrs. 100 µL of 10% SDS in 0.01M HCl was then added to lyse cells and release MTT product.

After 4 hrs at room temperature, absorbance at 600 nm was quantified using an Infinite M200 PRO plate reader . Data were plotted using GraphPad Prism software and curves were generated using non-linear regression with sigmoidal 4PL parameters. Male nude nu/nu mice were treated with 5, 10, or 15 mg/kg CXCL12-Tras via intravenous injection . Blood was collected from the lateral saphenous vein using EDTA capillary tubes at day 0 prior to intravenous injection and at days 3, 5, 7, and 10 post-injection. Plasma was separated after centrifugation at 700 x g at 4ºC for 15 min. To determine the levels of CXCL12-Tras, 1 µL of plasma was diluted into 30 µL of NuPAGE LDS sample buffer, loaded onto a 4-12% Bis-Tris gel, and run at 200 V for 37 min. The gel was incubated in 20% ethanol for 10 min and transferred onto a polyvinylidene difluoride membrane. The membrane was washed with water followed by incubation for 5 min with REVERT 700 Total Protein Stain . The blot was then washed twice with REVERT 700 Wash Solution and imaged using an Odyssey CLx Imager . The membrane was then blocked in PBS with 0.1% Tween-20 + 5% bovine serum albumin for 30 min at room temperature with gentle shaking. Membranes were incubated overnight with 800 CW goat anti-human IgG at 4ºC with gentle shaking in PBS + 0.2% Tween-20 + 5% BSA. Membranes were washed four times with tris-buffered saline + 0.1% Tween-20 and then washed with PBS. Membranes were imaged using an Odyssey CLx Imager . Band intensities were quantified using Image Studio Software . The concept of targeted degradation has emerged in the last two decades as an attractive alternative to conventional inhibition. Small-molecule inhibitors primarily work through occupancy-driven pharmacology, resulting in temporary inhibition in which the therapeutic effect is largely dependent on high potency. On the other hand, PROteolysis TArgeting Chimeras (PROTACs) utilize event-driven pharmacology to degrade proteins in a catalytic manner. Traditionally, PROTACs are heterobifunctional small molecules composed of a ligand binding a protein of interest chemically linked to a ligand binding an E3 ligase. The recruitment of an E3 ligase enables the transfer of ubiquitin onto the protein of interest, which is subsequently polyubiquitinated and recognized by the proteasome for degradation . In many cases, PROTACs have proven more efficacious than the corresponding small-molecule inhibitors alone, and several candidate PROTACs have progressed to clinical trials for treating human cancers and other diseases. Despite these successes, small-molecule PROTACs are largely limited to targeting intracellular proteins. Given this challenge, there is a need for novel technologies that expand the scope of targeted degradation to membrane proteins. Recently, our lab has developed a method termed antibody-based PROTACs (AbTACs), which utilize bispecific antibody scaffolds to bring membrane-bound E3 ligases into proximity to a membrane protein of interest for targeted degradation. Thus far, AbTACs have shown success in using bispecific IgGs to recruit the E3 ligase RNF43 to programmed death ligand 1 for efficient lysosomal degradation. These data suggest that it is possible to use bispecific antibodies to degrade membrane proteins for which antibodies already exist or that have characteristics amenable to recombinant antibody selection strategies. However, the ability to degrade multipass membrane proteins, such as GPCRs, remains challenging because few antibodies that bind extracellular epitopes exist for this target class.
Here, we describe a novel approach to expand the scope of AbTACs to targeting multi-pass membrane proteins. This approach, termed antibody-drug conjugate PROTACs , comprises an antibody targeting a cell surface E3 ligase chemically conjugated to a small molecule that specifically binds the protein of interest .
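For readers unfamiliar with the sigmoidal 4PL regression mentioned above for the viability curves, the following is a minimal sketch of a four-parameter logistic fit; the concentrations, viability values, and starting guesses are hypothetical and do not reproduce the reported curves.

```python
# Hedged sketch of a four-parameter logistic (4PL) dose-response fit, the
# same family of curve mentioned for the MTT viability analysis. All data
# and parameter values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Standard 4PL: response as a function of concentration x."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

conc      = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])   # nM, made up
viability = np.array([0.98, 0.95, 0.85, 0.60, 0.42, 0.38])    # fraction of control, made up

p0 = [viability.min(), viability.max(), 10.0, 1.0]            # rough initial guesses
(bottom, top, ic50, hill), _ = curve_fit(four_pl, conc, viability, p0=p0, maxfev=10000)
print(f"fitted IC50 ~ {ic50:.1f} nM, Hill slope ~ {hill:.2f}")
```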

VLE identified focal areas of concern in 77% of BE procedures

All patients underwent standard-of-care endoscopy, including WLE in accordance with their institution's standard procedures, followed by VLE examination. Sample VLE features relevant to normal and abnormal structures in the esophagus were used as a general guideline to interpret VLE images in the study . Investigators were trained on the use of the technology and supported as needed onsite and offsite by technical experts from the sponsor throughout the study. VLE scans were registered longitudinally and rotationally with the WLE image of the esophagus. When a lesion was identified on VLE, the investigator would triangulate the location of the lesion by recording the distance and clock face registered with the WLE orientation. This information was then used to guide the investigator to acquire the tissue using WLE. At the time of the study, this was the method that was available to target a tissue site for sampling. Additional procedure details can be found in Supplementary Material A. Following VLE, each investigator performed any desired diagnostic or therapeutic actions based on their standard of care according to WLE and advanced imaging findings. The highest grade of disease on the pathology results was recorded for advanced imaging-guided tissue acquisition, targeted endoscopic tissue acquisition, and random biopsies. VLE-guided tissue acquisition refers to the subgroup of advanced imaging-guided tissue biopsy or resection specimens where only VLE imaging was used to identify the areas of interest. Investigators were given a post-procedure questionnaire and data were collected on the clinical workflow and utility of the VLE images. The questions included whether VLE guided either their tissue sampling or therapeutic decisions for each patient, and whether VLE identified suspicious areas not seen on WLE or other advanced imaging modalities.

Descriptive statistics were used for quantitative analyses in the study. Because the vast majority of registry patients had suspected or confirmed BE, the investigators elected to focus the initial analysis on this group and to assess potential roles of VLE in BE management. Suspected BE refers to patients with no prior histologic confirmation of BE who had salmon-colored mucosa found on endoscopic examination with WLE. The analysis focused on the incremental diagnostic yield improvement of VLE as an adjunct modality on top of standard-of-care practice. Procedures with confirmed neoplasia were included in the analysis. The procedures were divided into subgroups according to whether the tissue acquisition method was VLE-targeted. Dysplasia diagnostic yields were calculated using the number of procedures in each subgroup and the total number of procedures in patients with previously diagnosed or suspected BE. Negative predictive value analysis in patients with prior BE treatment evaluated the utility of VLE on top of standard-of-care surveillance to predict when there is no dysplasia present. Procedures with negative endoscopy findings and negative VLE findings but with tissue acquisition performed were included in the analysis, and NPVs for both SoC and SoC + VLE were calculated. The primary evaluation focused on HGD and cancer since the recommended image interpretation criteria were validated for detecting BE-related neoplasia, and treatment is recommended for patients with neoplasia per existing guidelines. From August 2014 through April 2016, 1000 patients were enrolled across 18 trial sites . The majority of patients were male , with a mean age of 64 years . A total of 894 patients had suspected or confirmed BE at the time of enrollment, including 103 patients with suspected BE and 791 patients with prior histological confirmation. Of the confirmed BE patients, 368 had BE with neoplasia, 170 had BE with low-grade dysplasia , 49 had BE indefinite for dysplasia , and 204 had nondysplastic BE .

A total of 56% of patients had undergone prior endoscopic or surgical interventions for BE, including RFA, Cryo, and EMR . Post-procedure questionnaires were completed for all procedures in patients with previously diagnosed or suspected BE . In over half of the procedures, investigators identified areas of concern not seen on either WLE or other advanced imaging modalities. Both VLE and endoscopic BE treatment were performed in 352 procedures. VLE guided the intervention in 52% of these procedures. In 40% of procedures, the depth or extent of disease identified on VLE aided the selection of a treatment modality. Neoplasia was confirmed on tissue sampling performed in 76 procedures within the cohort of patients with previously diagnosed or suspected BE . Among these procedures, VLE-guided tissue acquisition alone found neoplasia in 26 procedures , with an additional case where HGD on random forceps biopsy was upstaged to IMC on VLE-targeted sampling. Histology from these procedures included 16 HGD, 5 IMC, and 6 EAC. Thus, VLE-guided tissue acquisition as an adjunct to standard practice detected neoplasia in an additional 3% of the entire cohort of patients with previously diagnosed or suspected BE, and improved the diagnostic yield by at least 55% . Of the 894 BE patients, 393 had no prior history of esophageal therapy. Mean Prague classification scores for this cohort were C = 2.3 cm , M = 4.1 cm . In 199 of these treatment-naïve patients, VLE identified at least one focally suspicious area not appreciated during either WLE or other advanced imaging evaluation. Neoplasia was confirmed on histology in 24 procedures . In a subset of these procedures, VLE alone identified neoplasia, as all random biopsies for these patients were negative. Additionally, one case where HGD was found on random forceps biopsy was upstaged to IMC on VLE-targeted sampling. In this group, VLE-guided tissue acquisition increased neoplasia detection by 700% . For these untreated BE patients, VLE-guided tissue acquisition as an adjunct to standard practice detected neoplasia in an additional 5.3% of procedures .

The number needed to test with VLE to identify neoplasia not detected with the standard-of-care technique was 18.7. An average of 1.7 additional sites per patient required targeted tissue acquisition when suspected regions were identified using VLE, compared with an average of 11 random biopsies per patient. A sub-analysis was conducted in the 238 patients with prior BE treatment and either no visible BE or an irregular z-line. From this group, 82% had no focally suspicious findings on WLE examination; two of these procedures were subsequently diagnosed with neoplasia . Thus, the NPV of WLE for neoplasia was 99%. When combining WLE/NBI with VLE as an adjunct, we found that 49% of the post-treatment procedures had no suspicious WLE or VLE findings. Neoplasia was found in none of these procedures, corresponding to a negative predictive value of 100% . Advanced imaging techniques including high-definition WLE, NBI, CLE, and chromoendoscopy have continued to improve the evaluation of Barrett's esophagus. However, these provide only superficial epithelial evaluation. VLE breaks this boundary by imaging the mucosa, submucosa, and frequently, down to the muscularis propria. It does so while evaluating a large tissue area in a short period of time without sacrificing resolution. This 1000-patient multi-center registry assessed the clinical utility of VLE for the management of esophageal disorders and has demonstrated its potential as an adjunct tool for detecting disease. Abnormalities that were not seen with other imaging were found on VLE in over half of the procedures. Endoscopists using VLE in this study felt that it guided tissue acquisition in over 70% of procedures and BE treatment in the majority of procedures where interventions were performed. VLE visualization of subsurface tissue structures allows comprehensive morphological evaluation, with physicians reporting suspicious areas seen only on VLE in more than half of the procedures in which other advanced imaging modalities were also used. Although subjective, these results still provide useful insight into the physicians' perception of the technology. This study found that VLE as an adjunct modality increased neoplasia diagnosis by 3%, and improved the neoplasia diagnostic yield by 55% over standard practice and other advanced imaging modalities. For a treatment-naïve population with no focally suspicious regions found on WLE, VLE-guided tissue acquisition improved the neoplastic diagnostic yield by 700%. This finding is impressive, particularly as these procedures were performed prior to the release of a real-time laser marking system.
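To make the adjunctive-yield, number-needed-to-test, and negative-predictive-value arithmetic used in this analysis explicit, the following is a hedged sketch with placeholder counts; it illustrates the definitions only and does not reproduce the registry data.

```python
# Hedged sketch of the yield, NNT, and NPV definitions used above.
# All counts below are placeholders, not the registry results.
def adjunctive_yield(extra_detected: int, total_procedures: int) -> float:
    """Fraction of procedures where the adjunct modality alone found neoplasia."""
    return extra_detected / total_procedures

def number_needed_to_test(additional_yield: float) -> float:
    """Procedures that must be examined with the adjunct to find one extra case."""
    return 1.0 / additional_yield

def npv(true_negatives: int, false_negatives: int) -> float:
    """Negative predictive value: P(no neoplasia | negative examination)."""
    return true_negatives / (true_negatives + false_negatives)

extra = adjunctive_yield(extra_detected=5, total_procedures=100)   # placeholder counts
print(f"additional yield = {extra:.1%}, NNT ~ {number_needed_to_test(extra):.1f}")
print(f"NPV = {npv(true_negatives=95, false_negatives=1):.1%}")
```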

Laser marking has since been evaluated by Alshelleh et al., who found a statistically significant improvement in neoplasia yield using the VLE laser marking system compared to the standard Seattle protocol. In this registry, an additional 2.3 sites per patient on average required guided biopsy or resection when suspected regions were identified using VLE, while an average of 15.8 random biopsies per patient were performed in the cohort of patients with previously diagnosed or suspected BE . In general, higher tissue sampling density leads to an increased chance of detecting dysplasia due to its focal nature; therefore, taking additional biopsies should increase the diagnostic yield. However, the potential for advanced imaging such as VLE to provide targeted, high-yield biopsies could reduce the total number of biopsies necessary to adequately evaluate the diseased mucosa with the Seattle protocol. The combination of a focally unremarkable WLE and VLE examination provided a negative predictive value of 100% for neoplasia in the post-treatment population. Although not reaching statistical significance due to the limited sample size, these early results provide promise for the utility of VLE to better predict when there is no disease present, i.e. a 'clean scan.' Such a tool could then potentially allow for extended surveillance intervals, reducing the number of endoscopies needed to manage the patient's needs. The utility of this analysis is subject to several limitations. As a post-market registry study, there was no defined protocol for imaging, image interpretation, and tissue acquisition, and there was no control group for matched population comparisons. The early experience of users with VLE image interpretation may have resulted in overcalling areas of concern. Abnormalities located deeper in the esophageal wall could be targeted with forceps biopsies at one site, while other sites would utilize endoscopic resection techniques that are more likely to remove the target. All of these discrepancies could affect any calculations regarding the adjunctive yield of VLE-targeted sampling. Further analysis of the global detection rate of dysplasia by site did not reveal any statistical difference. At the time of this study, image interpretation was performed using previously published guidelines for detection of neoplasia in Barrett's esophagus with OCT. Challenges with the histopathological diagnosis of LGD limited the development of VLE criteria for LGD. As such, the analyses in this study focused on neoplasia. Current guidelines suggest that treatment of LGD is acceptable, so detection of LGD with VLE should be addressed in a future study. Additionally, the characteristic image features that maximize the sensitivity and specificity of confirmatory biopsies must be optimized. Recently, Leggett et al. established an updated step-wise diagnostic algorithm to detect dysplasia based on similar VLE features to those used in this study. This diagnostic algorithm achieved 86% sensitivity, 88% specificity, and 87% diagnostic accuracy for detecting BE dysplasia, with almost perfect interobserver agreement among three raters . Further optimization of VLE image features for identifying dysplasia and neoplasia is ongoing . Other limitations of the study include the lack of central pathology for interpretation of specimens, which could affect the reported benefit of VLE in finding dysplasia. However, this manuscript focuses on neoplasia, where there is less interobserver variability compared with low-grade dysplasia.
Finally, as a non-randomized study conducted mostly at large BE referral centers with a possibly higher pre-test probability of neoplasia, the findings may have limited validity in a community setting. However, the large sample size, the heterogeneity of the cohort, and the variation in technique by site likely restore at least some of the external validity of the findings. This registry-based study demonstrates the potential for VLE to fill clinically relevant gaps in our ability to evaluate and manage BE. Physicians perceived significant value in VLE across the BE surveillance and treatment paradigm. Biopsy confirmation demonstrated benefits of VLE for both treatment-naïve and post-treatment surveillance, although pathology results did not always align with physician perception, most likely due to limitations of the technology and image criteria at the time of the study. Given the expected refinement and validation of image interpretation, and the availability of laser marking for more accurate biopsy targeting, VLE is well positioned to enhance our ability to identify and target advanced disease and enable a more efficient endoscopic examination with a higher yield of tissue acquisition.

The program included nine in-class nutrition lessons coordinated with garden activities

These spheres of influence are multifaceted and include factors such as income, ethnicity, and cultural values, as well as settings such as schools and retail food establishments. Consequently, measurable progress in reducing childhood obesity requires a multifaceted approach: a coordinated, comprehensive program that integrates messages regarding nutrition, physical activity and health with a child's immediate environment and surrounding community . Adequate access to healthy food and physical recreation opportunities is essential to promote sustained behavior changes . Schools and after-school programs provide a unique setting for this approach, as they provide access to children, parents, families, educators, administrators and community members . The purpose of this article is to examine garden-enhanced nutrition education and Farm to School programs. Further, a questionnaire was developed and distributed to UC Cooperative Extension advisors and directors to assess their role in garden-enhanced nutrition education and Farm to School programs. Results from this questionnaire highlight UCCE's integral role in this field. School gardens were first implemented in the United States at the George Putnam School in Roxbury, Massachusetts, in 1890, and by 1918 there was at least one in every state . During World Wars I and II, more than a million children were contributing to U.S. food production with victory gardens, which were part of the U.S. School Garden Army Program . More recently, incorporating gardens into the educational environment has become more popular worldwide, due partly to the appreciation of the importance of environmental awareness and integrated learning approaches to education .

As the agricultural powerhouse of the nation , California is poised to serve as a model for agriculture-enhanced nutrition and health education. Within California, the impetus to establish gardens in every school gained momentum in 1995, when then-State Superintendent of Public Instruction Delaine Eastin launched an initiative to establish school gardens as learning laboratories or outdoor classrooms . Assembly Bill 1535 created the California Instructional School Garden Program, allowing the California Department of Education to allocate $15 million for grants to promote, develop and sustain instructional school gardens. About 40% of California schools applied for these grants, and $10.9 million was awarded . It has been repeatedly shown that garden-enhanced nutrition education has a positive effect on children's fruit and vegetable preferences and intakes . For example, after a 17-week standards-based, garden-enhanced nutrition education program, fourth-grade students preferred a greater variety of vegetables than did control students; among other things, students learned that plants and people need similar nutrients. Many of these improvements persisted and were maintained at a 6-month follow-up assessment . In a similar study of a 12-week program combining nutrition lessons with horticulture, sixth-grade students likewise improved their vegetable preferences and consumption . In addition, after a 13-week garden-enhanced nutrition program, middle school children ate a greater variety of vegetables than they had initially . While garden-enhanced nutrition education is one innovative method to improve children's vegetable preferences and intake, researchers and educators consistently call for multi-component interventions to have the greatest impact on student health outcomes. Suggested additional components include classroom education, Farm to School programs, healthy foods available on campus, family involvement, school wellness policies and community input .

Moreover, the literature indicates that providing children with options to make healthy choices rather than imposing restrictions has long-term positive effects on weight . Taken together, it is reasonable to suggest that we are most likely to achieve long-lasting beneficial changes by coordinating a comprehensive garden-enhanced nutrition education program with school wellness policies, offering healthy foods on the school campus, fostering family and community partnerships, and incorporating regional agriculture. Farm to School programs connect K-12 schools and regional farms, serving healthy, local foods in school cafeterias or classrooms. General goals include improving student nutrition; providing agricultural, health and nutrition education opportunities; and supporting small and mid-sized local and regional farms . Born through a small group of pilot projects in California and Florida in the late 1990s, Farm to School is now offered in all 50 states, with more than 2,000 programs nationwide in 2010 . The dramatic increase in the number and visibility of Farm to School programs can likely be attributed to factors including heightened public awareness of childhood obesity, expanding access to local and regional foods in school meals, and concerns about environmental and agricultural issues as well as the sustainability of the U.S. food system. Farm to School programs provide a unique opportunity to address both nutritional quality and food system concerns. From a nutrition and public health standpoint, these programs improve the nutritional quality of meals served to a large and diverse population of children across the country. From a food systems and economic perspective, Farm to School programs connect small and mid-sized farms to the large, stable and reliable markets created by the National School Lunch Program .

Farm to School programs require partnerships that include a state or community organization, a local farmer or agricultural organization, a school nutrition services director, and parents. Historically, Farm to School programs are driven, supported and defined by a community. Because they reflect the diverse and unique communities they serve, individual Farm to School programs also vary from location to location, in addition to sharing the characteristics described above. The first national Farm to School programs were initiated in 2000 and soon gained momentum in California, with support from the USDA Initiative for Future Agriculture and Food Systems as well as the W.K. Kellogg Foundation. In 2005, Senate Bill 281 established the California Fresh Start Program to encourage and support additional portions of fresh fruits and vegetables in the School Breakfast Program. This bill also provided the California Department of Education with $400,000 for competitive grants to facilitate developing the California Fresh Start Program . Concomitant with the growth of Farm to School programs, the National Farm to School Network was formed in 2007 with input from over 30 organizations and today engages food service, agricultural and community leaders in all 50 states. The evolution of this network has influenced school food procurement and nutrition/food education nationwide . Evaluations of Farm to School impact have been conducted since the program's inception. A 2008 review of 15 Farm to School evaluation studies, which were conducted between 2003 and 2007, showed that 11 specifically assessed Farm to School–related dietary behavior changes . Of these 11 studies, 10 corroborated the hypothesis that increased exposure to fresh Farm to School produce results in positive dietary behavior changes. In addition, a 2004-2005 evaluation of plate waste at the Davis Joint Unified School District salad bar showed that 85% of students took produce from the salad bar and that 49% of all selected salad bar produce was consumed . Additionally, school record data demonstrate that throughout the 5 years of the 2000-to-2005 Farm to School program, overall participation in the school lunch program ranged from a low of 23% of enrollment to a high of 41%, with an overall average of 32.4%. This compared to 26% participation before salad bars were introduced. Overall participation in the hot lunches averaged 27% of enrollment . While Farm to School evaluations generally indicate positive outcomes , conclusive statements regarding the overall impact of such programs on dietary behavior cannot be made. This can be attributed to the substantial variation in Farm to School structure from district to district, and variation in the study design and methodologies of early program evaluations. Methods for evaluating dietary impact outcomes most commonly include using National School Lunch Program participation rates and food production data as proxies for measuring consumption.

Additional evaluation methods include using self-reported measures of consumption such as parent and student food recalls or frequency questionnaires, and direct measures of consumption such as school lunch tray photography and plate waste evaluation. There are relatively few studies using an experimental design to evaluate the impact of Farm to School programs on fruit and vegetable intake, and even fewer of these studies use controls. Moreover, the Farm to School evaluation literature has no peer-reviewed dietary behavior studies using a randomized, controlled experimental design, which is undoubtedly due to the complex challenges inherent in community research. For example, schools may view the demands of research as burdensome or may question the benefits of serving as control sites. Due partly to its year-round growing season, California has more Farm to School programs than most, if not all, states. UC Davis pioneered some of the early uncontrolled studies quantifying Farm to School procurement, costs and consumption. UC ANR is now conducting new controlled studies to collect more rigorous data, which will differentiate outcomes of Farm to School programs from those due to other environmental factors. To clarify the role of UC ANR in garden-based nutrition education and Farm to School programs, a questionnaire was developed and administered through Survey Monkey in November 2011. This survey was sent to 60 UCCE academic personnel, including county directors; Nutrition, Family and Consumer Sciences advisors; 4-H Youth Development advisors; and others. For the purposes of this questionnaire, Farm to School was broadly defined as a program that connects K-12 schools and local farms and has the objectives of serving healthy meals in school cafeterias; improving student nutrition; providing agriculture, health and nutrition education; and supporting local and regional farmers. Survey. A cover letter describing the purpose of the survey and a link to the questionnaire were emailed to representatives from all UCCE counties. The questionnaire was composed of 26 items that were either categorical "yes/no/I'm not sure" questions or open-ended questions allowing for further explanation. An additional item was provided at the end of the questionnaire for comments. Respondents were instructed to return the survey within 11 days. A follow-up email was sent to all participants after 7 days. This protocol resulted in a 28% response rate, typical for a survey of this kind. Respondents represented 21 counties, with some representing more than one county; in addition, one was a representative from a campus-based unit of ANR. Questionnaire respondents included three county directors, six NFCS advisors, four 4-HYD advisors, one NFCS and 4-HYD advisor, and three other related UCCE academic personnel . The responding counties were Riverside, San Mateo and San Francisco; San Bernardino, Stanislaus and Merced; Contra Costa, Yolo, Amador, Calaveras, El Dorado and Tuolumne; Mariposa, Butte, Tulare, Alameda, Shasta-Trinity, Santa Clara, Ventura and Los Angeles. Farm to School and school gardens. All 21 counties responding to the survey reported that they had provided a leadership role in school gardens, after-school gardens and/or Farm to School programs during the previous 5 years . Five out of 17 respondents reported that their counties provided a leadership role in Farm to School programs.
Fourteen out of 17 respondents indicated that they individually played a leadership role in school garden programs, including serving as a key collaborator on a project, organizing and coordinating community partners, acting as school/agriculture stakeholders and/or serving as a principal investigator, co-principal investigator or key collaborator on a research study. The most frequently reported reasons for having school and after-school gardens were to teach nutrition, enhance core academic instruction and provide garden produce . Additional reasons cited in the free responses included to study the psychological impacts of school gardens, enhance science and environmental education, teach composting, increase agricultural literacy, teach food origins, participate in service learning and provide a Gardening Journalism Academy. Reasons for success. The factors most frequently cited as contributing to successful school and after-school garden and Farm to School programs were community and nonparent volunteers, outside funding and enthusiastic staff . The 17 respondents indicated that the success of these programs was also aided by the multidisciplinary efforts within UC ANR , the Farm Bureau, the Fair Board and 4-H Teens as Teachers. Barriers. The most common factors cited as barriers to school and after-school gardens and Farm to School programs were lack of time and lack of knowledge and experience among teachers and staff . Additional barriers included lack of staff, cutbacks, competing programs for youth and lack of after-school garden-related educational materials for mixed-age groups. With regard to Farm to School programs, one respondent perceived increased expense to schools, an absence of tools to link local farmers with schools, a lack of growers and a lack of appropriate facilities in school kitchens.

Forward osmosis technology is also commonly used for food and drug processing

Areas with high N2O emissions tend to have relatively low oxygen concentrations due to the expansion of nutrient runoff from land. To diminish these negative environmental impacts, fertigation treatment could reduce the amount of nitrogen and other nutrients applied to the soil, prevent over-fertilization, and limit excess nutrient runoff to rivers. Forward osmosis has many advantages with regard to its physical footprint. A high wastewater recovery rate, minimized resupply, and low energy cost can facilitate the sustainability of forward osmosis. In addition, forward osmosis has a lower membrane fouling propensity compared to other pressure-driven membrane processes. Forward osmosis is usually applied as a pretreatment for reverse osmosis, and the total energy consumption of a combined FO and RO process is lower than that of reverse osmosis alone. Moreover, osmotic backwashing can be an effective way to clean the membrane while keeping energy consumption low. When nanofiltration serves as a post-treatment combined with fertilizer-drawn forward osmosis, it can recover the excess fertilizer and return it as a concentrated fertilizer draw solution. The energy consumption of FDFO brackish water recovery using cellulose triacetate membranes is affected by draw solution concentration , flow rates , and membrane selection. Membrane orientation and the flow rates have a minor effect on specific energy consumption compared to draw solution concentration. A diluted fertilizer draw solution can boost the system's performance, while a higher draw solution concentration can lower the specific energy consumption.
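As a point of reference for the specific energy consumption discussed here, the sketch below shows the usual definition for a pump-driven FO system, namely total pumping power per unit volume of permeate; the pump powers and permeate flow are illustrative values, not measurements from any particular system.

```python
# Hedged sketch of specific energy consumption (SEC) for a pump-driven FO
# system: total pump power [kW] divided by permeate flow [m3/h] gives kWh/m3.
# All numbers are illustrative.
def specific_energy_consumption(pump_powers_kw, permeate_flow_m3_per_h):
    """SEC in kWh per cubic metre of permeate produced."""
    return sum(pump_powers_kw) / permeate_flow_m3_per_h

# Feed and draw recirculation pumps plus an assumed permeate rate:
sec = specific_energy_consumption(pump_powers_kw=[0.6, 0.4], permeate_flow_m3_per_h=2.0)
print(f"SEC ~ {sec:.2f} kWh/m3")
```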

Moreover, a lower flow rate combined with a higher draw solution concentration can reduce the energy consumption of fertilizer-drawn forward osmosis to its lowest level. Adding a nanofiltration step would increase the energy consumption of the system; however, nanofiltration is necessary for desalination and direct fertigation treatment. The energy consumption of the nanofiltration process is determined by operational factors such as recovery rate, membrane lifetime, and membrane cleaning. Forward osmosis technology achieves a 40-50% reduction in specific energy consumption compared to other alternatives. As a result, FO technology has the potential for wide adoption in drinking water treatment. Other areas of application for FO include seawater desalination/brine removal, direct fertigation, wastewater reclamation, and wastewater minimization. Without the draw solution recovery step, forward osmosis can be applied as osmotic concentration. For example, fertilizer-drawn forward osmosis is widely accepted for freshwater supply and direct fertigation. However, in terms of the evaporative desalination process, it is more practical to treat water with a lower total dissolved solids content/salinity. Forward osmosis technology can be combined with other treatment methods such as reverse osmosis, nanofiltration, or ultrafiltration for different water treatment purposes. To be more specific, forward osmosis can serve as an alternative pre-treatment in a conventional filtration/separation system ; as an alternative to a conventional membrane treatment system ; or as a post-treatment process to recycle the volume of excess waste . A standalone forward osmosis process is usually combined with additional post-treatment to meet the water quality standards for different purposes.

Forward osmosis has been extensively researched. In this review, we focused on fertilizer-drawn forward osmosis, which can not only remove brine but also reduce multiple nutrient inputs such as nitrogen, phosphorus, and potassium. Since a proper draw solution can reduce concentration polarization, draw solution selection is vital for both FO and FDFO processes. Moreover, different fertilizer draw solutions have different influences on energy consumption. The nutrient concentrations of treated water are controllable using the fertilizer-drawn forward osmosis treatment method. The composition of nutrients in the draw solution can be adjusted to produce water with different ratios of nutrients, which makes fertilizer-drawn forward osmosis a nearly ideal treatment method for direct fertigation. To reduce N2O emissions, the removal rate of nitrogen in fertigation water needs to be improved using fertilizer-drawn forward osmosis and nanofiltration. When nanofiltration is applied as a post-treatment with fertilizer-drawn forward osmosis, the nitrogen removal rate can reach up to 82.69% when using SOA as the draw solution. This figure shows that fertigation treatment can reach a higher standard of water quality by attenuating nitrogen concentrations. As a result, lower nitrogen input in fertigation can significantly decrease nitrous oxide emissions from the soil for sustainable agricultural use. Forward osmosis can also be combined with other treatment methods to help resolve the freshwater shortage problem. Beyond the traditional seawater desalination treatment incorporating forward osmosis and reverse osmosis, a hybrid process of reverse osmosis and fertilizer-drawn forward osmosis can remove the brine from water and lower the final nutrient concentration with a higher recovery rate. Lastly, the water flux, recirculation rate, draw solution concentration, membrane lifetime, and membrane cleaning can all be adjusted to minimize energy consumption as much as possible. In conclusion, FO and FDFO technologies are both environmentally friendly and economical options for desalination and fertigation.
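For clarity, the removal-rate arithmetic behind figures such as the nitrogen removal quoted above reduces to a one-line calculation; the feed and product concentrations below are illustrative only.

```python
# Hedged sketch: fractional removal of a solute across a treatment train.
# Concentrations are illustrative, not measured values.
def removal_rate(c_in_mg_per_l: float, c_out_mg_per_l: float) -> float:
    """Removal = 1 - C_out / C_in."""
    return 1.0 - c_out_mg_per_l / c_in_mg_per_l

print(f"N removal ~ {removal_rate(c_in_mg_per_l=40.0, c_out_mg_per_l=7.0):.1%}")
```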

Evapotranspiration (ET) estimation is important for precision agriculture, especially precision water management. Mapping ET temporally and spatially can identify variations in the field, which is useful for evaluating soil moisture and assessing crop water status. ET estimation can also benefit water resource management and weather forecasting. ET is a combination of two separate processes, evaporation and transpiration. Evaporation is the process whereby liquid water is converted to water vapor through latent heat exchange. Transpiration is the vaporization of liquid water contained in plant tissues and the removal of that vapor to the atmosphere. The current theory of transpiration comprises three steps. First, the conversion of liquid-phase water to vapor cools the canopy through latent heat exchange; thus, canopy temperature can be used as an indicator of ET. Second, water vapor diffuses from inside the plant stomata on the leaves to the surrounding atmosphere. Third, atmospheric mixing by convection or diffusion transports vapor near the plant surfaces to the upper atmosphere or off-site, away from the plant canopy. Usually, evaporation and transpiration occur simultaneously.

ET can be measured directly or estimated indirectly. Direct methods, however, are usually point-specific or area-weighted measurements and cannot be extended to large scales because of the heterogeneity of the land surface. The required equipment, such as lysimeters, is also costly and labor-intensive, and is available to only a small group of researchers. Indirect methods include energy balance methods and remote sensing methods. Among energy balance methods, the Bowen ratio and eddy covariance techniques have been widely used for ET estimation, but they too are area-weighted measurements. Remote sensing techniques can detect variations in vegetation and soil conditions over space and time, and have therefore been considered among the most powerful methods for mapping and estimating spatial ET over the past decades. Remote sensing models have been useful in accounting for the spatial variability of ET at regional scales when using satellite platforms such as Landsat and ASTER. Since satellite data became available, several remote sensing models have been developed to estimate ET, such as the surface energy balance algorithm for land, mapping evapotranspiration with internalized calibration, the dual temperature difference, and the Priestley–Taylor TSEB. Remote sensing techniques can provide information such as the normalized difference vegetation index (NDVI), leaf area index (LAI), surface temperature, and surface albedo; related research on these parameters has been discussed by various researchers. As a new remote sensing platform, small UAVs are of great interest for precision agriculture, especially for heterogeneous crops such as vineyards and orchards.
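Most of the energy balance and remote sensing models named above ultimately estimate the latent heat flux, the energy equivalent of ET, as the residual of the surface energy balance. In a stylized form (notation assumed here rather than taken from any particular model):

$$\lambda ET = R_n - G - H,$$

where $R_n$ is net radiation, $G$ is the soil heat flux, $H$ is the sensible heat flux (all in W m$^{-2}$), and $\lambda$ is the latent heat of vaporization used to convert $\lambda ET$ into an equivalent depth of evapotranspired water.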

UAVs overcome some of the remote sensing limitations faced by satellites. For example, satellite remote sensing is hindered by cloud cover, whereas UAVs fly below the clouds. Unlike satellites, UAVs can be operated at any time, provided the weather is within operating limitations. Satellites follow fixed paths; UAVs are more mobile and adaptable in site selection. Lightweight sensors mounted on UAVs, such as RGB cameras, multispectral cameras, and thermal infrared cameras, can collect high-resolution images. The higher temporal and spatial resolution, relatively low operational costs, and near-real-time image acquisition make UAVs an ideal platform for mapping and monitoring ET. Many researchers have already used UAVs for ET estimation, as shown in Table 1. For example, Ortega-Farías et al. implemented a remote sensing energy balance (RSEB) algorithm to estimate energy balance components in an olive orchard, such as incoming solar radiation, sensible heat flux, soil heat flux, and latent heat flux. Optical sensors mounted on a UAV provided high-spatial-resolution images. Using the UAV platform, the RSEB algorithm estimated latent heat flux and sensible heat flux with errors of 7% and 5%, respectively, demonstrating that UAVs can serve as an excellent platform for evaluating the spatial variability of ET in an olive orchard.

There are two objectives for this paper: first, to examine current applications of UAVs for ET estimation; second, to explore the current uses and limitations of UAVs, such as technical and regulatory restrictions, camera calibration, and data processing issues. Many other ET estimation methods, such as the surface energy balance index, the crop water stress index, the simplified surface energy balance index, and the surface energy balance system, have not yet been applied with UAVs and are therefore out of the scope of this article. This study is not intended to provide an exhaustive review of all direct or indirect methods developed for ET estimation. The rest of the paper is organized as follows. Section 2 introduces the UAV types used for ET estimation, compares several commonly used lightweight sensors, and discusses the ET estimation methods used with UAV platforms, as shown in Table 1. In Section 3, the results of different ET estimation methods and models are compared and discussed. Challenges and opportunities, such as thermal camera calibration, UAV path planning, and image processing, are discussed in Section 4. Lastly, the authors share their views on ET estimation with UAVs in future research and draw concluding remarks.

Many kinds of UAVs are used for different research purposes, including ET estimation; some popular platforms are shown in Figure 1. Typically, there are two types of UAV platform: fixed-wing and multirotor. Fixed-wing UAVs can generally fly longer, about 2 h, and carry a larger payload, which suits large fields. Multirotors can fly for about 30 min, which suits shorter flight missions. Both have been used in agricultural research and show great potential for ET estimation. Mounted on UAVs, many sensors can be used to collect imagery for ET estimation, such as multispectral and thermal images.
For example, the Survey 3 camera has four bands, blue, green, red, and near-infrared, with an image resolution of 4608 × 3456 pixels and a spatial resolution of 1.01 cm/pixel. The Survey 3 camera has a fast interval timer: 2 s in JPG mode and 3 s in RAW + JPG mode. A faster interval timer benefits the overlap design of UAV flight missions, for example by reducing flight time and enabling higher overlap. Another commonly used multispectral camera is the Rededge M. The Rededge M has five bands: blue, green, red, near-infrared, and red edge. It has an image resolution of 1280 × 960 pixels and a 46° field of view. With a Downwelling Light Sensor (DLS), a 5-band light sensor that connects to the camera, the Rededge M can measure the ambient light during a flight mission for each of the five bands and record this information in the metadata of the images captured by the camera. After camera calibration, the information detected by the DLS can be used to correct for lighting changes during a flight, such as changes in cloud cover. The thermal camera ICI 9640 P has been used for collecting thermal images, as reported in previous studies. This thermal camera has a resolution of 640 × 480 pixels, a spectral band from 7 to 14 µm, dimensions of 34 × 30 × 34 mm, and a design accuracy of ±2 °C. A Raspberry Pi Model B computer can be used to trigger the thermal camera during flight missions. The SWIR 640 P-Series, a shortwave infrared camera, can also be used for ET estimation. Its spectral band is from 0.9 µm to 1.7 µm, its accuracy is ±1 °C, and it has a resolution of 640 × 512 pixels.
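To illustrate why a faster interval timer matters for overlap design, the following sketch (Python, with hypothetical mission parameters; not taken from the cited studies) computes the maximum time between exposures that still achieves a desired forward overlap at a given ground sample distance.

```python
# Minimal sketch (hypothetical mission parameters; not taken from the cited
# studies): checking whether a camera's minimum trigger interval supports a
# desired forward overlap at a given ground sample distance (GSD).

def required_trigger_interval(gsd_m, along_track_px, forward_overlap, ground_speed_ms):
    """Maximum time between exposures (s) that still achieves the overlap.

    gsd_m: ground sample distance (m/pixel), e.g. 0.0101 for 1.01 cm/pixel
    along_track_px: image pixels along the flight direction
    forward_overlap: desired forward overlap as a fraction (e.g. 0.80)
    ground_speed_ms: UAV ground speed (m/s)
    """
    footprint_m = gsd_m * along_track_px                  # ground footprint along track
    advance_per_photo_m = footprint_m * (1.0 - forward_overlap)
    return advance_per_photo_m / ground_speed_ms

# Hypothetical example: 1.01 cm/pixel GSD, 3456 px along track, 80% forward
# overlap, 5 m/s ground speed -> roughly 1.4 s between exposures, so a 2 s
# JPG interval forces a slower ground speed or a lower overlap.
print(f"{required_trigger_interval(0.0101, 3456, 0.80, 5.0):.1f} s")
```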

Crop yields can also vary endogenously in response to demand and price changes.

Typically, such market-based land sector models allow for endogenous structural adjustments in land use, management, commodity production, and consumption in response to exogenous scenario drivers. However, with several components of the productivity parameters endogenously determined, it can be difficult to isolate the potential role of livestock efficiency changes due to technological breakthroughs or policy incentives. For example, as production decreases in response to falling demand, so could productivity. In this case, a design feature can become a design flaw for sensitivity analysis and policy assessment focused on individual key system parameters, even if model results can be further decomposed to disentangle endogenous and exogenous productivity contributions. Accounting-based land sector models, such as the FABLE Calculator employed in this study, can offer similarly detailed sector representation without the governing market mechanisms, thus allowing fully tunable parameters for exploring policy impacts. This feature facilitates quantifying uncertainty and bounding estimates through sensitivity analyses. The FABLE Calculator is a sophisticated land use accounting model that captures several of the key determinants of agricultural land use change and GHG emissions without the complexity of an optimization-based economic model. Its high degree of transparency and accessibility also makes it an appealing tool for stakeholder engagement.

This paper explores the impacts of healthier diets and increased crop yields on U.S. GHG emissions and land use, as well as how these impacts vary across assumptions about future livestock productivity and ruminant density in the U.S. We employ two complementary land use modeling approaches.

The first is the FABLE Calculator, a land use and GHG accounting model based on biophysical characteristics of the agricultural and land use sectors with detailed agricultural commodity representation. The second is a spatially explicit partial equilibrium optimization model for global land use systems. Combining these modeling approaches provides both a detailed representation of agricultural commodities with high flexibility in scenario design and a dynamic representation of land use in response to known economic forces, qualities that are difficult to achieve in a single model. Both frameworks allow us to project U.S. national-scale agricultural production, diets, land use, and carbon emissions and sequestration to 2050 under varying policy and productivity assumptions. Our work makes several advances in sustainability research. First, using agricultural and forestry models that capture market and intersectoral dynamics, this is the first non-LCA study to examine the sustainability of a healthier average U.S. diet. Second, using two complementary modeling approaches, this is the first study to explore the GHG and land use effects of the interaction between healthy diets and agricultural productivity. Specifically, we examined key assumptions about diet, livestock productivity, ruminant density, and crop productivity. Two of the key production parameters we consider, livestock productivity and stocking density, are affected by a transition to healthier diets but have not been extensively discussed in the agricultural economic modeling literature. Third, we isolate the effects of healthier diets in the U.S. alone, in the rest of the world, and globally, which is especially important given the comparative advantage of U.S. agriculture in global trade.

To model multiple policy assumptions across dimensions of food and land use with full flexibility in parameter assumptions and choice of underlying data sets, we customized a land use accounting model built in Excel, the FABLE Calculator, for the U.S. Below we describe the design of the Calculator; for more details we direct the reader to the complete model documentation.

The FABLE Calculator represents 76 crop and livestock products using data from the FAOSTAT database. The model first specifies demand for these commodities under the selected scenarios; the Calculator then computes agricultural production and related metrics: land use change, food consumption, trade, GHG emissions, water use, and land for biodiversity. The key advantages of the Calculator are its speed, the number and diversity of its scenario design elements, its simplicity, and its transparency. However, unlike economic models based on optimization, the Calculator does not consider commodity prices in generating results, has no spatial representation, and does not distinguish between production practices. The following assumptions can be adjusted in the Calculator to create scenarios: GDP, population, diet composition, population activity level, food waste, imports, exports, livestock productivity, crop productivity, agricultural land expansion or contraction, reforestation, climate impacts on crop production, protected areas, post-harvest losses, and biofuels. Scenario assumptions in the Calculator rely on “shifters”, time-step-specific relative changes applied to an initial historic value using a user-specified implementation rate. The Calculator performs a model run through the following sequence of calculations: calculate human demand for each commodity; calculate livestock production; calculate crop production; calculate pasture and cropland requirements; compare land use requirements with the available land, accounting for imposed restrictions and reforestation targets; calculate the feasible pasture and cropland area; calculate the feasible crop and livestock production; calculate feasible human demand; and calculate indicators. See Figure S1 in the Supplementary Materials for a diagram of these steps, and the sketch below for a simplified illustration. Using U.S. national data sources, we modified or replaced the US FABLE Calculator’s default data inputs and growth assumptions, which are based on Food and Agriculture Organization data.
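The following sketch (Python, greatly simplified and with hypothetical parameter values; the actual Calculator is an Excel workbook covering 76 commodities) illustrates the shifter logic and the demand-to-feasibility accounting chain described above for a single crop commodity.

```python
# Minimal sketch (greatly simplified and hypothetical; the actual FABLE
# Calculator is an Excel accounting model covering 76 commodities): the
# shifter logic and the demand -> production -> land -> feasibility chain
# described above, for a single crop commodity.

def apply_shifter(historic_value, relative_change, implementation_rate):
    """One plausible reading of a time-step 'shifter' applied to a base value."""
    return historic_value * (1.0 + relative_change * implementation_rate)

def run_time_step(per_capita_kcal, population, kcal_per_tonne,
                  net_exports_t, yield_t_per_ha, available_cropland_ha):
    # calculate human demand for the commodity (tonnes/year)
    demand_t = per_capita_kcal * 365.0 * population / kcal_per_tonne
    # calculate required production (demand plus net exports)
    production_t = demand_t + net_exports_t
    # calculate cropland requirement (ha)
    cropland_req_ha = production_t / yield_t_per_ha
    # compare with available land and scale production back if infeasible
    feasible_share = min(1.0, available_cropland_ha / cropland_req_ha)
    return {
        "demand_t": demand_t,
        "cropland_requirement_ha": cropland_req_ha,
        "feasible_production_t": production_t * feasible_share,
    }

# Hypothetical commodity: a diet shifter lowers per-capita demand by 10%,
# applied at a 50% implementation rate in this time step.
kcal = apply_shifter(250.0, -0.10, 0.5)          # kcal/person/day
print(run_time_step(kcal, 330e6, 3.5e6, 1.0e6, 3.0, 40e6))
```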

Specifically, we used crop and livestock productivity assumptions from the U.S. Department of Agriculture, grazing/stocking intensity from the U.S. literature, miscanthus and switchgrass bioenergy feedstock productivity assumptions from the Billion Ton study, updated beef and other commodity exports using USDA data, and created a “Healthy Style Diet for Americans” using the 2015–2020 USDA Dietary Guidelines for Americans. See SM Table S6 for all other US Calculator data and assumptions. We used these U.S.-specific data updates to construct U.S. diet, yield, and livestock scenarios and sensitivities. See the FABLE Calculator documentation for a full description of the other assumptions and data sources used in its default version.

As a complement to the FABLE Calculator’s exogenously determined trade flows, we used GLOBIOM, a widely used and well-documented global spatially explicit partial equilibrium model of the forestry and agricultural sectors (documentation can be found at the GLOBIOM GitHub development site), to capture the dynamics of endogenously determined international trade. Unlike the FABLE Calculator, GLOBIOM is a spatial equilibrium economic optimization model based on calibrated demand and supply curves as typically employed in economic models. GLOBIOM represents 37 economic production regions, with regional consumers optimizing consumption based on relative output prices, income, and preferences. The model maximizes the sum of consumer and producer surplus by solving for market equilibrium, using the spatial equilibrium modeling approach described in McCarl and Spreen and Takayama and Judge. Product-specific demand curves and growth rates over time allow for selective analysis of preference or dietary change by augmenting demand shift parameters over time to reflect differences in relative demand for specific commodities. Production possibilities in GLOBIOM apply spatially explicit information aggregated to Simulation Units, which are aggregates of 5 arcmin pixels of the same altitude, slope, and soil class, within the same 30 arcmin pixel, and within the same country. Land use, production, and prices are calibrated to FAOSTAT data for the 2000 historical base period. Production system parameters and emissions coefficients for specific crop and livestock technologies are based on detailed biophysical process models, including EPIC for crops and RUMINANT for livestock. Livestock and crop productivity changes reflect both endogenous and exogenous components. For crop production, GLOBIOM yields can be shifted exogenously to reflect assumed technological or environmental changes and their associated impact on yields. Exogenous yield changes are accompanied by changes in input use intensity and costs. A similar approach has been applied in other U.S.-centric land sector models, including the intertemporal approach outlined in Wade et al. Furthermore, reflecting potential yield growth with input intensification per unit area is consistent with the observed intensification of some inputs in the U.S. agricultural system, including nitrogen fertilizer intensity, which grew approximately 0.4% per year from 1988 to 2018.
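For readers unfamiliar with this model class, a stylized version of the spatial equilibrium problem that such models solve (following the Takayama and Judge tradition; the notation is illustrative and not GLOBIOM’s actual formulation) is:

$$\max_{q^d,\; q^s,\; x \ge 0} \;\; \sum_{r} \int_{0}^{q_r^d} D_r(q)\, dq \;-\; \sum_{r} \int_{0}^{q_r^s} C_r(q)\, dq \;-\; \sum_{r,s} \tau_{rs}\, x_{rs}$$

subject to $q_r^d \le \sum_{s} x_{sr}$ and $\sum_{s} x_{rs} \le q_r^s$ for every region $r$, where $D_r$ and $C_r$ are the inverse demand and marginal cost curves in region $r$, $x_{rs}$ is the bilateral trade flow from $r$ to $s$, and $\tau_{rs}$ is the unit trade cost. Exogenous yield or demand shifters enter through $C_r$ and $D_r$, while the trade flows $x_{rs}$ adjust endogenously.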

Higher prices can induce production system intensification or shifts in crop mix across regions to exploit regional comparative advantages. GLOBIOM accounts for several different crop management systems, including subsistence, low-input, high-input, and high-input irrigated systems. The model simulates the spatiotemporal allocation of production patterns and bilateral trade flows for key agricultural and forest commodities. Regional trade patterns can shift in response to changes in market or policy factors; Baker et al. and Janssens et al. explore these dynamics in greater detail and provide more comprehensive documentation of the GLOBIOM approach to international trade, including cost structures and the drivers of trade expansion, contraction, and the establishment of new bilateral trade flows. This approach allows flexible trade adjustments at both the intensive and extensive margins given a policy or productivity change in a given region. GLOBIOM has been applied extensively to a wide range of relevant topics, including climate impacts assessment, mitigation policy analysis, diet transitions, and the sustainable development goals. We designed new U.S. and rest-of-the-world (ROW) diet and yield scenarios, and ran all scenarios at medium resolution for the U.S. and coarse resolution for the ROW. We chose Shared Socioeconomic Pathway 2 macroeconomic and population growth assumptions for all parameters across all scenarios when not specified or overridden by scenario assumptions.

We aligned multiple assumptions in the FABLE Calculator with GLOBIOM inputs and/or outputs to isolate the impacts of specific parameter changes in livestock productivity and ruminant density. Specifically, we used the same set of U.S. healthy diet shifters in both models, but aligned the US FABLE Calculator’s crop yields and trade assumptions with GLOBIOM outputs to isolate the effects of increasing the ruminant livestock productivity growth rate and reducing the ruminant grazing density using the Calculator. While we developed high and baseline crop yield inputs for GLOBIOM, realized yields are reported because of the endogenous nature of yields in GLOBIOM. This two-model approach allows us to explore the impact of exogenous changes to the livestock sector that cannot be made fully exogenous in GLOBIOM. Subsequent methods sections describe each of these scenarios and sensitivity inputs in greater detail.

We constructed a “Healthy U.S. diet” using the “Healthy U.S.-style Eating Pattern” from the USDA and the U.S. Department of Health and Human Services’ 2015–2020 Dietary Guidelines for Americans (DGA). We use a 2600 kcal/day average diet, a reduction of about 300 kcal from the current average U.S. diet, which is well above the Minimum Dietary Energy Recommendation of 2075 kcal/day, computed as a weighted average of energy requirements by sex, age, and activity level and of the population projections by sex and age class, following the FAO methodology. The DGA recommends quantities of aggregate and specific food groups in units of ounces and cup-equivalents on a daily or weekly basis. We chose representative foods in each grouping to convert volume or mass recommendations into kcal/day equivalents and assigned groupings and foods to their closest equivalent US Calculator product grouping.
For DGA food groups that consist of more than one US Calculator product group, e.g., “Meats, poultry, eggs”, we used the proportion of each product group in the baseline American diet, expressed in kcal/day, and applied it to the aggregated kcal from the DGA to obtain the recommended DGA kcal for each product group. We made one manual modification to this process, increasing the DGA recommendation for beef from a calculated value of 36 kcal/day to 50 kcal/day, since trends in the last decade have shown per capita beef consumption exceeding that of pork. This process led to a total daily intake of 2576 kcal for the healthy U.S. diet. The baseline (average) U.S. diet is modeled in the US FABLE Calculator using FAO-reported values of livestock and crop production by commodity, in weight, for use as food in the U.S.; applying the share of each commodity that is wasted; allocating the weight of each commodity to specific food product groups; converting weight to kcal; and finally dividing by the total population and the number of days in a year to obtain per capita kcal/day. See the Calculator for more details and commodity-specific assumptions. This healthy U.S. diet expressed in kcal was used directly in the Calculator as the basis for human consumption demand calculations for specific crop and livestock commodities.
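A minimal sketch of this proportional allocation (Python, with hypothetical kcal values; the actual calculation is implemented in the US FABLE Calculator) is:

```python
# Minimal sketch (hypothetical numbers; the actual allocation lives in the US
# FABLE Calculator): splitting an aggregated DGA food-group recommendation
# across Calculator product groups in proportion to the baseline U.S. diet.

def allocate_dga_kcal(dga_group_kcal, baseline_kcal_by_product):
    """Distribute a DGA food-group total (kcal/day) across product groups
    using each group's share of the baseline diet."""
    baseline_total = sum(baseline_kcal_by_product.values())
    return {product: dga_group_kcal * kcal / baseline_total
            for product, kcal in baseline_kcal_by_product.items()}

# Hypothetical baseline kcal/day for a "Meats, poultry, eggs"-style group.
baseline = {"beef": 70.0, "pork": 60.0, "poultry": 120.0, "eggs": 50.0}
allocated = allocate_dga_kcal(260.0, baseline)
# The paper then manually raises beef from its calculated value to 50 kcal/day.
print({product: round(kcal) for product, kcal in allocated.items()})
```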