The levels and numbers of supervisors varied by institution and by clinical division

An unsuccessful account can provide valuable lessons, as can a tale of success. Whether the research yields an understanding of successful implementation or of catastrophic failure, a strong analytical exposition of the details germane to the outcome provides opportunities to learn and to establish best practices. This research seeks to understand what is required for the successful implementation of a program in a public setting undergoing substantial organizational change. Chapter 2 assessed the program implementation in terms of changes made to the program to fit its environment. It focused, in essence, on the technical details of the implementation—the most difficult aspect of implementation, as Pressman and Wildavsky and Hupe suggest. This chapter provides insight into what is suggested here as the other major technical detail of implementation: the managerial capacity to perform. The capacity to change matters to the implementation literature at two primary levels: managerial and organizational. The organizational context represents a macro viewpoint, whereas the managerial level is a more micro, detailed look at an entity's capabilities. These studies review adaptation to internal or external changes with respect to the enterprise's resources or its ability to flex and perform around the changes. Klarner, Probst, and Soparnot examined the World Health Organization and its organizational-level change capacity. Theirs was a unique look at a public-sector example of change capacity because it examined the organizational context, the change process, and how the organization mobilized around lessons learned from change experiences. The authors concluded that analysis of an organization's capacity for change better equips that entity to deal with planned change, thereby increasing its chances for success.

Building this type of capacity generally requires a focus on three organizational processes: a learning-based culture, general support for change activities across the enterprise, and the change effort itself. An organization's capacity for change is a direct function of its available resources and its managerial adaptability. The managerial capacity for change relates to the ability of the administrative layer to perform and produce successful outcomes. Managerial capacity is the focus of this chapter because it helps to explain the overarching question guiding this work: how can public-sector management overcome institutional-level forces and implement a complex program successfully? When implementing major reform efforts around health care delivery in the public sector, a manager's capability to act represents a lever for success. A more detailed treatment of what managerial capacity means in this dissertation's framework is presented later in this chapter. Within the literature, however, a general association is drawn between managerial capacity and administrative flexibility, autonomy, and choice in actions. This association supports the argument that health care reform efforts, such as the one studied in this paper, require an administrative layer that is able to act in a manner not typically associated with a staid, bureaucratic internal environment. Correctional organizations are highly bureaucratic, and the California prison system is no exception. This became a significant roadblock for the managers in the receivership under examination. An interesting organizational feature of the California prison health system is that it employs its clinical staff and owns its primary care facilities. This model of owning resources rather than contracting out has implications for the nature of management behavior within the organization, and for how intervention programs can be planned and executed within this setting. For prisons located in geographical areas where it is difficult to recruit clinical professionals, CDCR contracts for outside specialty services and acute care on a fee-for-service basis.

As both a purchaser and a provider of health care services, the state's prison system has complex organizational processes that require the coordination of activities and of multiple types of personnel. Prior to the receivership, the breakdown of, and lack of attention to, the coordination of health care activities in the correctional setting led to a degradation of services and negative health outcomes for prisoners. Managers lacked administrative flexibility in their actions and also lacked the ability to staff positions over the long term in geographically undesirable areas. The managers in this setting were assigned the task of implementing a series of projects that were distinct parts of a central program of health care delivery reform. Implementation programs themselves serve as the change vehicles for organizations in that they adapt to situations or environmental challenges. The catalysts for change were discussed in the previous chapter, and these catalysts are the starting points for reform. The vision for change is then memorialized as tasks within a project plan, and typically it is the aggregation of related projects that constitutes a program. Put another way, a program to be implemented may be dissected into its distinct parts, which are called projects. This chapter seeks to provide an understanding of how managerial capacity is controlled by organizational structure, which in turn is guided by project-level structure. The previous chapter used program-level analysis to focus on implementation theory. It provided a methodology that relied on developing program elements in a way that integrates with prevailing institutionalized processes. The underlying theory was that this type of approach would lead to successful program implementation. It relied on program-level variables rather than the organizational level of analysis. This chapter continues the theme of focusing on program-level variables, this time looking at managers and their behavior. It provides an understanding of what managers can be taught to focus on during change-inducing processes.

Adapting models that originate in the not-for-profit sector is not typically done in the public correctional setting. The application of private-sector tools is a more familiar strategy, and even these require significant adaptation to maximize their usefulness for understanding a given situation. Diffusion of innovation tends to be a more successful method for applying private-sector operational strategies within the public agency setting. Similar large-scale attempts at adopting non-public-sector program models for deployment within the vastly different structure of public works have resulted in failure. The public sector is characterized as having a highly bureaucratic organizational structure, being inflexible to change, and behaving in an extremely routinized manner. Within the private sector there exist a different set of rules, structure of accountability, and goals to achieve, as compared with the traditional public sector. As such, difficulty in adapting innovations established in one sector to another is to be expected. Both the internal and external environments were diametrically opposed to the health care reform program under study, and only an external regulator insisted on its use and success. Adding to the complexity of program and environment were the challenges related to the strategy and technical details of the implementation. Previously, those details had been addressed only in private-sector settings, and therefore the nuances related to the public sector were not known. Outside the private sector, government-level political support is an important factor for public managers, especially under reform programs. The extent to which administrators perceive support has a significant influence on managerial and employee behavior. These external concerns differ significantly from those typically faced in nonpublic sectors. Because political administrations change, many decisions faced by public-sector managers related to organizational structure are questioned in order to maintain or bolster performance. The literature does not describe well how often, or to what degree, these environmental challenges make administration more difficult in one sector than in another. What is clear, however, is that differences in routine administrative life exist between sectors, just as the types of obstacles faced tend to differ. Program implementations that require the establishment of collaborative, cross-functional work groups develop their own policies and rules to guide individual and administrative behavior. These rules are defined within project-group-level cultures that form to define the norms of behavior, enabling the groups to work efficiently. According to Schein, this is expressed through the development of proprietary languages and parameters of acceptable group behavior. The internal environment of the projects established by the receivership was not exempt from the development of new cultures within the various project groups. The agency under receivership, CDCR, had its own highly institutionalized processes and a well-defined set of cultures that had been established at the agency's inception and had evolved over decades.
Its structure and operational framework defined both the ends and the means by which administrative actions were determined and undertaken. The receivership organization was a much younger entity, with staff at both the management and worker levels less cohesive and less culturally structured than CDCR's. As a whole, the staff from CDCR had longer tenure within that agency and therefore had well-defined social network channels and routinized behavior—in sharp contrast to the newly established receivership organization.

The programs the receivership implemented, and specifically the CCM program, involved both CDCR and the receivership entities in terms of personnel, resources, time, and communications. This integration involved establishing cross-functional teams from both organizations to carry out the work. Studying the administrative behavior of both entities at the organizational level, as the institutional school suggests, may therefore be overly complex and likely inaccurate. Workers had their home organizations in either CDCR or the receivership. Managers were connected across the receivership enterprise through program-level work that integrated departments. These managers had their performance evaluated at the program level, not at the organizational level. This meant that accolades or retribution followed from the performance of the manager's unit on each project in which it was involved. Their performance was tightly integrated with the output and deliverables produced by the sister departments to which they were tied on a particular project. The headquarters structure and its form of accountability differed from management at the prison-facility level. Within the prisons, managers were evaluated based on their areas' performance, not on overall statewide performance. Whereas successful delivery of project tasks was the headquarters' focus, inmate-patient health care outcomes and the passing of regulatory audits were the prison manager's focus. The receiver-level projects were designed ultimately to lead to improved inmate-patient health and better regulatory audit results. Indirectly, the managers at both levels had their missions tied together, but they were separated by a temporal gap. This difference in focus changes the method by which we can understand administrative behavior in both organizations and how work was viewed and approached by these managers. The managers participating in this study primarily held clinical professional managerial posts, such as chief of pharmacy, because they were clinical specialists. Within the receivership, the administrators were bureaucrats with statewide responsibility, holding the highest-level positions in the department. Job titles for the highest-level administrators within CPHCS often mirrored the titles of their direct reports at the prison level. For example, the highest administrator of the nursing division within CPHCS was titled the statewide chief nurse executive. Each prison also had a classification for its head of nursing, titled chief nurse executive. The use of the designation "statewide" showed the difference in authority level and represented the matrixed, or indirect, relationship of the CPHCS administrator to the highest-level CDCR manager. The prison-level clinicians at the non-management level in the departments of nursing, mental health, dental, pharmacy, medical, and ancillary services all reported to a chief- or director-level individual within the prison. For example, a staff psychiatrist reported to a chief psychiatrist. Below the chief/director level was an intermediate layer of supervisory staff. Nursing and mental health, for example, required far more labor resources than did dental, and therefore those two divisions had more levels of supervisors. For example, an institution may have had 100 nurses on staff and therefore required three levels of supervisory staff.
Each clinical area, with the exception of pharmacy, required significant nursing staff, and therefore the nursing division ultimately had the greatest number of staff throughout the prisons. The division of labor in this group was extensive and, administratively speaking, the layers of supervisory staff that developed over time within CDCR were commensurate with the highly specialized and large workload carried by the division. Staff-level workers were licensed vocational nurses or registered nurses who were managed by supervisor registered nurses (SRNs). The SRNs had three ascending levels: I, II, and III. Each step up in supervisory level within nursing represented a significant advance within the administrative hierarchy, with both salary grade and workload accountability increasing accordingly.

Is there epidemiological evidence that BCG vaccination could be neuroprotective?

Usually, MPTP treatment causes an increase in the number of nigral microglia, which may be due to replication of resident microglia or to the influx of bone-marrow-derived cells from the periphery. The increased number of activated microglia in the SNc is thought to contribute to MPTP-induced nigrostriatal system damage. Consistent with those reports, we observed that MPTP-treated mice displayed a greater than 2-fold increase in the number of Iba1+ cells in their SNc. In contrast, mice that were treated with BCG prior to MPTP treatment had a number of SNc Iba1+ cells similar to that in saline-treated control mice. In addition, in BCG-treated mice, the nigral microglia had small cell bodies and long ramified processes, indicating a resting state. Such microglia are thought to exert neurosupportive functions through their abilities to produce neurotrophins and eliminate excitotoxins. These observations parallel previous assessments of microglia in MPTP-treated mice that received an adoptive transfer of spleen cells from Copaxone/CFA-vaccinated mice. However, our results show that peripheral BCG-induced immune responses are sufficient to almost completely inhibit the MPTP-induced increase in the number of activated microglia in the SNc. Conceivably, by circumventing the MPTP-induced increase in activated microglia and the accompanying proinflammatory milieu, the surviving dopaminergic neurons were better able to recover function in BCG-treated mice. Further studies will be necessary to establish how the marked alterations in microglia morphology and activation affect long-term nigrostriatal dopamine system integrity. Proposed mechanisms for neuroprotective vaccines have been contradictory with regard to whether Th1, Th2, Th3, and/or Treg cells play beneficial or pathogenic roles. Some of these differences may be due to the different disease models studied.

Focusing on immune-mediated protection in the MPTP mouse PD model, recent studies have pointed to CD4+ T cells as playing a key role in neurodegeneration. Th17 cells recognizing nitrated α-synuclein can exacerbate MPTP-induced neuronal cell loss, but they can be held in check by Tregs. The immune responses elicited by CFA and BCG have been extensively studied, and both are potent inducers of IFNγ-secreting Th1-type CD4+ T cells and activators of antigen-presenting cells. IFNγ is known to antagonize the development of Th17 cells and can induce apoptosis of self-reactive T cells. Additionally, BCG or Mycobacterium tuberculosis infection induces Tregs that proliferate and accumulate at sites of infection, which contributes to limiting inflammatory responses and tissue damage during infection. Accordingly, the Th1 and Treg responses may have suppressed the priming and expansion of effector T cells following MPTP treatment. It is possible that the robust T cell responses to BCG also created greater T cell competition for antigen-presenting cells, which reduced the priming of effector T cells in the periphery. Another possible protective mechanism is that the active BCG infection in the periphery diverted effector T cells, macrophages, and BMDC microglia precursors from entering the CNS after MPTP treatment. Previous studies in the experimental autoimmune encephalomyelitis (EAE) model have shown that infection with BCG 6 weeks before the induction of EAE diverts activated myelin-reactive CD4+ T cells from the CNS to granulomas in the spleen and liver. This diversion was not due to cross-reactivity between BCG antigens and encephalitogenic proteins. Evidently, the peripheral inflammatory lesions non-specifically attracted effector T cells, which blunted the development of EAE. Interestingly, in clinical trials, MS patients immunized with BCG had a 57% reduction in lesions as measured by MRI. Thus, there is some clinical evidence that BCG treatment can suppress a neurodegenerative autoimmune response.

Based on our observations that MPTP did not increase microglia number and that microglia were in a resting state in the nigra of BCG-vaccinated mice, it is possible that the BCG treatment circumvented the activation and replication of resident microglia, diverted macrophage or BMDC microglia precursors from entering the CNS, and/or induced some efflux of macrophage-type cells to the periphery. Another possible protective mechanism is that the long period during which the attenuated BCG slowly replicates in the host causes a long-term increase in the levels of circulating immune factors, many of which can enter the CNS. These immune factors may have limited microglia activation and proliferation, limited the influx of peripheral macrophages or microglia precursor cells, or had a supportive effect on neurons in the area of injury. Further testing is required to distinguish among these possibilities. There are additional lines of evidence that peripheral immune responses can modulate the CNS milieu. Many studies have shown that treatment of pregnant rodents with immunostimulants such as lipopolysaccharide, polyinosinic:polycytidylic acid, turpentine, or viral infection causes the offspring to have behavioral abnormalities. It is thought that the maternal immune responses to these treatments can alter neurodevelopment in the fetus. These studies provide further evidence that peripheral immune responses can modulate the CNS milieu independently of CNS-reactive T cells. While the exact mechanisms of BCG neuroprotection in the MPTP mouse model remain to be elucidated, our results suggest that peripheral BCG-induced immune responses can exert neuroprotective effects independent of CNS antigen specificity. This represents a paradigm shift from the current notion that neuroprotective vaccines work by inducing protective T cell autoimmunity that acts locally in damaged areas of the CNS. It will be of interest to transfer GFP-marked BMDCs and T cells into mice prior to BCG and MPTP treatments in order to further study BCG's protective mechanisms.

BCG vaccinations were discontinued in the USA in the 1950s, largely because of the low incidence of TB and the vaccine's incomplete protection. However, BCG vaccination is still given to infants and children in many countries. Adults who were BCG vaccinated as children have little or no protection from TB. Because BCG vaccine effects have greatly diminished by middle age, we would not expect to find a relationship between childhood BCG vaccination and PD incidence. Moreover, the BCG vaccine-mediated protection from TB relies on a small population of memory T cells that is quiescent and that only expands after re-exposure to TB. Since PD patients are not normally exposed to active TB, their few BCG-reactive memory T cells should be quiescent and would not be a source of neuroprotective factors. While neuroprotective vaccines cannot correct basic intrinsic neuronal deficits, they may alter the CNS environment to be more neurosupportive so that neurodegeneration and secondary damage to neurons progress at a slower rate. Conceivably, BCG-induced neuroprotective immune responses will be more beneficial in a slowly progressing disease, as in human PD, than in the acutely neurotoxic MPTP model we have studied. In summary, our data show that BCG vaccination, which is safe for human use, can preserve striatal dopaminergic markers. This strongly supports the notion that peripheral immune responses can be beneficial in neuropathological conditions. Second-generation recombinant BCG vaccines, which have greater immunogenicity and are expected to elicit enhanced immunity against TB, are now being tested in clinical trials. Some new recombinant BCG strains express a human cytokine to boost desired immune responses. It will be of interest to test whether different recombinant BCG strains can enhance the vaccination's neuroprotective effects. Further studies of how peripheral immune system responses can modulate neurons and glia in the CNS may provide new therapeutic strategies to safely slow neurodegenerative disease processes.

This study focuses on the California Department of Corrections and Rehabilitation (CDCR), which is responsible for the care and custody of incarcerated individuals in the state of California who have been sentenced to terms greater than one year. Individuals sentenced to terms of less than a year, or those awaiting sentencing, are under the care of different entities: the county or regional jails. In contrast to the state's other custodial systems, CDCR is distinguished by its long-term focus on the care and control of individuals. This shapes the development of policy that promotes structural permanence. A policy focus of this type presents a significant obstacle to change management and process management, two key elements of the program-implementation success that was required within CDCR. The agency was brought under federal receivership to improve health care outcomes for prisoners, a change that enabled the federal courts to demand the implementation of health programs aimed at improving those outcomes.

The prison system in California consists of 33 separate facilities serving over 175,000 inmates in a system designed for no more than 100,000. Due to the "three-strikes rule," a law that requires that third-time felons be sentenced to prison terms, many of the state's 33 correctional facilities were operating at over 200% of designed capacity. Additionally, by the end of calendar year 2008, the average age of prisoners was 37 years. This represents an increase of 37% in the average age over a 28-year span: in 1980 the average age of the incarcerated was 27. Overcrowding in the system, combined with upwardly spiraling costs, led to organizational failure. Inmates typically have more health issues than the non-incarcerated population. An examination of de-identified CDCR data reveals that approximately 70% of the inmate population was taking at least one medication in 2009; the average for the U.S. population is closer to 47%. Aging inmates cost two to three times as much to incarcerate as younger prisoners—on average $98,000 to $138,000 a year. When inmates are paroled, they do not receive the same access to health care as they do while imprisoned. In the state of California, inmates had a 63.7 percent three-year recidivism rate as measured in fiscal year 2012. While inmates are under custodial care, their health care is free. Individuals who reenter the prison system with medical conditions that went untreated while they were on parole may present with those conditions exacerbated. The costs of treating individuals with more severe conditions are higher than they would have been had the individuals received continuous care. In the absence of proactive treatment, and with an aging population in an overcrowded and unsafe environment, the costs associated with health care are likely to continue to rise among these wards of the state. The public health concerns go beyond the cost of care and relate to high recidivism rates and community health issues, including the spread of communicable diseases such as AIDS. Some of the most prominent failures within the CDCR system were avoidable inmate-patient deaths, believed to have resulted from poor systems and controls related to the delivery of health care. A receivership was established as the result of a federal class-action suit, Plata v. Schwarzenegger, under which it was found that CDCR was deficient in providing constitutionally acceptable levels of medical care to prison inmates. Several federal court cases concerning unconstitutional conditions within the system preceded the institution of this receivership. Under Plata v. Schwarzenegger, it was found that, on average, one inmate-patient died every six to seven days as the result of deficiencies in the state prison's health care system. The receiver was given all powers vested by law in the Secretary of the California Department of Corrections and Rehabilitation, including the administration, control, management, operation, and financing of the California prisons' medical health care system. The court thus placed full accountability for inmate health care in the hands of the receiver, giving that office both the ability and the responsibility to change the system according to court requirements.
The receiver recruited a diverse team of industry experts consisting of medical, nursing, clinical quality, information technology, and facility construction professionals to assist with the prison health care reform efforts. CDCR is presently the second-largest law enforcement department in the nation and is the single largest state-run prison system in the United States. Over the past decade, this corrections agency has grown from the state of California's third-largest employer to its second, behind only the state's University of California system. For fiscal year 2011, CDCR budgeted $9.5 billion to supervise and oversee more than 300,000 of the state's convicted offenders. This size and structure relate to the common perception of big-government bureaucracy. Large, bureaucratic organizations are unwieldy and difficult to change. Max Weber pointed out that "once it is fully established, bureaucracy is among those social structures which are the hardest to destroy." This is so because of bureaucracy's cohesiveness and discipline, its control of the facts, and its single-minded concentration on the maintenance of power.

Utilizing TLCs may result in greater clinical flexibility and effectiveness and less role strain

The kinase activity of both CK1δ and CK1ε is inhibited by autophosphorylation of an intrinsically disordered inhibitory tail that follows the kinase domain, a feature that sets these isoforms apart from other members of the CK1 family. Because the full-length kinase autophosphorylates and slowly inactivates itself in vitro, most biochemical studies exploring the activity of CK1δ/ε on clock proteins have used the truncated, constitutively active protein, although new studies are finally beginning to explore the consequences of autophosphorylation in more detail. However, not much is yet known about how the phosphorylated tail interacts with the kinase domain to inhibit its activity; several autophosphorylation sites were previously identified on CK1ε at S323, T325, T334, T337, S368, S405, S407, and S408 using limited proteolysis and phosphatase treatment or through Ser/Thr-to-Ala substitutions in vitro, although it is currently not known which of these sites are important for kinase regulation of the clock. One potential interface between the kinase domain and the autoinhibitory tail has been mapped through cross-linking and mass spectrometry, suggesting that the tail might dock some phosphorylated Ser/Thr residues close to the anion-binding sites near the active site. This study also provided evidence that the tail may be able to regulate substrate binding, and therefore control the specificity of the kinase, by comparing the activity of CK1α, a tailless kinase, with that of CK1ε on two substrates, PER2 and Dishevelled. Understanding the role of tail autophosphorylation and its regulation of kinase activity is sure to shed light on the control of circadian rhythms by CK1δ/ε. Some sites within the C-terminal tail of CK1δ and/or CK1ε are known to be phosphorylated by other kinases, such as AMPK, PKA, Chk1, PKCα, and cyclin-dependent kinases.

PKA phosphorylates S370 in CK1δ to reduce its kinase activity; consistent with this, mutation of S370 to alanine increases CK1-dependent ectopic dorsal axis formation in Xenopus laevis. Chk1 and PKCα also reduce CK1δ kinase activity through phosphorylation of overlapping sites at S328, T329, S331, S370, and T397 in the tail of rat CK1δ. Phosphorylation of CK1δ T347 influences its activity on PER2 in cells, and this site was found to be phosphorylated by proline-directed cyclin-dependent kinases rather than by autophosphorylation. CDK2 was also found to reduce the activity of rat CK1δ in vitro through phosphorylation of additional sites at T329, S331, T344, S356, S361, and T397. Unlike the other kinases listed here, phosphorylation of S389 on CK1ε by AMPK increases the apparent kinase activity on the PER2 phosphodegron in cells; consequently, activation of AMPK with metformin increased the degradation of PER2. The phosphorylation of the CK1δ and/or CK1ε tails by these other kinases therefore has the potential to link their regulation of PER2 and the circadian clock to metabolism, the DNA damage response, and the cell cycle. There is now strong evidence that the C-terminus of CK1δ plays a direct role in the regulation of circadian period. Recently, tissue-specific methylation of CK1δ was shown to regulate alternative splicing of the kinase into two unique isoforms, δ1 and δ2, that differ only in the extreme C-terminal 15 residues. Remarkably, expression of the canonical δ1 isoform decreases PER2 half-life and circadian period, while the slightly shorter δ2 isoform increases PER2 half-life and circadian period. Further biochemical studies revealed that these two variants exhibit differential activity on the stabilizing priming site of the PER2 FASP region: the δ1 isoform has lower activity than δ2, whose C-terminus also closely resembles that of the ε isoform.

These data suggest that a very short region at the C-terminal end of the tail could play a major role in regulating CK1δ and the PER2 phosphoswitch to control circadian period. This is bolstered by the discovery of a missense mutation in the same region of the CK1ε tail, S408N, that has been associated in humans with protection from Delayed Sleep Phase Syndrome and Non-24-hr Sleep-Wake Syndrome. Further studies will help to reveal the biochemical mechanisms behind the regulation of kinase activity and substrate selectivity by the C-terminal tails of CK1δ and CK1ε and to determine how they play into the regulation of circadian rhythms.

The central thesis of this article is very simple: health professionals have significantly underestimated the importance of lifestyle for mental health. More specifically, mental health professionals have underestimated the importance of unhealthy lifestyle factors in contributing to multiple psychopathologies, as well as the importance of healthy lifestyles for treating multiple psychopathologies, for fostering psychological and social well-being, and for preserving and optimizing cognitive capacities and neural functions. Greater awareness of lifestyle factors offers major advantages, yet few health professionals are likely to master the multiple burgeoning literatures. This article therefore reviews research on the effects and effectiveness of eight major therapeutic lifestyle changes (TLCs); the principles, advantages, and challenges involved in implementing them; the factors hindering their use; and the many implications of contemporary lifestyles for both individuals and society. Lifestyle factors can be potent in determining both physical and mental health. In modern affluent societies, the diseases exacting the greatest mortality and morbidity—such as cardiovascular disorders, obesity, diabetes, and cancer—are now strongly determined by lifestyle. Differences in just four lifestyle factors—smoking, physical activity, alcohol intake, and diet—exert a major impact on mortality, and "even small differences in lifestyle can make a major difference in health status."

TLCs can be potent. They can ameliorate prostate cancer, reverse coronary arteriosclerosis, and be as effective as psychotherapy or medication for treating some depressive disorders. Consequently, there is growing awareness that contemporary medicine needs to focus on lifestyle changes for primary prevention, for secondary intervention, and to empower patients' self-management of their own health. Mental health professionals and their patients have much to gain from similar shifts. Yet TLCs are insufficiently appreciated, taught, or utilized. In fact, in some ways, mental health professionals have moved away from effective lifestyle interventions. Economic and institutional pressures are pushing therapists of all persuasions toward briefer, more stylized interventions. Psychiatrists in particular are being pressured to offer less psychotherapy, prescribe more drugs, and focus on 15-minute "med checks," a pressure that psychologists who obtain prescription privileges will doubtless also face. As a result, patients suffer from inattention to complex psychodynamic and social factors, and therapists can suffer painful cognitive dissonance and role strain when they shortchange patients who need more than mandated brief treatments allow. A further cost of current therapeutic trends is the underestimation and underutilization of lifestyle treatments despite considerable evidence of their effectiveness. In fact, the need for lifestyle treatments is growing, because unhealthy behaviors such as overeating and lack of exercise are increasing to such an extent that the World Health Organization warned that "an escalating global epidemic of overweight and obesity—'globesity'—is taking over many parts of the world" and exacting enormous medical, psychological, social, and economic costs.

Lifestyle changes can offer significant therapeutic advantages for patients, therapists, and societies. First, TLCs can be both effective and cost-effective, and some—such as exercise for depression and the use of fish oils to prevent psychosis in high-risk youth—may be as effective as pharmacotherapy or psychotherapy. TLCs can be used alone or adjunctively and are often accessible and affordable; many can be introduced quickly, sometimes even in the first session. TLCs have few negatives. Unlike both psychotherapy and pharmacotherapy, they are free of stigma and can even confer social benefits and social esteem. In addition, they have fewer side effects and complications than medications.

TLCs offer significant secondary benefits to patients, such as improvements in physical health, self-esteem, and quality of life. Furthermore, some TLCs—for example, exercise, diet, and meditation—may also be neuroprotective and reduce the risk of subsequent age-related cognitive losses and corresponding neural shrinkage. Many TLCs—such as meditation, relaxation, recreation, and time in nature—are enjoyable and may therefore become healthy, self-sustaining habits. Many TLCs not only reduce psychopathology but can also enhance health and well-being. For example, meditation can be therapeutic for multiple psychological and psychosomatic disorders. Yet it can also enhance psychological well-being and maturity in normal populations and can be used to cultivate qualities that are of particular value to clinicians, such as calmness, empathy, and self-actualization. Knowledge of TLCs can benefit clinicians in several ways. It will be particularly interesting to see the extent to which clinicians exposed to information about TLCs adopt healthier lifestyles themselves and, if so, how adopting them affects their professional practice, because there is already evidence that therapists with healthy lifestyles are more likely to suggest lifestyle changes to their patients. There are also entrepreneurial opportunities. Clinics are needed that offer systematic lifestyle programs for mental health, similar to current programs for reversing coronary artery disease. For societies, TLCs may offer significant community and economic advantages. Economic benefits can accrue from reducing the costs of lifestyle-related disorders such as obesity, which alone accounts for over $100 billion in costs in the United States each year. Community benefits can occur both directly, through enhanced personal relationships and service, and indirectly, through social networks. Recent research demonstrates that healthy behaviors and happiness can spread extensively through social networks, even through three degrees of separation (to, for example, the friends of one's friends' friends). Encouraging TLCs in patients may therefore inspire similar healthy behaviors and greater well-being in their families, friends, and co-workers and thereby have far-reaching multiplier effects. These effects offer novel evidence for the public health benefits of mental health interventions in general and of TLCs in particular. So what lifestyle changes warrant consideration? Considerable research and clinical evidence support the following eight TLCs: exercise, nutrition and diet, time in nature, relationships, recreation, relaxation and stress management, religious and spiritual involvement, and contribution and service to others.

Exercise offers physical benefits that extend over multiple body systems. It reduces the risk of multiple disorders, including cancer, and is therapeutic for physical disorders ranging from cardiovascular diseases to diabetes to prostate cancer. Exercise is also, as the Harvard Mental Health Letter concluded, "a healthful, inexpensive, and insufficiently used treatment for a variety of psychiatric disorders." As with physical effects, exercise offers both preventive and therapeutic psychological benefits. In terms of prevention, both cross-sectional and prospective studies show that exercise can reduce the risk of depression as well as neurodegenerative disorders such as age-related cognitive decline, Alzheimer's disease, and Parkinson's disease.
In terms of therapeutic benefits, responsive disorders include depressive, anxiety, eating, addictive, and body dysmorphic disorders. Exercise also reduces chronic pain, age-related cognitive decline, the severity of Alzheimer's disease, and some symptoms of schizophrenia. The disorder most studied in relation to exercise to date is mild to moderate depression. Cross-sectional, prospective, and meta-analytic studies suggest that exercise is both preventive and therapeutic, and in terms of therapeutic benefits it compares favorably with pharmacotherapy and psychotherapy. Both aerobic exercise and nonaerobic weight training are effective for both short-term intervention and long-term maintenance, and there appears to be a dose-response relationship, with higher-intensity workouts being more effective. Exercise is a valuable adjunct to pharmacotherapy, and special populations such as postpartum mothers, the elderly, and perhaps children appear to benefit. Possible mediating factors that contribute to these antidepressant effects span physiological, psychological, and neural domains. Proposed physiological mediators include changes in serotonin metabolism, improved sleep, and endorphin release with its consequent "runner's high." Psychological factors include enhanced self-efficacy and self-esteem, interruption of negative thoughts and rumination, and perhaps the breakdown of muscular armor, the chronic psychosomatic muscle tension patterns that express emotional conflicts and are a focus of somatic therapies. Neural factors are especially intriguing. Exercise increases brain volume, vascularization, blood flow, and functional measures. Animal studies suggest that exercise-induced changes in the hippocampus include increased neurogenesis, synaptogenesis, neuronal preservation, interneuronal connections, and BDNF levels. Given these neural effects, it is not surprising that exercise can also confer significant cognitive benefits. These range from enhancing academic performance in youth, to aiding stroke recovery, to reducing age-related memory loss and the risk of both Alzheimer's and non-Alzheimer's dementia in the elderly. Multiple studies show that exercise is a valuable therapy for Alzheimer's patients that can improve intellectual capacities, social functioning, and emotional states, and reduce caregiver distress.

The digest was considered semi-specific and up to 3 missed cleavages were allowed

Similar results were observed for EGFR degradation, with no major proteome-wide changes occurring and EGFR being virtually the only protein significantly downregulated by CXCL12-Ctx treatment compared with control in both the surface-enriched and whole-cell proteomics. Interestingly, a previously published proteomics dataset of LYTAC-mediated EGFR degradation identified additional proteins significantly up- or down-regulated following LYTAC treatment. Comparison with our experiment in the same cell line suggests that KineTACs are more selective in degrading EGFR. As there is large overlap in the peptide IDs observed between the two datasets, the greater selectivity observed is not due to a lack of sensitivity in the KineTAC proteomics experiment. CXCR4 and CXCR7 peptide IDs were not altered in the surface-enriched sample, and CXCR4 IDs were also unchanged in the whole-cell sample, indicating that treatment with KineTAC does not significantly impact CXCR4 or CXCR7 levels. Furthermore, protein levels of GRB2 and SHC1, which are known interacting partners of EGFR, were also not significantly changed. Together, these data demonstrate the exquisite selectivity of KineTACs for degrading only the target protein without inducing unwanted, off-target proteome-wide changes. To elucidate whether KineTAC-mediated degradation could impart functional cellular consequences, the viability of HER2-expressing cells was measured following treatment with CXCL12-Tras. MDA-MB-175VII breast cancer cells are reported to be sensitive to trastuzumab treatment, and as such serve as an ideal model to test the functional consequence of degrading HER2 compared with inhibiting it with trastuzumab IgG. To this end, cells were treated with either CXCL12-Tras or trastuzumab IgG for 5 days, after which cell viability was determined using a modified MTT assay. A reduction in cell viability was observed at higher concentrations of CXCL12-Tras and was significantly greater than with trastuzumab IgG alone.
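To make the shape of this viability comparison concrete, the sketch below fits a four-parameter logistic (4PL) curve to a dose-response series, the same class of sigmoidal model the Methods later attribute to GraphPad Prism. The concentration and viability values are invented placeholders rather than the measured MDA-MB-175VII data, and SciPy stands in for Prism's fitting routine.

```python
# Hedged sketch: fitting a four-parameter logistic (4PL) curve to viability data,
# analogous to a "sigmoidal, 4PL" fit in GraphPad Prism. All data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic: viability as a function of antibody concentration."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])       # nM, hypothetical dilution series
viability = np.array([1.00, 0.98, 0.90, 0.65, 0.40, 0.35])   # fraction of untreated control

params, _ = curve_fit(four_pl, conc, viability, p0=[0.3, 1.0, 10.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.1f} nM, Hill slope ~ {hill:.2f}")
```

In practice, comparing the fitted IC50 and lower plateau between CXCL12-Tras and trastuzumab IgG would quantify the difference in potency and maximal effect described above; the discussion of these results continues below.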

These data demonstrate that KineTAC-mediated degradation has functional consequences in reducing cancer cell viability in vitro and highlight that KineTACs could provide advantages over traditional antibody therapeutics, which bind but do not degrade. Finally, we asked whether KineTACs would show antibody clearance in vivo similar to that of IgGs. To this end, male nude mice were injected intravenously with 5, 10, or 15 mg/kg CXCL12-Tras, a typical dose range for antibody xenograft studies. Western blotting analysis of plasma antibody levels revealed that the KineTAC remained in plasma up to 10 days post-injection with a half-life of 8.7 days, which is comparable to the reported half-life of IgGs in mice. Given the high homology between human and mouse CXCL12, we tested whether the human CXCL12 isotype could be cross-reactive. Binding of the human CXCL12 isotype to the mouse cell lines MC38 and CT26, which endogenously express mouse CXCR7, was confirmed. Together, these results demonstrate that KineTACs have favorable stability and are not rapidly cleared despite cross-reactivity with mouse CXCR7 receptors. Since atezolizumab is also known to be cross-reactive, the ability of CXCL12-Atz to degrade mouse PD-L1 was tested in both MC38 and CT26. Indeed, CXCL12-Atz mediated near-complete degradation of mouse PD-L1 in both cell lines. Thus, PD-L1 degradation may serve as an ideal mouse model to assay the efficacy of KineTACs in vivo. Having demonstrated the ability of KineTACs to mediate cell surface protein degradation, we next asked whether KineTACs could also be applied to the degradation of soluble extracellular proteins. Soluble ligands, such as inflammatory cytokines and growth factors, have been recognized as an increasingly important therapeutic class.
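The 8.7-day half-life quoted above would typically be estimated by fitting an exponential decay to the quantified plasma band intensities across the sampling days listed in the Methods. Below is a minimal sketch of such a fit; the intensity values are placeholders chosen only to illustrate the calculation, and a single-exponential model is an assumption rather than the authors' stated analysis.

```python
# Hedged sketch: estimating an antibody plasma half-life by fitting a single
# exponential to Western blot band intensities. Intensity values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(t, c0, k):
    """One-compartment elimination model: C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

days = np.array([0.0, 3.0, 5.0, 7.0, 10.0])            # sampling days (see Methods)
intensity = np.array([1.00, 0.79, 0.67, 0.57, 0.45])   # normalized band intensity, hypothetical

(c0, k), _ = curve_fit(mono_exponential, days, intensity, p0=[1.0, 0.1])
half_life = np.log(2) / k
print(f"Estimated plasma half-life ~ {half_life:.1f} days")
```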

Of these, vascular endothelial growth factor (VEGF) and tumor necrosis factor alpha (TNFα) are the soluble ligands most targeted by antibody and small-molecule drug candidates, highlighting their importance in disease. Thus, we chose VEGF and TNFα as ideal proof-of-concept targets to determine whether KineTACs could be expanded to degrading extracellular soluble ligands. First, we targeted VEGF by incorporating bevacizumab, an FDA-approved VEGF inhibitor, into the KineTAC scaffold. Next, HeLa cells were incubated with VEGF-647 alone or with VEGF-647 and CXCL12-Beva for 24 hr. Following treatment, flow cytometry analysis showed a robust increase in cellular fluorescence when VEGF-647 was co-incubated with CXCL12-Beva, but not with the bevacizumab isotype, which lacks the CXCL12 arm. To ensure that the increased cellular fluorescence was due to intracellular uptake of VEGF-647 and not to surface binding, we determined the effect of an acid wash, which removes any cell-surface-bound material, after the 24 hr incubation. We found no significant difference in cellular fluorescence between acid-washed and normally washed cells. These data suggest that KineTACs successfully mediate the intracellular uptake of extracellular VEGF. As with membrane protein degradation, KineTAC-mediated uptake of VEGF occurs in a time-dependent manner, with robust internalization occurring before 6 hrs and reaching steady state by 24 hrs. Furthermore, the levels of VEGF uptake depend on the KineTAC:ligand ratio and saturate at ratios greater than 1:1. We next tested the ability of CXCL12-Beva to promote uptake in other cell lines and found that these cells also take up significant amounts of VEGF. Moreover, the extent of uptake correlates with the transcript levels of CXCR7 in these cells. These data suggest that KineTACs directed against soluble ligands can promote broad tissue clearance of these targets compared with glycan- or Fc-mediated clearance mechanisms. To demonstrate the generalizable nature of the KineTAC platform for targeting soluble ligands, we next targeted TNFα by incorporating adalimumab, an FDA-approved TNFα inhibitor, into the KineTAC scaffold. Following 24 hr treatment of HeLa cells, a significant increase in cellular fluorescence was observed when TNFα-647 was co-incubated with CXCL12-Ada compared with the adalimumab isotype.

Consistent with the VEGF uptake experiments, acid washing did not alter the increase in cellular fluorescence observed, and uptake was dependent on the KineTAC:ligand ratio. Thus, KineTACs are generalizable in mediating the intracellular uptake of soluble ligands, significantly expanding the target scope of KineTAC-mediated targeted degradation.

In summary, our data suggest that KineTACs are a versatile and modular targeted degradation platform that enables robust lysosomal degradation of both cell surface and extracellular proteins. We find that KineTAC-mediated degradation is driven by recruitment of both CXCR7 and the target protein, and that factors such as binding affinity, epitope, and construct design can affect efficiency. Other factors, such as signaling competence and pH dependency for the protein of interest, did not impact degradation for CXCL12-bearing KineTACs. These results provide valuable insights into how to engineer effective KineTACs going forward. Furthermore, we show that KineTACs operate in a time-, lysosome-, and CXCR7-dependent manner and are exquisitely selective in degrading target proteins with minimal off-target effects. Initial experiments with an alternative cytokine, CXCL11, highlight the versatility of the KineTAC platform and the exciting possibility of using various cytokines and cytokine receptors for targeted lysosomal degradation. KineTACs are built from simple genetically encoded parts that are readily accessible from the genome and from published human antibody sequences. Given the differences in selectivity and target scope that we and others have observed between degradation pathways, there is an ongoing need to co-opt novel receptors for lysosomal degradation, such as CXCR7, that may offer advantages in terms of tissue selectivity or degradation efficiency. Thus, we anticipate that ongoing work on the KineTAC platform will offer new insights into which receptors can be hijacked and will greatly expand targeted protein degradation to the extracellular proteome for both therapeutic and research applications.

SILAC proteomics data were analyzed using PEAKS Online. For all samples, searches were performed with a precursor mass error tolerance of 20 ppm and a fragment mass error tolerance of 0.03 Da. For whole-cell proteome data, the reviewed SwissProt database for the human proteome was used. For surface-enriched samples, a database composed of SwissProt proteins annotated "membrane" but not "nuclear" or "mitochondrial" was used to ensure accurate unique peptide identification for surface proteins, as previously described. Carbamidomethylation of cysteine was set as a fixed modification, whereas the isotopic labels for arginine and lysine, acetylation of the N-terminus, oxidation of methionine, and deamidation of asparagine and glutamine were set as variable modifications. Only PSMs and protein groups with an FDR of less than 1% were considered for downstream analysis. SILAC analysis was performed using the forward and reverse samples, and at least 2 labels were required for IDs and features. Proteins showing a >2-fold change from the PBS control with a significance of P < 0.01 were considered significantly changed.

Cell viability assays were performed using a modified MTT assay. In brief, on day 0, 15,000 MDA-MB-175VII cells were plated in each well of a 96-well plate. On day 1, bispecifics or control antibodies were added in a dilution series. Cells were incubated at 37ºC under 5% CO2 for 5 days.
On day 6, 40 µL of 2.5 mg/mL thiazolyl blue tetrazolium bromide was added to each well, and the plate was incubated at 37ºC under 5% CO2 for 4 hrs. Then 100 µL of 10% SDS in 0.01 M HCl was added to lyse the cells and release the MTT product.
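The SILAC significance cutoff described above (>2-fold change from the PBS control at P < 0.01) reduces to a simple filter once protein-level ratios and p-values are exported. A minimal sketch is below, assuming a pandas table with hypothetical column names; it is not the PEAKS Online workflow itself, and the example values are invented. The viability protocol continues after this aside.

```python
# Hedged sketch of the significance filter described above: proteins with a
# >2-fold change versus the PBS control and P < 0.01 are flagged as changed.
# Column names ("log2_ratio", "p_value", "protein") are assumptions about an
# exported results table, not the actual PEAKS Online output format.
import numpy as np
import pandas as pd

def flag_significant(df: pd.DataFrame,
                     fold_cutoff: float = 2.0,
                     p_cutoff: float = 0.01) -> pd.DataFrame:
    """Keep rows whose |log2 ratio| exceeds log2(fold_cutoff) with p below p_cutoff."""
    mask = (df["log2_ratio"].abs() > np.log2(fold_cutoff)) & (df["p_value"] < p_cutoff)
    return df.loc[mask]

# Tiny invented example: the target strongly down-regulated, two bystanders unchanged.
table = pd.DataFrame({
    "protein": ["EGFR", "GRB2", "SHC1"],
    "log2_ratio": [-2.4, 0.1, -0.2],   # log2(treated / PBS control)
    "p_value": [0.001, 0.60, 0.45],
})
print(flag_significant(table))
```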

After 4 hrs at room temperature, absorbance at 600 nm was quantified using an Infinite M200 PRO plate reader. Data were plotted using GraphPad Prism software, and curves were generated using non-linear regression with sigmoidal 4PL parameters. Male nude nu/nu mice were treated with 5, 10, or 15 mg/kg CXCL12-Tras via intravenous injection. Blood was collected from the lateral saphenous vein using EDTA capillary tubes at day 0 prior to intravenous injection and at days 3, 5, 7, and 10 post-injection. Plasma was separated by centrifugation at 700 × g at 4ºC for 15 min. To determine the levels of CXCL12-Tras, 1 µL of plasma was diluted into 30 µL of NuPAGE LDS sample buffer, loaded onto a 4-12% Bis-Tris gel, and run at 200 V for 37 min. The gel was incubated in 20% ethanol for 10 min and transferred onto a polyvinylidene difluoride membrane. The membrane was washed with water and then incubated for 5 min with REVERT 700 Total Protein Stain. The blot was washed twice with REVERT 700 Wash Solution and imaged using an Odyssey CLx Imager. The membrane was then blocked in PBS with 0.1% Tween-20 + 5% bovine serum albumin for 30 min at room temperature with gentle shaking. Membranes were incubated overnight at 4ºC with gentle shaking with 800CW goat anti-human IgG in PBS + 0.2% Tween-20 + 5% BSA. Membranes were washed four times with tris-buffered saline + 0.1% Tween-20 and then washed with PBS. Membranes were imaged using an Odyssey CLx Imager, and band intensities were quantified using Image Studio software.

The concept of targeted degradation has emerged in the last two decades as an attractive alternative to conventional inhibition. Small-molecule inhibitors primarily work through occupancy-driven pharmacology, resulting in temporary inhibition in which the therapeutic effect is largely dependent on high potency. PROteolysis TArgeting Chimeras (PROTACs), by contrast, utilize event-driven pharmacology to degrade proteins in a catalytic manner. Traditionally, PROTACs are heterobifunctional small molecules composed of a ligand binding a protein of interest chemically linked to a ligand binding an E3 ligase. Recruitment of the E3 ligase enables the transfer of ubiquitin onto the protein of interest, which is subsequently polyubiquitinated and recognized by the proteasome for degradation. In many cases, PROTACs have proven more efficacious than the corresponding small-molecule inhibitors alone, and several candidate PROTACs have progressed to clinical trials for treating human cancers and other diseases. Despite these successes, small-molecule PROTACs are largely limited to targeting intracellular proteins. Given this challenge, there is a need for novel technologies that expand the scope of targeted degradation to membrane proteins. Recently, our lab developed a method termed antibody-based PROTACs (AbTACs), which utilize bispecific antibody scaffolds to bring membrane-bound E3 ligases into proximity with a membrane protein of interest for targeted degradation. Thus far, AbTACs have shown success in using bispecific IgGs to recruit the E3 ligase RNF43 to programmed death ligand 1 for efficient lysosomal degradation. This suggests that it is possible to use bispecific antibodies to degrade membrane proteins for which antibodies already exist or that have characteristics amenable to recombinant antibody selection strategies. However, degrading multipass membrane proteins, such as GPCRs, remains challenging because few antibodies that bind extracellular epitopes exist for this target class.
Here, we describe a novel approach to expand the scope of AbTACs to target multipass membrane proteins. This approach, termed antibody-drug conjugate PROTACs, consists of an antibody targeting a cell surface E3 ligase chemically conjugated to a small molecule that specifically binds the protein of interest.

VLE identified focal areas of concern in 77% of BE procedures

All patients underwent standard-of-care endoscopy, including WLE in accordance with their institution's standard procedures, followed by VLE examination. Sample VLE features relevant to normal and abnormal structures in the esophagus were used as a general guideline for interpreting VLE images in the study. Investigators were trained on the use of the technology and were supported as needed, onsite and offsite, by technical experts from the sponsor throughout the study. VLE scans were registered longitudinally and rotationally with the WLE image of the esophagus. When a lesion was identified on VLE, the investigator would triangulate the location of the lesion by recording the distance and clock-face position registered with the WLE orientation. This information was then used to guide the investigator in acquiring the tissue using WLE. At the time of the study, this was the method available for targeting a tissue site for sampling. Additional procedure details can be found in Supplementary Material A. Following VLE, each investigator performed any desired diagnostic or therapeutic actions based on their standard of care according to WLE and advanced imaging findings. The highest grade of disease on the pathology results was recorded for advanced-imaging-guided tissue acquisition, targeted endoscopic tissue acquisition, and random biopsies. VLE-guided tissue acquisition refers to the subgroup of advanced-imaging-guided tissue biopsy or resection specimens for which only VLE imaging was used to identify the areas of interest. Investigators were given a post-procedure questionnaire, and data were collected on the clinical workflow and utility of the VLE images. The questions included whether VLE guided either the investigators' tissue sampling or their therapeutic decisions for each patient, and whether VLE identified suspicious areas not seen on WLE or other advanced imaging modalities.

Descriptive statistics were used for quantitative analyses in the study. Because the vast majority of registry patients had suspected or confirmed BE, the investigators elected to focus the initial analysis on this group and to assess potential roles of VLE in BE management. Suspected BE refers to patients with no prior histologic confirmation of BE who had salmon colored mucosa found on endoscopic examination with WLE. The analysis focused on the incremental diagnostic yield improvement of VLE as an adjunct modality on top of standard of care practice. Procedures with confirmed neoplasia were included in the analysis. The procedures were divided into subgroups according to whether the tissue acquisition method was VLE targeted. Dysplasia diagnostic yields were calculated using the number of procedures in each subgroup and the total number of procedures in patients with previously diagnosed or suspected BE. Negative predictive value analysis in patients with prior BE treatment evaluated the utility of VLE on top of standard of care surveillance to predict when there is no dysplasia present. Procedures with negative endoscopy findings and negative VLE findings but with tissue acquisition performed were included in the analysis, and NPVs for both SoC and SoC + VLE were calculated. The primary evaluation focused on HGD and cancer, since the recommended image interpretation criteria were validated for detecting BE-related neoplasia and treatment is recommended for patients with neoplasia per existing guidelines.

From August 2014 through April 2016, 1000 patients were enrolled across 18 trial sites. The majority of patients were male, with a mean age of 64 years. A total of 894 patients had suspected or confirmed BE at the time of enrollment, including 103 patients with suspected BE and 791 patients with prior histological confirmation. Of the confirmed BE patients, 368 had BE with neoplasia, 170 had BE with low-grade dysplasia, 49 had BE indefinite for dysplasia, and 204 had nondysplastic BE.

A total of 56% of patients had undergone prior endoscopic or surgical interventions for BE, including RFA, Cryo, and EMR. Post-procedure questionnaires were completed for all procedures in patients with previously diagnosed or suspected BE. In over half of the procedures, investigators identified areas of concern not seen on either WLE or other advanced imaging modalities. Both VLE and endoscopic BE treatment were performed in 352 procedures. VLE guided the intervention in 52% of these procedures. In 40% of procedures, the depth or extent of disease identified on VLE aided the selection of a treatment modality. Neoplasia was confirmed on tissue sampling performed in 76 procedures within the cohort of patients with previously diagnosed or suspected BE. Among these procedures, VLE-guided tissue acquisition alone found neoplasia in 26 procedures, with an additional case where HGD on random forceps biopsy was upstaged to IMC on VLE-targeted sampling. Histology from these procedures included 16 HGD, 5 IMC, and 6 EAC. Thus, VLE-guided tissue acquisition as an adjunct to standard practice detected neoplasia in an additional 3% of the entire cohort of patients with previously diagnosed or suspected BE and improved the diagnostic yield by at least 55%. Of the 894 BE patients, 393 had no prior history of esophageal therapy. Mean Prague classification scores for this cohort were C = 2.3 cm and M = 4.1 cm. In 199 of these treatment naïve patients, VLE identified at least one focally suspicious area not appreciated during either WLE or other advanced imaging evaluation. Neoplasia was confirmed on histology in 24 procedures. In a subset of these procedures, VLE alone identified neoplasia, as all random biopsies for these patients were negative. Additionally, one case where HGD was found on random forceps biopsy was upstaged to IMC on VLE-targeted sampling. In this group, VLE-guided tissue acquisition increased neoplasia detection by 700%. For these untreated BE patients, VLE-guided tissue acquisition as an adjunct to standard practice detected neoplasia in an additional 5.3% of procedures.

The number needed to test with VLE to identify neoplasia not detected with standard of care technique was 18.7. An average of 1.7 additional sites per patient required targeted tissue acquisition when suspected regions were identified using VLE, compared to an average of 11 random biopsies per patient. A sub-analysis was conducted in the 238 patients with prior BE treatment and either no visible BE or an irregular z-line. From this group, 82% had no focally suspicious findings on WLE examination, and two of these procedures were subsequently diagnosed with neoplasia. Thus, the NPV of WLE was 99% for neoplasia. When combining WLE/NBI with VLE as an adjunct, we found that 49% of the post-treatment procedures had no suspicious WLE or VLE findings. Neoplasia was found in none of these procedures, corresponding to a negative predictive value of 100%.

Advanced imaging techniques including high-definition WLE, NBI, CLE, and chromoendoscopy have continued to improve the evaluation of Barrett's esophagus. However, these provide only superficial epithelial evaluation. VLE breaks this boundary by imaging the mucosa, submucosa, and frequently, down to the muscularis propria. It does so while evaluating a large tissue area in a short period of time without sacrificing resolution. This 1000-patient multi-center registry assessed the clinical utility of VLE for the management of esophageal disorders and has demonstrated its potential as an adjunct tool for detecting disease. Abnormalities not seen with other imaging were found on VLE in over half of the procedures. Endoscopists using VLE in this study felt that it guided tissue acquisition in over 70% of procedures and BE treatment in the majority of procedures where interventions were performed. VLE visualization of subsurface tissue structures allows comprehensive morphological evaluation; in more than half of procedures, physicians reported suspicious areas seen only on VLE even when other advanced imaging modalities were also used. Although subjective, these results still provide useful insight into the physicians' perception of the technology. This study found that VLE as an adjunct modality increased neoplasia diagnosis by 3% and improved the neoplasia diagnostic yield by 55% over standard practice and other advanced imaging modalities. For a treatment naïve population with no focally suspicious regions found on WLE, VLE-guided tissue acquisition improved neoplastic diagnostic yield by 700%. This finding is impressive, particularly as these procedures were performed prior to the release of a real time laser marking system.
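The yield and predictive-value figures above reduce to simple counts. As a sanity check, the sketch below recomputes the post-treatment NPVs and the number needed to test from the values reported in the text; the percentages are rounded, so the results are approximate rather than exact study statistics.

```python
# Recomputing headline metrics from counts reported in the text (approximate,
# since the published percentages are rounded).

post_treatment = 238             # procedures with prior BE treatment in the sub-analysis

# WLE alone: 82% had no focal findings, 2 of these harbored neoplasia.
wle_negative = round(0.82 * post_treatment)
npv_wle = (wle_negative - 2) / wle_negative

# WLE + VLE: 49% had no findings on either modality, none harbored neoplasia.
combined_negative = round(0.49 * post_treatment)
npv_combined = combined_negative / combined_negative   # no false negatives

print(f"NPV (WLE alone) ≈ {npv_wle:.0%}")      # ~99%
print(f"NPV (WLE + VLE) ≈ {npv_combined:.0%}") # 100%

# Number needed to test: procedures tested per additional neoplasia found by VLE alone.
def number_needed_to_test(procedures_tested, additional_cases_found):
    return procedures_tested / additional_cases_found
```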

Laser marking has since been evaluated by Alshelleh et al., who found a statistically significant improvement in neoplasia yield using the VLE laser marking system compared to the standard Seattle protocol. In this registry, an additional 2.3 sites per patient on average required guided biopsy or resection when suspected regions were identified using VLE, while an average of 15.8 random biopsies per patient were performed in the cohort of patients with previously diagnosed or suspected BE. In general, higher tissue sampling density leads to an increased chance of detecting dysplasia due to its focal nature; therefore, taking additional biopsies should increase the diagnostic yield. However, the potential for advanced imaging such as VLE to provide targeted, high yield biopsies could reduce the total number of biopsies necessary to adequately evaluate the diseased mucosa with the Seattle protocol. The combination of a focally unremarkable WLE and VLE examination provided a negative predictive value of 100% for neoplasia in the post-treatment population. Although not reaching statistical significance due to limited sample size, these early results provide promise for the utility of VLE to better predict when there is no disease present, i.e. a 'clean scan.' Such a tool could then potentially allow for extended surveillance intervals, reducing the number of endoscopies needed to manage the patient's needs. The utility of this analysis is subject to several limitations. As a post-market registry study, there was no defined protocol for imaging, image interpretation, and tissue acquisition, and there was no control group for matched population comparisons. The early experience of users in VLE image interpretation may have resulted in overcalling areas of concern. Abnormalities located deeper in the esophageal wall could be targeted with forceps biopsies at one site, while other sites would utilize endoscopic resection techniques that are more likely to remove the target. All of these discrepancies could affect any calculations regarding the adjunctive yield of VLE-targeted sampling. Further analysis of the global detection rate of dysplasia by site did not reveal any statistical difference. At the time of this study, image interpretation was performed using previously published guidelines for detection of neoplasia in Barrett's esophagus with OCT. Challenges with the histopathological diagnosis of LGD limited the development of VLE criteria for LGD. As such, the analyses in this study focused on neoplasia. Current guidelines suggest that treatment of LGD is acceptable, so detection of LGD with VLE should be addressed in a future study. Additionally, the characteristic image features that maximize sensitivity and specificity of confirmatory biopsies must be optimized. Recently, Leggett et al. established an updated step-wise diagnostic algorithm to detect dysplasia based on VLE features similar to those used in this study. This diagnostic algorithm achieved 86% sensitivity, 88% specificity, and 87% diagnostic accuracy for detecting BE dysplasia, with almost perfect interobserver agreement among three raters. Further optimization of VLE image features for identifying dysplasia and neoplasia is ongoing. Other limitations of the study include the lack of central pathology for interpretation of specimens, which could affect the reported benefit of VLE in finding dysplasia. However, this manuscript focuses on neoplasia, where there is less interobserver variability compared to low-grade dysplasia.
Finally, as a non-randomized study conducted mostly at large BE referral centers with a possibly higher pre-test probability of neoplasia, it is plausible that the validity of the findings in a community setting is limited. However, the large sample size, its heterogeneity, and the variation in technique by site likely restore at least some of the external validity of the findings. This registry-based study demonstrates the potential for VLE to fill clinically relevant gaps in our ability to evaluate and manage BE. Physicians perceived significant value of VLE across the BE surveillance and treatment paradigm. Biopsy confirmation demonstrated benefits of VLE for both treatment naïve and post treatment surveillance, although pathology results did not always align with physician perception, most likely due to limitations of the technology and image criteria at the time of the study. Given expected refinement and validation of image interpretation, and the availability of laser marking for more accurate biopsy targeting, VLE is well positioned to enhance our ability to identify and target advanced disease and enable a more efficient endoscopic examination with a higher yield of tissue acquisition.

The program included nine in-class nutrition lessons coordinated with garden activities

These spheres of influence are multifaceted and include factors such as income, ethnicity and cultural values, and settings such as schools and retail food establishments. Consequently, measurable progress in reducing childhood obesity requires a multifaceted approach: a coordinated, comprehensive program that integrates messages regarding nutrition, physical activity and health with a child's immediate environment and surrounding community. Adequate access to healthy food and physical recreation opportunities is essential to promote sustained behavior changes. Schools and after-school programs provide a unique setting for this approach, as they provide access to children, parents, families, educators, administrators and community members. The purpose of this article is to examine garden-enhanced nutrition education and Farm to School programs. Further, a questionnaire was developed and distributed to UC Cooperative Extension advisors and directors to assess their role in garden-enhanced nutrition education and Farm to School programs. Results from this questionnaire highlight UCCE's integral role in this field.

School gardens were first implemented in the United States at the George Putnam School in Roxbury, Massachusetts, in 1890, and by 1918 there was at least one in every state. During World Wars I and II, more than a million children were contributing to U.S. food production with victory gardens, which were part of the U.S. School Garden Army Program. More recently, incorporating gardens into the educational environment has become more popular worldwide, due partly to the appreciation of the importance of environmental awareness and integrated learning approaches to education.

As the agricultural powerhouse of the nation, California is poised to serve as a model for agriculture-enhanced nutrition and health education. Within California, the impetus to establish gardens in every school gained momentum in 1995, when then-State Superintendent of Public Instruction Delaine Eastin launched an initiative to establish school gardens as learning laboratories or outdoor classrooms. Assembly Bill 1535 created the California Instructional School Garden Program, allowing the California Department of Education to allocate $15 million for grants to promote, develop and sustain instructional school gardens. About 40% of California schools applied for these grants, and $10.9 million was awarded. It has been repeatedly shown that garden-enhanced nutrition education has a positive effect on children's fruit and vegetable preferences and intakes. For example, after a 17-week standards-based, garden-enhanced nutrition education program in which students learned, among other things, that plants and people need similar nutrients, fourth-grade students preferred a greater variety of vegetables than did control students. Many of these improvements persisted and were maintained at a 6-month follow-up assessment. In a similar study of a 12-week program combining nutrition lessons with horticulture, sixth grade students likewise improved their vegetable preferences and consumption. In addition, after a 13-week garden-enhanced nutrition program, middle school children ate a greater variety of vegetables than they had initially. While garden-enhanced nutrition education is one innovative method to improve children's vegetable preferences and intake, researchers and educators consistently call for multi-component interventions to have the greatest impact on student health outcomes. Suggested additional components include classroom education, Farm to School programs, healthy foods available on campus, family involvement, school wellness policies and community input.

Moreover, the literature indicates that providing children with options to make healthy choices rather than imposing restrictions has long-term positive effects on weight. Taken together, it is reasonable to suggest that we are most likely to achieve long-lasting beneficial changes by coordinating a comprehensive garden-enhanced nutrition education program with school wellness policies, offering healthy foods on the school campus, fostering family and community partnerships and incorporating regional agriculture.

Farm to School programs connect K-12 schools and regional farms, serving healthy, local foods in school cafeterias or classrooms. General goals include improving student nutrition; providing agricultural, health and nutrition education opportunities; and supporting small and mid-sized local and regional farms. Born through a small group of pilot projects in California and Florida in the late 1990s, Farm to School is now offered in all 50 states, with more than 2,000 programs nationwide in 2010. The dramatic increase in the number and visibility of Farm to School programs can likely be attributed to factors including heightened public awareness of childhood obesity, expanding access to local and regional foods in school meals, and concerns about environmental and agricultural issues as well as the sustainability of the U.S. food system. Farm to School programs provide a unique opportunity to address both nutritional quality and food system concerns. From a nutrition and public health standpoint, these programs improve the nutritional quality of meals served to a large and diverse population of children across the country. From a food systems and economic perspective, Farm to School programs connect small and mid-sized farms to the large, stable and reliable markets created by the National School Lunch Program.

Farm to School programs require partnerships that include a state or community organization, a local farmer or agricultural organization, a school nutrition services director and parents. Historically, Farm to School programs are driven, supported and defined by a community. Because they reflect the diverse and unique communities they serve, individual Farm to School programs also vary from location to location, in addition to sharing the characteristics described above. The first national Farm to School programs were initiated in 2000 and soon gained momentum in California, with support from the USDA Initiative for Future Agriculture and Food Systems as well as the W.K. Kellogg Foundation. In 2005, Senate Bill 281 established the California Fresh Start Program to encourage and support additional portions of fresh fruits and vegetables in the School Breakfast Program. This bill also provided the California Department of Education with $400,000 for competitive grants to facilitate developing the California Fresh Start Program. Concomitant with the growth of Farm to School programs, the National Farm to School Network was formed in 2007 with input from over 30 organizations and today engages food service, agricultural and community leaders in all 50 states. The evolution of this network has influenced school food procurement and nutrition/food education nationwide.

Evaluations of Farm to School impact have been conducted since the program's inception. A 2008 review of 15 Farm to School evaluation studies, which were conducted between 2003 and 2007, showed that 11 specifically assessed Farm to School-related dietary behavior changes. Of these 11 studies, 10 corroborated the hypothesis that increased exposure to fresh Farm to School produce results in positive dietary behavior changes. In addition, a 2004-2005 evaluation of plate waste at the Davis Joint Unified School District salad bar showed that 85% of students took produce from the salad bar and that 49% of all selected salad bar produce was consumed. Additionally, school record data demonstrate that throughout the 5 years of the 2000-to-2005 Farm to School program, overall participation in the school lunch program ranged from a low of 23% of enrollment to a high of 41%, with an overall average of 32.4%. This compared to 26% participation before salad bars were introduced. Overall participation in the hot lunches averaged 27% of enrollment. While Farm to School evaluations generally indicate positive outcomes, conclusive statements regarding the overall impact of such programs on dietary behavior cannot be made. This can be attributed to the substantial variation in Farm to School structure from district to district, and variation in the study design and methodologies of early program evaluations. Methods for evaluating dietary impact outcomes most commonly include using National School Lunch Program participation rates and food production data as proxies for measuring consumption.

Additional evaluation methods include using self-reported measures of consumption, such as parent and student food recalls or frequency questionnaires, and direct measures of consumption, such as school lunch tray photography and plate waste evaluation. There are relatively few studies using an experimental design to evaluate the impact of Farm to School programs on fruit and vegetable intake, and even fewer of these studies use controls. Moreover, the Farm to School evaluation literature has no peer-reviewed dietary behavior studies using a randomized, controlled experimental design, which is undoubtedly due to the complex challenges inherent in community research. For example, schools may view the demands of research as burdensome or may question the benefits of serving as control sites. Due partly to its year-round growing season, California has more Farm to School programs than most, if not all, states. UC Davis pioneered some of the early uncontrolled studies quantifying Farm to School procurement, costs and consumption. UC ANR is now conducting new controlled studies to collect more rigorous data, which will differentiate outcomes of Farm to School programs from those due to other environmental factors. To clarify the role of UC ANR in garden-based nutrition education and Farm to School programs, a questionnaire was developed and administered through Survey Monkey in November 2011. This survey was sent to 60 UCCE academic personnel, including county directors; Nutrition, Family and Consumer Sciences advisors; 4-H Youth Development advisors; and others. For the purposes of this questionnaire, Farm to School was broadly defined as a program that connects K-12 schools and local farms and has the objectives of serving healthy meals in school cafeterias; improving student nutrition; providing agriculture, health and nutrition education; and supporting local and regional farmers. Survey. A cover letter describing the purpose of the survey and a link to the questionnaire were emailed to representatives from all UCCE counties. The questionnaire was composed of 26 items that were either categorical "yes/no/I'm not sure" questions or open-ended questions allowing for further explanation. An additional item was provided at the end of the questionnaire for comments. Respondents were instructed to return the survey within 11 days. A follow-up email was sent to all participants after 7 days. This protocol resulted in a 28% response rate, typical of a survey of this kind. Respondents represented 21 counties, with some representing more than one county; in addition, one was a representative from a campus-based unit of ANR. Questionnaire respondents included three county directors, six NFCS advisors, four 4-HYD advisors, one NFCS and 4-HYD advisor, and three other related UCCE academic personnel. The responding counties were Riverside, San Mateo and San Francisco; San Bernardino, Stanislaus and Merced; Contra Costa, Yolo, Amador, Calaveras, El Dorado and Tuolumne; Mariposa, Butte, Tulare, Alameda, Shasta-Trinity, Santa Clara, Ventura and Los Angeles. Farm to School and school gardens. All 21 counties responding to the survey reported that they had provided a leadership role in school gardens, after-school gardens and/or Farm to School programs during the previous 5 years. Five out of 17 respondents reported that their counties provided a leadership role in Farm to School programs.
Fourteen out of 17 respondents indicated that they individually played a leadership role in school garden programs, including serving as a key collaborator on a project, organizing and coordinating community partners, acting as school/agriculture stakeholders and/or serving as a principal investigator, co-principal investigator or key collaborator on a research study. The most frequently reported reasons for having school and after-school gardens were to teach nutrition, enhance core academic instruction and provide garden produce. Additional reasons cited in the free responses included to study the psychological impacts of school gardens, enhance science and environmental education, teach composting, increase agricultural literacy, teach food origins, participate in service learning and provide a Gardening Journalism Academy. Reasons for success. The factors most frequently cited as contributing to successful school and after-school garden and Farm to School programs were community and nonparent volunteers, outside funding and enthusiastic staff. The 17 respondents indicated that the success of these programs was also aided by multidisciplinary efforts within UC ANR, the Farm Bureau, the Fair Board and 4-H Teens as Teachers. Barriers. The most common factors cited as barriers to school and after-school gardens and Farm to School programs were lack of time and lack of knowledge and experience among teachers and staff. Additional barriers included lack of staff, cutbacks, competing programs for youth and lack of after-school garden-related educational materials for mixed-age groups. With regard to Farm to School programs, one respondent perceived increased expense to schools, an absence of tools to link local farmers with schools, a lack of growers and a lack of appropriate facilities in school kitchens.

Forward osmosis technology is also commonly used for food and drug processing

Areas with high N2O emissions have relatively low oxygen concentrations due to expanded nutrient runoff from land. To diminish these negative environmental impacts, fertigation treatment could reduce the amount of nitrogen and nutrients input to the soil, prevent over-fertilization, and limit excess nutrient runoff to rivers. Forward osmosis has many advantages with regard to physical footprint. A high wastewater recovery rate, minimized resupply, and low energy cost can facilitate the sustainability of forward osmosis. In addition, forward osmosis has a lower membrane fouling propensity compared to pressure-driven membrane processes. Forward osmosis is usually applied as pretreatment for reverse osmosis, and the total energy consumption of a combined FO and RO system is lower than that of reverse osmosis alone. Moreover, osmotic backwashing can be effective for restoring membrane performance while keeping energy consumption low. When nanofiltration serves as post-treatment for fertilizer drawn forward osmosis, backwashing can recover the excess fertilizer replenishment and return it as concentrated fertilizer draw solution. The energy consumption of FDFO brackish water recovery using cellulose triacetate is affected by draw solution concentration, flow rates, and membrane selection. Membrane orientation and the flow rates have a minor effect on specific energy consumption compared to draw solution concentration. A diluted fertilizer draw solution can boost the system's performance, while a higher draw solution concentration can lower the specific energy consumption.

Moreover, a lower flow rate combined with a higher draw solution concentration can bring the energy consumption of fertilizer drawn forward osmosis to its lowest level. Adding a nanofiltration step increases the energy consumption of the system; however, nanofiltration is necessary for desalination and direct fertigation treatment. The energy consumption of the nanofiltration process is determined by operating factors such as recovery rate, membrane lifetime, and membrane cleaning. Forward osmosis technology achieves a 40-50% reduction in specific energy consumption compared to other alternatives. As a result, FO technology has the potential for wide adoption in drinking water treatment. Other areas of application for FO include seawater desalination/brine removal, direct fertigation, wastewater reclamation, and wastewater minimization. Without the draw solution recovery step, forward osmosis can be applied as osmotic concentration. For example, fertilizer drawn forward osmosis is widely accepted for freshwater supply and direct fertigation. However, in terms of the evaporative desalination process, it is more practical to treat water with a lower total dissolved solids content/salinity. Forward osmosis technology can be combined with other treatment methods such as reverse osmosis, nanofiltration, or ultrafiltration for different water treatment purposes. To be more specific, forward osmosis can serve as an alternative pre-treatment in a conventional filtration/separation system; an alternative process to a conventional membrane treatment system; or a post-treatment process to recycle the volume of excess waste. A standalone forward osmosis process is usually combined with additional post-treatment to meet the water quality standards for different purposes.
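For orientation only, specific energy consumption in a circulation-driven FO stage can be approximated as pump energy per unit of permeate produced. The sketch below uses a generic hydraulic pump-power formula and hypothetical operating values; it is not drawn from the studies cited above.

```python
# Rough illustration of specific energy consumption (SEC) for an FO stage,
# computed as circulation-pump energy per cubic metre of permeate.
# All operating values are hypothetical placeholders, not data from the cited studies.

def pump_power_kw(flow_m3_per_h, pressure_drop_bar, pump_efficiency=0.75):
    """Hydraulic pump power (kW) for a given flow and channel pressure drop."""
    flow_m3_per_s = flow_m3_per_h / 3600.0
    pressure_pa = pressure_drop_bar * 1e5
    return flow_m3_per_s * pressure_pa / pump_efficiency / 1000.0

feed_flow = 10.0   # m3/h, feed-side circulation
draw_flow = 10.0   # m3/h, draw-side circulation
dp_bar = 0.5       # bar, channel pressure drop (FO needs no high-pressure pump)
permeate = 2.0     # m3/h of water permeating into the draw solution

total_power = pump_power_kw(feed_flow, dp_bar) + pump_power_kw(draw_flow, dp_bar)
sec_kwh_per_m3 = total_power / permeate
print(f"SEC ≈ {sec_kwh_per_m3:.3f} kWh/m3")
```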

Forward osmosis has been researched extensively. In this review, we focused on fertilizer drawn forward osmosis, which can not only remove brine but also reduce multiple nutrient inputs such as nitrogen, phosphorus, and potassium. Since a proper draw solution can reduce concentration polarization, draw solution selection is vital for both FO and FDFO processes. Moreover, different fertilizer draw solutions have different influences on energy consumption. The nutrient concentrations of treated water are controllable using the fertilizer drawn forward osmosis treatment method. The composition of nutrients can be adjusted in the draw solution to produce water with different ratios of nutrients, which makes fertilizer drawn forward osmosis a nearly ideal treatment method for direct fertigation. For the purpose of reducing N2O emissions, the removal rate of nitrogen in fertigation water needs to be improved using fertilizer drawn forward osmosis and nanofiltration. When nanofiltration is applied as post-treatment with fertilizer drawn forward osmosis, the nitrogen removal rate can reach up to 82.69% when using SOA as the draw solution. This figure shows that fertigation treatment can reach a higher standard of water quality by attenuating nitrogen concentrations. As a result, lower nitrogen input in fertigation can significantly decrease nitrous oxide emissions from the soil for sustainable agricultural use. Forward osmosis can also be combined with other treatment methods to address the freshwater shortage problem. Beyond the traditional seawater desalination treatment incorporating forward osmosis and reverse osmosis, the hybrid process of reverse osmosis and fertilizer drawn forward osmosis can remove brine from water and lower the final nutrient concentration with a higher recovery rate. Lastly, the water flux, recirculation rate, draw solution concentration, membrane lifetime, and membrane cleaning can all be adjusted to minimize energy consumption as much as possible. In conclusion, FO and FDFO technologies are both environmentally friendly and economically viable for desalination and fertigation.

Evapotranspiration estimation is important for precision agriculture, especially precision water management. Mapping ET temporally and spatially can identify variations in the field, which is useful for evaluating soil moisture and assessing crop water status. ET estimation can also benefit water resource management and weather forecasting. ET is a combination of two separate processes, evaporation and transpiration. Evaporation is the process whereby liquid water is converted to water vapor through latent heat exchange. Transpiration is the vaporization of liquid water contained in plant tissues and the removal of that vapor to the atmosphere. The current theory of transpiration comprises three steps. First, the conversion of liquid-phase water to water vapor causes canopy cooling through latent heat exchange; thus, canopy temperature can be used as an indicator of ET. Second, water vapor diffuses from inside the plant stomata on the leaves to the surrounding atmosphere. Third, atmospheric mixing by convection or diffusion transports vapor near the plant surfaces to the upper atmosphere or off-site away from the plant canopy. Usually, evaporation and transpiration occur simultaneously. Direct ET measurement methods, however, are usually point-specific or area-weighted and cannot be extended to a large scale because of the heterogeneity of the land surface. The experimental equipment is also costly and requires substantial expense and effort; lysimeters, for example, are available only to a small group of researchers. Indirect methods include energy balance methods and remote sensing methods. Among energy balance methods, the Bowen ratio and eddy covariance have been widely used for ET estimation, but they too are area-weighted measurements. Remote sensing techniques can detect variations in vegetation and soil conditions over space and time and have therefore been considered some of the most powerful methods for mapping and estimating spatial ET over the past decades. Remote sensing models have been useful in accounting for the spatial variability of ET at regional scales when using satellite platforms such as Landsat and ASTER. Since satellite platforms came into use, several remote sensing models have been developed to estimate ET, such as the surface energy balance algorithm for land, mapping evapotranspiration with internalized calibration, the dual temperature difference, and the Priestley-Taylor TSEB. Remote sensing techniques can provide information such as the normalized difference vegetation index, leaf area index, surface temperature, and surface albedo, and research on these parameters has been discussed by different researchers. As a new remote sensing platform, small UAVs have attracted strong interest for precision agriculture, especially on heterogeneous crops such as vineyards and orchards.
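The energy balance methods mentioned above all rest on closing the surface energy budget, with latent heat flux obtained as the residual LE = Rn - G - H. The sketch below shows that residual calculation and the conversion of latent heat flux to an equivalent ET depth, using hypothetical midday flux values.

```python
# Latent heat flux as the residual of the surface energy balance, LE = Rn - G - H,
# and conversion to an equivalent evapotranspiration depth.
# Flux values are hypothetical placeholders for illustration only.

LAMBDA_V = 2.45e6   # J/kg, latent heat of vaporization of water (~20 °C)

def latent_heat_flux(rn, g, h):
    """Residual latent heat flux (W/m2) from net radiation, soil heat flux, sensible heat flux."""
    return rn - g - h

def et_mm_per_hour(le_w_m2):
    """Convert latent heat flux (W/m2) to an ET depth rate (mm/h)."""
    evap_kg_m2_s = le_w_m2 / LAMBDA_V   # kg m-2 s-1, i.e. mm of water per second
    return evap_kg_m2_s * 3600.0

rn, g, h = 600.0, 80.0, 200.0           # W/m2, example midday values
le = latent_heat_flux(rn, g, h)
print(f"LE = {le:.0f} W/m2  ->  ET ≈ {et_mm_per_hour(le):.2f} mm/h")
```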

UAVs overcome some of the remote sensing limitations faced by satellites. For example, satellite remote sensing is prone to cloud cover, while UAVs fly below the clouds. Unlike satellites, UAVs can be operated at any time if the weather is within operating limitations. Satellites have fixed flight paths; UAVs are more mobile and adaptable for site selection. Mounted on UAVs, lightweight sensors such as RGB cameras, multispectral cameras, and thermal infrared cameras can be used to collect high-resolution images. The higher temporal and spatial resolution of the images, relatively low operational costs, and nearly real-time image acquisition make UAVs an ideal platform for mapping and monitoring ET. Many researchers have already used UAVs for ET estimation, as shown in Table 1. For example, Ortega-Farías et al. implemented a remote sensing energy balance (RSEB) algorithm for estimating energy components in an olive orchard, such as incoming solar radiation, sensible heat flux, soil heat flux, and latent heat flux. Optical sensors were mounted on a UAV to provide high spatial resolution images. Using the UAV platform, the experimental results showed that the RSEB algorithm can estimate latent heat flux and sensible heat flux with errors of 7% and 5%, respectively, demonstrating that UAVs can serve as an excellent platform to evaluate the spatial variability of ET in an olive orchard.

There are two objectives for this paper: first, to examine current applications of UAVs for ET estimation; second, to explore the current uses and limitations of UAVs, such as UAVs' technical and regulatory restrictions, camera calibrations, and data processing issues. There are many other ET estimation methods, such as the surface energy balance index, crop water stress index, simplified surface energy balance index, and surface energy balance system, which have not been applied with UAVs; they are therefore out of the scope of this article. This study is not intended to provide an exhaustive review of all direct or indirect methods that have been developed for ET estimation. The rest of the paper is organized as follows. Section 2 introduces the different UAV types being used for ET estimation, compares several commonly used lightweight sensors, and discusses the ET estimation methods used with UAV platforms, as shown in Table 1. In Section 3, results from different ET estimation methods and models are compared and discussed. Challenges and opportunities, such as thermal camera calibration, UAV path planning, and image processing, are discussed in Section 4. Lastly, the authors share views regarding ET estimation with UAVs in future research and draw concluding remarks. Many kinds of UAVs are used for different research purposes, including ET estimation; some popular UAV platforms are shown in Figure 1. Typically, there are two types of UAV platforms, fixed-wing and multirotor. Fixed-wing UAVs can usually fly longer with a larger payload, typically for about 2 h, which is suitable for large fields. Multirotors can fly for about 30 min, which is suitable for short flight missions. Both types have been used in agricultural research and show great potential for ET estimation. Mounted on UAVs, many sensors can be used for collecting UAV imagery, such as multispectral and thermal images, for ET estimation.
For example, the Survey 3 camera has four bands (blue, green, red, and near-infrared), an image resolution of 4608 × 3456 pixels, and a spatial resolution of 1.01 cm/pixel. The Survey 3 camera has a fast interval timer: 2 s for JPG mode and 3 s for RAW + JPG mode. A faster interval timer benefits the overlap design for UAV flight missions, for example by reducing flight time and enabling higher overlap. Another commonly used multispectral camera is the Rededge M. The Rededge M has five bands: blue, green, red, near-infrared, and red edge. It has an image resolution of 1280 × 960 pixels, with a 46° field of view. With a Downwelling Light Sensor (DLS), a 5-band light sensor that connects to the camera, the Rededge M can measure the ambient light during a flight mission for each of the five bands and record the light information in the metadata of the images captured by the camera. After camera calibration, the information detected by the DLS can be used to correct for lighting changes during a flight, such as changes in cloud cover during a UAV flight. The thermal camera ICI 9640 P has been used for collecting thermal images in previous work. This thermal camera has a resolution of 640 × 480 pixels, a spectral band from 7 to 14 µm, dimensions of 34 × 30 × 34 mm, and a specified accuracy of ±2 °C. A Raspberry Pi Model B computer can be used to trigger the thermal camera during flight missions. The SWIR 640 P-Series, a shortwave infrared camera, can also be used for ET estimation. Its spectral band spans 0.9 µm to 1.7 µm, its accuracy is ±1 °C, and it has a resolution of 640 × 512 pixels.
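Given co-registered red and near-infrared bands from cameras such as these, the NDVI mentioned earlier is a simple per-pixel ratio. The sketch below assumes the two bands are already calibrated reflectance arrays; the values shown are synthetic.

```python
# Per-pixel NDVI from co-registered red and near-infrared reflectance arrays.
# Band arrays here are synthetic; in practice they would come from the
# radiometrically calibrated multispectral imagery described above.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed element-wise."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

nir_band = np.array([[0.45, 0.50], [0.55, 0.60]])   # synthetic reflectance values
red_band = np.array([[0.08, 0.10], [0.12, 0.05]])
print(ndvi(nir_band, red_band))
```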

Crop yields can also vary endogenously in response to demand and price changes

Typically, they allow for endogenous structural adjustments in land use, management, commodity production, and consumption in response to exogenous scenario drivers. However, with several components of productivity parameters endogenously determined, it can be difficult to isolate the potential role of livestock efficiency changes due to technological breakthroughs or policy incentives. For example, as production decreases due to decreasing demand, so could productivity. In this case, a design feature can become a design flaw for sensitivity analysis and policy assessment focused on individual key system parameters, even if model results can be further decomposed to disentangle endogenous and exogenous productivity contributions. Accounting-based land sector models, such as the FABLE Calculator, which we also employ in this study, can offer similarly detailed sector representation without the governing market mechanisms, thus allowing fully tunable parameters for exploring policy impacts. This feature facilitates quantifying uncertainty and bounding estimates through sensitivity analyses. The FABLE Calculator is a sophisticated land use accounting model that can capture several of the key determinants of agricultural land use change and GHG emissions without the complexity of an optimization-based economic model. Its high degree of transparency and accessibility also makes it an appealing tool to facilitate stakeholder engagement.

This paper explores the impacts of healthier diets and increased crop yields on U.S. GHG emissions and land use, as well as how these impacts vary across assumptions of future livestock productivity and ruminant density in the U.S. We employ two complementary land use modeling approaches.

The first is the FABLE Calculator, a land use and GHG accounting model based on biophysical characteristics of the agricultural and land use sectors with high agricultural commodity representation. The second is a spatially explicit partial equilibrium optimization model for global land use systems (GLOBIOM). The combination of these modeling approaches allows us to provide both detailed representation of agricultural commodities with high flexibility in scenario design and a dynamic representation of land use in response to known economic forces, qualities that are difficult to achieve in a single model. Both modeling frameworks allow us to project U.S. national-scale agricultural production, diets, land use, and carbon emissions and sequestration to 2050 under varying policy and productivity assumptions. Our work makes several advances in sustainability research. First, using agricultural and forestry models that capture market and intersectoral dynamics, this is the first non-LCA study to examine the sustainability of a healthier average U.S. diet. Second, using two complementary modeling approaches, this is the first study to explore the GHG and land use effects of the interaction of healthy diets and agricultural productivity. Specifically, we examined key assumptions about diet, livestock productivity, ruminant density, and crop productivity. Two of the key production parameters we consider (livestock productivity and stocking density) are affected by a transition to healthier diets but have not been extensively discussed in the agricultural economic modeling literature. Third, we isolate the effects of healthier diets in the U.S. alone, in the rest of the world, and globally, which is especially important given the comparative advantage of U.S. agriculture in global trade.

To model multiple policy assumptions across dimensions of food and land use and have full flexibility in terms of parameter assumptions and choice of underlying data sets, we customized a land use accounting model built in Excel, the FABLE Calculator, for the U.S. Below we describe the design of the Calculator; for more details we direct the reader to the complete model documentation.

The FABLE Calculator represents 76 crop and livestock products using data from the FAOSTAT database. The model first specifies demand for these commodities under the selected scenarios; the Calculator then computes agricultural production and other metrics, land use change, food consumption, trade, GHG emissions, water use, and land for biodiversity. The key advantages of the Calculator include its speed, the number and diversity of scenario design elements, its simplicity, and its transparency. However, unlike economic models using optimization techniques, the Calculator does not consider commodity prices in generating the results, does not have any spatial representation, and does not represent different production practices. The following assumptions can be adjusted in the Calculator to create scenarios: GDP, population, diet composition, population activity level, food waste, imports, exports, livestock productivity, crop productivity, agricultural land expansion or contraction, reforestation, climate impacts on crop production, protected areas, post-harvest losses, and biofuels. Scenario assumptions in the Calculator rely on "shifters," or time-step-specific relative changes, that are applied to an initial historic value using a user-specified implementation rate. The Calculator performs a model run through a sequence of steps, as follows: calculate human demand for each commodity; calculate livestock production; calculate crop production; calculate pasture and cropland requirements; compare the land use requirements with the available land, accounting for imposed restrictions and reforestation targets; calculate the amount of feasible pasture and cropland; calculate the feasible crop and livestock production; calculate feasible human demand; and calculate indicators. See Figure S1 in the Supplementary Materials for a diagram of these steps. Using U.S. national data sources, we modified or replaced the US FABLE Calculator's default data inputs and growth assumptions, which are based on Food and Agriculture Organization data.
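To make the accounting chain concrete, the sketch below walks a single hypothetical commodity through a drastically simplified version of the demand-to-land sequence described above. It illustrates the logic only; it is not the Calculator's actual implementation, and all parameter values are placeholders.

```python
# Highly simplified sketch of a FABLE-style accounting chain for one commodity.
# The single-commodity structure and all values are illustrative; the actual
# Calculator tracks 76 commodities plus trade, waste, feed, and other terms.

def demand_kt(population_m, kcal_per_capita_day, kcal_per_kg):
    """Annual food demand in kilotonnes from per-capita dietary intake."""
    kg_per_capita_year = kcal_per_capita_day * 365.0 / kcal_per_kg
    total_kg = population_m * 1e6 * kg_per_capita_year
    return total_kg / 1e6                     # 1 kilotonne = 1e6 kg

def cropland_kha(production_kt, yield_t_per_ha):
    """Cropland requirement (thousand ha) for a given production level and yield."""
    return production_kt * 1000.0 / yield_t_per_ha / 1000.0

pop_2050 = 380.0            # million people (hypothetical projection)
kcal_share = 500.0          # kcal/person/day allocated to this commodity (hypothetical)
kcal_per_kg = 3284.0        # energy content used for the weight conversion (approximate)
yield_2050 = 4.0            # t/ha after applying an exogenous yield shifter (hypothetical)

prod = demand_kt(pop_2050, kcal_share, kcal_per_kg)
land = cropland_kha(prod, yield_2050)
print(f"production ≈ {prod:,.0f} kt, cropland ≈ {land:,.0f} kha")
```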

Specifically, we used crop and livestock productivity assumptions from the U.S. Department of Agriculture, grazing/stocking intensity from the U.S. literature, miscanthus and switchgrass bioenergy feedstock productivity assumptions from the Billion Ton study, updated beef and other commodity exports using USDA data, and created a "Healthy Style Diet for Americans" using the 2015-2020 USDA Dietary Guidelines for Americans. See SM Table S6 for all other US Calculator data and assumptions. We used these U.S.-specific data updates to construct U.S. diet, yield, and livestock scenarios and sensitivities. See the model documentation for a full description of the other assumptions and data sources used in the default version of the FABLE Calculator.

As a complement to the FABLE Calculator's exogenously determined trade flows, we used GLOBIOM, a widely used and well-documented global spatially explicit partial equilibrium model of the forestry and agricultural sectors (documentation can be found at the GLOBIOM GitHub development site), to capture the dynamics of endogenously determined international trade. Unlike the FABLE Calculator, GLOBIOM is a spatial equilibrium economic optimization model based on calibrated demand and supply curves as typically employed in economic models. GLOBIOM represents 37 economic production regions, with regional consumers optimizing consumption based on relative output prices, income, and preferences. The model maximizes the sum of consumer and producer surplus by solving for market equilibrium, using the spatial equilibrium modeling approach described in McCarl and Spreen and Takayama and Judge. Product-specific demand curves and growth rates over time allow for selective analysis of preference or dietary change by augmenting demand shift parameters over time to reflect differences in relative demand for specific commodities. Production possibilities in GLOBIOM apply spatially explicit information aggregated to Simulation Units, which are aggregates of 5 arcmin pixels of the same altitude, slope, and soil class within the same 30 arcmin pixel and within the same country. Land use, production, and prices are calibrated to FAOSTAT data for the 2000 historic period. Production system parameters and emission coefficients for specific crop and livestock technologies are based on detailed biophysical process models, including EPIC for crops and RUMINANT for livestock. Livestock and crop productivity changes are reflected by both endogenous and exogenous components. For crop production, GLOBIOM yields can be shifted exogenously to reflect technological or environmental change assumptions and their associated impact on yields. Exogenous yield changes are accompanied by changes in input use intensity and costs. A similar approach has been applied in other U.S.-centric land sector models, including the intertemporal approach outlined in Wade et al. Furthermore, reflecting potential yield growth with input intensification per unit area is consistent with observed intensification of some inputs in the U.S. agricultural system. This includes nitrogen fertilizer intensity, which grew approximately 0.4% per year from 1988 to 2018.
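Shifters of the kind described above are typically applied as compounded annual relative changes against a base-year value. The sketch below illustrates the mechanics only, using the roughly 0.4% per year fertilizer-intensity trend cited in the text as a generic example rate applied to a hypothetical base-year index.

```python
# Applying a time-step "shifter" as a compounded annual relative change,
# in the spirit of the exogenous adjustments described above (illustrative only).
def apply_shifter(base_value, annual_rate, start_year, end_year):
    """Compound a base-year value forward at a constant annual growth rate."""
    years = end_year - start_year
    return base_value * (1.0 + annual_rate) ** years

# Example: ~0.4%/yr growth (the fertilizer-intensity trend cited above)
# applied to a hypothetical base-year index of 100.
print(round(apply_shifter(100.0, 0.004, 2020, 2050), 1))   # ≈ 112.7
```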

Higher prices can induce production system intensification or crop mix shifts across regions to exploit regional comparative advantages. GLOBIOM accounts for several different crop management techniques, including subsistence-level, low input, high input, and high input irrigated systems. The model simulates the spatiotemporal allocation of production patterns and bilateral trade flows for key agricultural and forest commodities. Regional trade patterns can shift depending on changes in market or policy factors, which Baker et al. and Janssens et al. explore in greater detail while also providing more comprehensive documentation of the GLOBIOM approach to international trade dynamics, including cost structures, drivers of trade expansion or contraction, and the establishment of new bilateral trade flows. This approach allows for flexibility in trade adjustments at both the intensive and extensive margins given a policy or productivity change in a given region. GLOBIOM has been applied extensively to a wide range of relevant topics, including climate impacts assessment, mitigation policy analysis, diet transitions, and sustainable development goals. We designed new U.S. and rest-of-the-world diet and yield scenarios, and ran all scenarios at medium resolution for the U.S. and coarse resolution for the ROW. We chose Shared Socioeconomic Pathway 2 macroeconomic and population growth assumptions for all parameters across all scenarios when not specified or overridden by scenario assumptions.

We aligned multiple assumptions in the FABLE Calculator with GLOBIOM inputs and/or outputs to isolate the impacts of specific parameter changes in livestock productivity and ruminant density. Specifically, we used the same set of U.S. healthy diet shifters in both models, but aligned the US FABLE Calculator's crop yields and trade assumptions with GLOBIOM outputs to isolate the effects of increasing the ruminant livestock productivity growth rate and reducing the ruminant grazing density using the Calculator. While we developed high and baseline crop yield inputs for GLOBIOM, actual yields are reported because of the endogenous nature of yields in GLOBIOM. This two-model approach allows us to explore the impact of exogenous changes to the livestock sector that cannot be fully exogenous in GLOBIOM. Subsequent methods sections describe each of these scenarios and sensitivity inputs in greater detail.

We constructed a "Healthy U.S. diet" using the "Healthy U.S.-style Eating Pattern" from the USDA and U.S. Department of Health and Human Services' 2015-2020 Dietary Guidelines for Americans (DGA). We use a 2600 kcal average diet. This is a reduction of about 300 kcal from the current average U.S. diet, given that the current diet is well over the Minimum Dietary Energy Recommendation of 2075 kcal, computed as a weighted average of energy requirements per sex, age, and activity level and the population projections by sex and age class following the FAO methodology. The DGA recommends quantities of aggregate and specific food groups in units of ounces and cup-equivalents on a daily or weekly basis. We chose representative foods in each grouping to convert volume or mass recommendations into kcal/day equivalents and assigned groupings and foods to their closest equivalent US Calculator product grouping.
For DGA food groups that consist of more than one US Calculator product group, e.g., "Meats, poultry, eggs," we used the proportion of each product group in the baseline American diet expressed in kcal/day and applied it to the aggregated kcal from the DGA to get the recommended DGA kcal for each product group. We made one manual modification to this process by increasing the DGA recommendation for beef from a calculated value of 36 kcal/day to 50 kcal/day, since trends in the last decade have shown per capita beef consumption exceeding that of pork. This process led to a total daily intake of 2576 kcal for the healthy U.S. diet. The baseline, average U.S. diet is modeled in the US FABLE Calculator using FAO-reported values on livestock and crop production by commodity in weight for use as food in the U.S., applying the share of each commodity that is wasted, then allocating the weight of each commodity to specific food product groups, converting weight to kcal, and finally dividing by the total population and days in a year to get per capita kcal/day. See the Calculator for more details and commodity-specific assumptions. This healthy U.S. diet expressed in kcal was used directly in the Calculator as a basis for human consumption demand calculations for specific crop and livestock commodities.
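The proportional allocation step described above is a one-line share calculation. The sketch below illustrates it with hypothetical baseline kcal values and a hypothetical aggregated DGA recommendation; it is not the study's actual conversion table.

```python
# Splitting an aggregated DGA food-group recommendation across product groups
# in proportion to their shares of the baseline diet. Baseline kcal values and
# the aggregated group total below are hypothetical placeholders.
def allocate_dga_kcal(dga_group_kcal, baseline_kcal_by_product):
    """Allocate a DGA food-group kcal total by baseline consumption shares."""
    total = sum(baseline_kcal_by_product.values())
    return {product: dga_group_kcal * kcal / total
            for product, kcal in baseline_kcal_by_product.items()}

baseline = {"beef": 90.0, "pork": 60.0, "poultry": 120.0, "eggs": 45.0}  # kcal/day, hypothetical
print(allocate_dga_kcal(315.0, baseline))   # recommended kcal/day per product group
```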

The harvested materials were frozen and ground into fine powder in liquid nitrogen

Previous studies have shown that SL promotes photomorphogenesis by increasing HY5 levels. However, the molecular links from SL signaling to HY5 regulation have remained unclear. Our results show that BZS1 mediates SL regulation of HY5 level and photomorphogenesis. Similar to hy5-215, BZS1-SRDX seedlings are partially insensitive to GR24 treatment under light, which indicates that BZS1 plays a positive role in SL regulation of seedling morphogenesis. Indeed, BZS1 is the only member of subfamily IV of the B-box protein family that is regulated by SL, suggesting that BZS1 plays a unique role in SL regulation of photomorphogenesis. As BZS1 increases HY5 level, SL activation of BZS1 expression would contribute, together with inactivation of COP1, to the SL-induced HY5 accumulation. On the other hand, the BZS1-SRDX plants showed normal branching phenotypes, which suggests that BZS1 is involved only in SL regulation of HY5 activity and seedling photomorphogenesis, not in shoot branching. Our finding of a BZS1 function in the SL response further supports a key role for BZS1 in the integration of light, BR and SL signals to control seedling photomorphogenesis.

To generate 15N-labeled seeds, Arabidopsis plants were grown hydroponically in diluted Hoagland solution containing 10 mM K15NO3. One-eighth-strength Hoagland medium was used at the seedling stage, and 1/4-strength Hoagland medium was used when plants started to bolt. After the siliques were fully developed, 1/8-strength Hoagland medium was used until the seeds were fully mature. For the SILIA-IP-MS assay, the 14N- or 15N-labeled seeds were grown on Hoagland medium containing 10 mM K14NO3 or K15NO3, respectively, for 5 days under constant white light.

The seedlings were harvested and ground to a fine powder in liquid nitrogen. Five grams each of 14N-labeled BZS1-YFP or YFP tissue powder and 15N-labeled wild-type tissue powder were mixed, and total proteins were extracted using extraction buffer. After removing the cell debris by centrifugation, 20 μL GFP-Trap®_MA Beads were added to the supernatant and incubated in the cold room for 2 h with constant rotation. The beads were washed three times with IP wash buffer. The proteins were eluted twice using 50 μL 2× SDS sample loading buffer by incubating at 95°C for 10 min. The isotope labels were switched in repeat experiments. The eluted proteins were separated on a NuPAGE® Novex 4-12% Bis-Tris Gel. After Colloidal Blue staining, the gel was cut into five fractions for trypsin digestion. The in-gel digestion procedure was performed according to Tang et al. Extracted peptides were analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The LC separation was performed using an Eksigent 425 NanoLC system on a C18 trap column and a C18 analytical column. Solvent A was 0.1% formic acid in water, and solvent B was 0.1% formic acid in acetonitrile. The flow rate was 300 nL/min. The MS/MS analysis was conducted with a Thermo Scientific Q Exactive mass spectrometer in positive ion mode and data-dependent acquisition mode to automatically switch between MS and MS/MS acquisition. Identification and quantification were performed with the pFind and pQuant software in open search mode. The software parameters were set as follows: parent mass tolerance, 15 ppm; fragment mass tolerance, 0.6 Da. The FDR of the pFind analysis was 1% at the peptide level. The Arabidopsis TAIR10 database was used for the data search. Three-day-old Arabidopsis seedlings expressing BZS1-YFP or YFP alone were grown under constant light and used for the BZS1-COP1 co-immunoprecipitation assay. For the BZS1, HY5 and STH2 co-immunoprecipitation assays, about one-month-old healthy Nicotiana benthamiana leaves were infiltrated with Agrobacterium tumefaciens GV3101 harboring the corresponding plasmids.
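Metabolic-label quantification of this kind ultimately compares 14N and 15N peptide signal intensities for each candidate interactor. The sketch below shows a median light/heavy ratio calculation on hypothetical intensities; it is not a reimplementation of pQuant, whose output would supply the real peptide-level values.

```python
# Light/heavy (14N/15N) ratio calculation for SILIA-style quantification.
# Peptide intensities below are hypothetical; real values come from pQuant output.
import statistics

def protein_ratio(peptide_pairs):
    """Median 14N/15N ratio across peptides assigned to one protein."""
    ratios = [light / heavy for light, heavy in peptide_pairs if heavy > 0]
    return statistics.median(ratios)

# (14N intensity, 15N intensity) per peptide for one candidate interactor (hypothetical)
peptides = [(2.4e6, 3.1e5), (1.8e6, 2.6e5), (3.0e6, 4.4e5)]
print(f"enrichment ratio ≈ {protein_ratio(peptides):.1f}")
```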

Three-day-old Arabidopsis seedlings expressing BZS1-YFP or YFP alone were grown under constant light and used for the BZS1-COP1 co-immunoprecipitation assay. For the BZS1, HY5, and STH2 co-immunoprecipitation assays, about one-month-old healthy Nicotiana benthamiana leaves were infiltrated with Agrobacterium tumefaciens GV3101 harboring the corresponding plasmids. The plants were then grown under constant light for 48 h, and the infiltrated leaves were collected. Total proteins from 0.3 g tissue powder were extracted with 0.6 mL extraction buffer. The lysate was pre-cleared by centrifugation twice at 20,000 g for 10 min at 4°C and then diluted with an equal volume of extraction buffer without Triton X-100. Twenty microliters of Pierce Protein A Magnetic Beads coupled with 10 μg anti-GFP polyclonal antibody were added to each protein extract and incubated at 4°C for 1 h with rotation. The beads were then collected with a DynaMag™-2 Magnet and washed three times with wash buffer. The bound proteins were eluted with 50 μL 2× SDS loading buffer by incubating at 95°C for 10 min. For western blot analysis, proteins were separated by SDS-PAGE and transferred onto a nitrocellulose membrane with a semi-dry transfer cell. The membrane was blocked with 5% non-fat milk and then incubated with primary and secondary antibodies. The chemiluminescence signal was detected using SuperSignal™ West Dura Extended Duration Substrate and the FluorChem™ Q System. The monoclonal GFP antibody was purchased from Clontech, USA. The Myc antibody and ubiquitin antibody were from Cell Signaling Technology, USA. The HY5 and COP1 antibodies were from Dr. Hongquan Yang's lab. Secondary antibodies, goat anti-mouse-HRP and goat anti-rabbit-HRP, were from Bio-Rad Laboratories.

Arundo donax is a tall grass native to the lower Himalayas that invaded the Mediterranean region prior to its introduction to the Americas. It is suspected to have first been introduced to the United States in the 1700s, and to the Los Angeles area in the 1820s by Spanish settlers. Its primary use was for erosion control in drainage canals.

A number of other uses for Arundo have been identified. It is the source of reeds for single-reed wind instruments such as the clarinet and the saxophone. In Europe and Morocco, Arundo is used for wastewater treatment, including nutrient and heavy-metal removal and reduction of water volume through evapotranspiration. The high rate of evapotranspiration by stands of this species, treated as a benefit in those countries, is one of the characteristics that is detrimental in the California ecosystems invaded by Arundo.

By the 1990s, Arundo had infested tens of thousands of acres of California riparian ecosystems, and these populations affect the functioning of these systems in several ways. Arundo increases the fire hazard in the dry season. The regular fires promoted by the dense Arundo vegetation are changing the nature of the ecosystem from a flood-defined to a fire-defined system. During floods, Arundo plant material can accumulate in large debris dams against flood control structures and bridges across Southern California rivers and interfere with flood water management. Arundo can grow up to 8-9 m tall, and its large leaf surface area can cause evapotranspiration of up to three times the amount of water that would be lost from the water table by the native riparian vegetation. Displacement of the native vegetation results in habitat loss for desired bird species, such as the federally endangered Least Bell’s Vireo and the threatened Willow Flycatcher.

Due to the problems listed above, removal of Arundo from California ecosystems has been a priority for a variety of organizations and agencies involved in the management of the state’s natural resources, such as the California Department of Fish & Game and a number of resource conservation districts. In practice, both mechanical and chemical methods of Arundo control are applied, sometimes in combination, with the choice depending on timing, terrain, vegetation, and funding. The risks, costs, and effects of the different control methods were presented at the most recent Arundo and saltcedar workshop. The timing of an eradication effort can be affected by a number of factors other than the biology of the target species, such as limitations imposed by the bird nesting season and funding availability. Ideally, the timing of any eradication effort, chemical or mechanical, should be determined by the ecophysiology of the target species, in this case Arundo donax, rather than by the calendar year. For chemical eradication, this has been recognized for some time, as stated by Nelroy Jackson of Monsanto at the first Arundo workshop: “Timing of application for optimal control is important. Best results from foliar applications of Rodeo© or Roundup© are obtained when the herbicides are applied in late summer to early fall, when the rate of downward translocation of glyphosate would be greatest.” A similar statement has not yet been made for the timing of mechanical eradication methods, nor has the effect of timing on the effectiveness of mechanical eradication been identified.

Mechanical eradication of Arundo can be attempted in many different ways. The most frequently used method is cutting the aboveground material, the plant’s tall stems. Another method is digging out the underground biomass, the rhizomes. The cutting of stems can occur before and after herbicide applications.

The large amount of standing aboveground biomass, up to 45 kg/m2, impedes removal of the cut material because the costs would be too high. The cost of removing this large stem biomass has led to the use of “chippers” that cut the stems into pieces of approximately 5-10 cm in situ. After these efforts, the chipped fragments are left in place. A small fraction of the fragments left behind after chipping will contain a meristem. The stem pieces of these fragments may have been left intact or split lengthwise; in the latter case, the node at which the meristem is located will have been split as well. On many pieces with a meristem, the meristem itself may still be intact. These stem fragments might sprout and regenerate into new Arundo plants. If stems are not cut into small pieces, or removed after cutting, the tall cut stems can be washed into the watershed during a flood event. This material can accumulate behind bridges and water control structures, with possible consequences as described in the introduction. Meristems on the stems can also sprout and lead to the establishment of new stands of Arundo at the eradication project site or down river.

A. donax stands have a high stem density. The outer stalks of dense stands start to lean outward because the leaves produced during the growing season push the stems in the stand apart. After the initial leaning due to crowding, gravity pulls the tall outside stems almost horizontal. Throughout this report these outside hanging stems will be referred to as “hanging stems”. The horizontal orientation causes hormonal asymmetry in these stems; the main hormones involved are IAA, GA, and ethylene. The unusual IAA and GA distributions cause the side shoots developing on these hanging stems to grow vertically. IAA also plays an important role in plant root development, and may therefore have a stimulatory effect on root emergence from the adventitious shoot meristems on fragments that originated from hanging stems, an effect that would be absent in stem fragments from upright stems. In a preliminary experiment comparing root emergence between stem fragments from hanging and upright stems, 38% of the fragments from hanging stems developed roots, while none of the fragments from upright stems showed root emergence. These results indicated the need for further study into the possibility that new A. donax plants can regenerate from the stem fragments with shoot meristems that might be dispersed during mechanical Arundo removal efforts.

In order to apply herbicides at the time when the rate of downward translocation of photosynthates, and thus herbicide, is greatest, this time period has to be established. Carbohydrate distribution and translocation within indeterminate plants, such as Arundo, result from the balance between the supply of carbon compounds to, and the nitrogen concentration in, the different plant tissues. Carbon and nitrogen are the most important elements in plant tissues. Due to the different diffusion rates of NO3− and NH4+ in soil water versus CO2 in air, and to differences in plant N and C uptake rates, plant growth becomes nitrogen limited before it becomes carbon limited. During plant development, tissue nitrogen concentrations are diluted by plant growth, which is mainly based on the addition of carbohydrates to the tissues.
When plant growth becomes nitrogen limited, the tissue will maintain the minimum nitrogen content needed for the nucleic acids and proteins that maintain metabolic function. At this low tissue nitrogen content, there is not enough nitrogen in an individual cell to provide the nucleic acids and proteins to support the metabolism of two cells; therefore, the cells cannot divide. This means that the tissue cannot grow any further until it receives a new supply of nitrogen.
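The growth-dilution argument above can be made concrete with a minimal numerical sketch. The parameter values below (relative growth rate, nitrogen pool, minimum nitrogen concentration) are purely illustrative and are not measured Arundo values; the point is only that biomass gain with a fixed tissue nitrogen pool drives the nitrogen concentration down to a floor at which growth stalls until new nitrogen is supplied.

```python
# Illustrative sketch of nitrogen dilution by growth (all parameter values are
# hypothetical): biomass increases through carbohydrate addition while the
# tissue nitrogen pool stays fixed, so the nitrogen concentration falls until
# it reaches the minimum needed to sustain cell division, after which growth
# stalls until new nitrogen is supplied.

def simulate_n_dilution(biomass=1.0, n_pool=0.04, rel_growth=0.10,
                        n_min=0.01, days=60):
    """Return daily (day, biomass in g, N concentration in g N per g biomass)."""
    trajectory = []
    for day in range(days):
        n_conc = n_pool / biomass
        trajectory.append((day, biomass, n_conc))
        if n_conc > n_min:       # enough N per unit tissue: growth continues
            biomass *= 1 + rel_growth
        # else: growth halts until a new nitrogen supply arrives
    return trajectory

if __name__ == "__main__":
    for day, mass, conc in simulate_n_dilution()[::10]:
        print(f"day {day:2d}: biomass {mass:6.2f} g, N concentration {conc:.3f}")
```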

SA treatment and SA deficiency conferred by NahG did not significantly impact ABA levels

The results suggest that SA responses in tomato play a less important role in defense against Phytophthora capsici than against Pst. The impact of SA and plant activators on ABA accumulation was measured in tomato roots and shoots. However, ABA accumulation in non-stressed TDL and BTH treatments trended higher than that observed in salt-stressed plants that did not receive a plant activator treatment. Protection by TDL against Pst is likely the result of a triggered SAR response and not of an antagonistic effect on ABA levels. The efficacy of plant activators depends on the specific diseases targeted and on the environmental context, which may present additional stressors that confound defense network signaling in the plant. A challenge for successful deployment of plant activators in the field is to manage the allocation, ecological, and fitness costs that are associated with induced defenses. These costs can be manifested as reduced growth and reproduction, vulnerability to other forms of attack, and potential interference with beneficial associations. The severity of these costs appears to be conditioned in part by the milieu of abiotic stressors operating at any given time. Reactive oxygen species (ROS) contribute to the initiation of SAR, are induced by SA and BTH, and are essential co-substrates for induced defense responses such as lignin synthesis. ROS are also important in modulating abiotic stress networks, for example in ABA signaling and response. The potential compounding effect of ROS generated from multiple stressors presents a dilemma: the plant must reconcile these signals to adapt or else suffer the negative consequences of oxidative damage for failing to do so. Paradoxically, SA and BTH are also reported to protect plants against paraquat toxicity, which involves ROS generation for its herbicidal action. How plants balance the signaling roles and destructive effects of ROS within multiple stress contexts is unresolved and a critically important area of plant biology, with relevance for optimizing induced resistance strategies in crop protection.

Although our experiments were conducted under highly controlled conditions, the results with TDL are encouraging and show that chemically induced resistance to bacterial speck disease occurs in both salt-stressed and non-stressed plants and in plants severely compromised in SA accumulation. Future research with plant activators should consider their use within different abiotic stress contexts to fully assess outcomes in disease and pest protection.

These syntenies of wheat and rye chromosomes permit the formation of compensating translocations between wheat and rye chromosomes. A compensating translocation is genetically equivalent to either of the two parental chromosomes; that is, it carries all relevant genes, though not necessarily in the same order. On the other hand, the homoeology between the wheat group 1 short arms and the rye 1RS arm permitted induction of homoeologous genetic recombination, and thus the development of recombinants transferring much smaller segments of rye 1RS to wheat than the entire arm. Many present-day wheat cultivars developed by breeding for disease resistance carry a spontaneous centric rye-wheat translocation, 1RS.1BL, that has been very popular in wheat breeding programs. This translocation consists of the short arm of rye chromosome 1 (1RS) and the long arm of wheat chromosome 1B (1BL). It must have arisen by misdivision of the centromeres of the two group 1 chromosomes and fusion of the released arms, and it first appeared in two cultivars from the former Soviet Union, Aurora and Kavkaz. Rye chromosome arm 1RS in the translocation contains genes for resistance to insect pests and fungal diseases, but as it spread through wheat breeding programs it became apparent that the translocation was also responsible for a yield boost in the absence of pests and disease. Besides the resistance genes and yield advantage on 1RS, there is a disadvantage of 1RS in wheat due to the presence of the rye seed storage protein secalin, controlled by the Sec-1 locus on 1RS, and the absence of the wheat loci Gli-B1 and Glu-B3 on the 1RS arm. Lukaszewski modified the 1RS.1BL translocation by removing the Sec-1 locus and adding Gli-B1 and Glu-B3 to the 1RS arm. Lukaszewski also developed a set of wheat-rye translocations, derived from ‘Kavkaz’ winter wheat, that added 1RS to wheat arms 1AL, 1BL, and 1DL in the spring bread wheat ‘Pavon 76’, a high-yielding spring wheat from CIMMYT.

Studies showed that the chromosomal position of 1RS in the wheat genome affected agronomic performance as well as bread-making quality. Using the 1RS translocations, Lukaszewski developed a total of 183 wheat-rye short-arm recombinant lines for the group 1 chromosomes in a near-isogenic background of the bread wheat cv. Pavon 76. Of the 183 recombinant chromosomes, 110 were from 1RS-1BS combinations, 26 from 1RS-1AS combinations, and 47 from 1RS-1DS combinations. Mago et al. used some of these lines to link molecular markers with rust resistance genes on 1RS. These recombinant breakpoint populations provide a powerful platform for locating region-specific genes.

Wheat roots fall into two main classes, seminal roots and nodal roots. Seminal roots originate from the scutellar and epiblast nodes of the germinating embryonic hypocotyl, and nodal roots emerge from the coleoptilar nodes at the base of the apical culm. The subsequent tillers produce their own nodal roots, two to four per node, and thus contribute to the correlation of root and shoot development. The seminal roots constitute 1-14% of the entire root system, and the nodal roots constitute the rest. Genetic variation for root characteristics has been reported in wheat and other crop species. Genetic variability for seedling root number was studied among different Triticum species at the diploid, tetraploid, and hexaploid levels and was found to be positively correlated with seed weight. In a hydroponic culture study in winter wheat, Mian et al. found significant genotypic differences in root and shoot fresh weights, the number of roots longer than 40 cm, longest root length, and total root length. Wheat genotypes with larger root systems in hydroponic culture were higher yielding under field conditions than those with smaller root systems. Also, wheat yield stability across variable moisture regimes was associated with greater root biomass production under drought stress. Studies in other cereal crops associated quantitative trait loci (QTL) for root traits with QTL for grain yield under field conditions. Champoux et al. provided the first report of specific chromosomal regions in any cereal likely to contain genes affecting root morphology. They reported that QTL associated with root traits such as root thickness, root dry weight per tiller, root dry weight per tiller below 30 cm, and root-to-shoot ratio shared common chromosomal regions with putative QTL associated with field drought avoidance/tolerance in rice. Price and Tomos also mapped QTL for root growth in rice, using a different population than that used by Champoux et al.
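As a schematic illustration of how such recombinant or mapping populations are used to associate a chromosomal region with a root trait, the sketch below performs a simple single-marker comparison (rye versus wheat chromatin at a locus) with a Welch t-test. It is not the analysis used in the studies cited above, and the genotype codes and trait values are invented for illustration.

```python
# Schematic single-marker test (not the analysis used in the cited studies):
# compare a root trait between recombinant lines carrying rye (1RS) chromatin
# at a marker and lines carrying wheat chromatin. Genotype codes and trait
# values are invented for illustration.
from scipy import stats

def single_marker_test(genotypes, trait_values):
    """Welch t-test of trait means for rye-allele vs wheat-allele lines."""
    rye = [t for g, t in zip(genotypes, trait_values) if g == "R"]
    wheat = [t for g, t in zip(genotypes, trait_values) if g == "W"]
    return stats.ttest_ind(rye, wheat, equal_var=False)

genotypes = ["R", "R", "R", "R", "R", "W", "W", "W", "W", "W"]
root_biomass = [1.8, 2.1, 1.9, 2.3, 2.0, 1.4, 1.6, 1.5, 1.3, 1.7]  # g per plant
print(single_marker_test(genotypes, root_biomass))
```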

In a field study of maize recombinant lines, QTL for root architecture and aboveground biomass production shared the same locations. Tuberosa et al. reported that QTL for root characteristics in maize grown in hydroponic culture overlapped with QTL for grain yield in the field under well-watered and drought regimes in eight different regions. They observed that QTL for the weight of nodal and seminal roots overlapped most frequently and consistently with QTL for grain yield under drought and well-watered field conditions. Also, at four QTL regions, an increase in the weight of the nodal and seminal roots was positively associated with grain yield under both irrigation regimes in the field.

There are a few reports of QTL studies for root traits in durum wheat, but none has been reported in bread wheat. Kubo et al. studied root penetration ability in durum wheat, using discs of paraffin and Vaseline mixtures as a substitute for compact soil. Later, a QTL analysis was done for the number of roots penetrating the polyvinyl disc, the total number of seminal and crown roots, the root penetration index, and root dry weight. The QTL for the number of roots penetrating the polyvinyl disc and for the root penetration index were located on chromosome 6A, and a QTL for root dry weight was located on 1B. Wang et al. demonstrated significant positive heterosis for root traits among wheat F1 hybrids. They showed that 27% of the genes were differentially expressed between hybrids and their parents, and they suggested a possible role of differential gene expression in root heterosis of wheat, and possibly of other cereal crops. In a recent molecular study of heterosis, Yao et al. speculated that up-regulation of TaARF, an open reading frame encoding a putative wheat ARF protein, might contribute to the heterosis observed in wheat root and leaf growth.

Rye, wheat, and barley develop 4-6 seminal roots, which show a high degree of vascular segmentation. Feldman traced files of metaxylem to their levels of origin in the maize root apex and showed their differentiation behind the root apex in a three-dimensional model. Richards and Passioura demonstrated that, in drier environments, genotypes selected for narrow root xylem vessels yielded 3-11% more than unselected controls, depending on their genetic background. This yield increase in the selections with narrow root vessels was correlated with a significantly higher harvest index, as well as higher biomass at maturity and kernel number. Huang et al. showed that the diameter of the metaxylem vessels and stele decreased with increasing temperature, resulting in decreased axial water flow in wheat roots. The decrease in axial water flow is critical for conserving water during vegetative growth and making it available during the reproductive phase of the plant. In a recent study of root anatomy, QTL for metaxylem traits were identified on the distal end of the long arm of chromosome 10 of rice. In another study comparing rye DNA sequences with the rice genome, the distal end of the long arm of chromosome 10 of rice showed synteny with the 1RS chromosome arm. The 1RS.1BL chromosome is now being used in many wheat breeding programs. Rye has the most highly developed root system among the temperate cereals and is more tolerant of abiotic stresses such as drought, heat, and cold than bread wheat.
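Statements that root-trait QTL and grain-yield QTL share the same location amount to checking whether their intervals intersect on a chromosome. The sketch below illustrates that interval-overlap check; the QTL names and map coordinates are hypothetical and are not taken from the studies cited above.

```python
# Illustrative interval-overlap check behind statements that root-trait QTL and
# grain-yield QTL co-locate: two QTL are called overlapping if their intervals
# on the same chromosome intersect. Chromosomes and cM coordinates below are
# hypothetical, not taken from the cited studies.
from typing import NamedTuple

class QTL(NamedTuple):
    trait: str
    chrom: str
    start_cM: float
    end_cM: float

def overlaps(a: QTL, b: QTL) -> bool:
    """True if the two QTL intervals intersect on the same chromosome."""
    return a.chrom == b.chrom and a.start_cM <= b.end_cM and b.start_cM <= a.end_cM

root_qtl = [QTL("root_biomass", "2B", 35.0, 52.0), QTL("root_number", "6A", 10.0, 22.0)]
yield_qtl = [QTL("grain_yield", "2B", 48.0, 60.0), QTL("grain_yield", "5D", 5.0, 18.0)]

shared = [(r.trait, y.trait, r.chrom) for r in root_qtl for y in yield_qtl if overlaps(r, y)]
print(shared)  # [('root_biomass', 'grain_yield', '2B')]
```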

Introgression of rye chromatin into wheat may enlarge the wheat root system. Manske and Vlek reported thinner roots and higher relative root density for 1RS.1BL translocations compared with their non-translocated bread wheat checks in an acid soil, but not under better soil conditions. Repeated studies with the 1RS translocation lines of Pavon 76 have demonstrated a consistent and reproducible association between root biomass and the presence and position of the rye 1RS arm. The increased grain yield of 1RS translocations under field conditions observed and reported earlier may be due to the consistent tendency of 1RS to produce more root biomass and also to the higher transpiration rate measured. Those authors showed a significant increase in root biomass in wheat lines with 1RS translocations, and a positive correlation between root biomass and grain yield. All 1RS translocations, with the 1A, 1B, and 1D chromosomes, showed increased root biomass and branching compared with Pavon 76, and there was differential expression for root biomass among these translocation lines, with the ranking 1RS.1AL > 1RS.1DL > 1RS.1BL > Pavon 76. In Colorado, the 1RS.1AL translocation with 1RS from Amigo showed a 23% yield increase under field conditions over its winter wheat check, Karl 92. Many present-day bread wheat cultivars carry the centric rye-wheat translocation 1RS.1BL in place of chromosome 1B. Originally the translocation was thought to have been fixed because the 1RS arm of rye carries genes for resistance to various leaf and stem fungal diseases and insects; however, the translocation increased grain yield even in the absence of pathogens. It has been shown recently that this yield increase may be a direct consequence of a substantially increased root biomass. Studies by Ehdaie et al. (2003) showed a significant increase in root biomass in wheat lines with 1RS translocations and a positive correlation between root biomass and grain yield. In sand cultures, all three 1RS translocations, on 1AL, 1BL, and 1DL in the ‘Pavon 76’ genetic background, showed clear position effects, with more root biomass and root branching than Pavon 76.