We conclude with ideas for future FEW nexus and governance research

The amount of energy consumed to pump water from Lake Havasu to Los Angeles through the Colorado River Aqueduct varies from year to year depending on how consistently water must be pumped. For example, while the lowest amount of water diverted to the Metropolitan Water District of Southern California (MWD) was 0.55 MAF in 2005, the lowest amount of energy consumed was 1.3 million MWH in 2007. When water does not need to be pumped from Lake Havasu to Southern California consistently, the MWD is able to produce energy and sell it. Thus, the irregularity in the amount of energy used to transport water in the Colorado River Aqueduct correlates with the amount of energy bought or sold per fiscal year. The amount of water pumped to Southern California increased, on average, from 2001 to 2016. In some years, such as 2011 and 2012, MWD intentionally left water in Lake Mead so that it did not fall below “shortage conditions.” The Central Arizona Project pumps 1.6 MAF of water up 2,800 feet of elevation over 336 miles from Lake Havasu to Phoenix and Tucson. Doing so requires 2.8 million MWH of energy, supplied by a coal plant, the Navajo Generating Station in Page, AZ. While this plant is not in the study area, it is considered a regional connection and is important given the large supply of energy it provides to the study area as the eighth-largest plant in the United States.

In the LCRB, the dominant types of power generation are hydroelectricity and natural gas, with yearly average energy production of 5.5 million MWH and 2.9 million MWH, respectively.
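As a rough sanity check of these figures (a back-of-envelope sketch, not data from the study), the quoted Central Arizona Project volumes imply an average energy intensity of about 1.75 MWH per acre-foot delivered. This is below the theoretical pumping energy for a full 2,800-ft lift, consistent with much of the water being delivered at lower elevations along the route rather than at the terminal lift:

```python
# Back-of-envelope check of the CAP figures quoted above (illustrative only).
ACRE_FOOT_KG = 1.233e6          # approximate mass of one acre-foot of water, kg
WATER_MAF = 1.6                 # water pumped per year, million acre-feet
ENERGY_MWH = 2.8e6              # energy consumed per year, MWH
LIFT_M = 2800 * 0.3048          # full terminal lift, converted from feet
G = 9.81                        # gravitational acceleration, m/s^2

# Delivered energy intensity: MWH consumed per acre-foot moved
intensity = ENERGY_MWH / (WATER_MAF * 1e6)
print(f"{intensity:.2f} MWH per acre-foot")   # 1.75

# Theoretical lift energy if every acre-foot rose the full 2,800 ft
joules_per_af = ACRE_FOOT_KG * G * LIFT_M
mwh_per_af = joules_per_af / 3.6e9            # 1 MWH = 3.6e9 J
print(f"{mwh_per_af:.2f} MWH per acre-foot for the full lift")
```

The gap between the two numbers suggests the 2,800-ft figure is the maximum lift, not the average lift over all deliveries.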

Hydroelectric production gradually decreased over the study time frame, from roughly 6.6 million MWH in 2001 to 5.5 million MWH in 2016, while natural gas increased from 560,000 MWH in 2001 to 3 million MWH in 2016. Despite their individual trends over time, hydroelectricity and natural gas both follow a seasonal pattern, with the highest net generation occurring during the summer months and natural gas peaking directly after hydroelectricity. The presence of solar in the region has grown since 2010, with the highest annual production in 2016 at about 860,000 MWH and an average annual production of just 180,000 MWH. There is one wind power plant, but the amount of electricity it produces is negligible compared to the other sources. The price of electricity gradually increased from 8.7 cents/KWH in 2001 to 11.3 cents/KWH in 2016, while still following the same seasonal trend.

The top six crops that occupied at least 1% of production area in at least one year of the study time frame, 2008 to 2015, were alfalfa, cotton, durum wheat, double-crop lettuce/durum wheat, lettuce, and citrus. In each year of food production analyzed, alfalfa had the highest percentage, followed by fallowed cropland, with average annual acreages of about 142,000 and 70,000, respectively, and a total average active cropland of about 271,000 acres. There was a slight overall upward trend in total production area, from 294,000 acres of active cropland in 2008 to 303,000 acres in 2015. This trend holds even considering the drastic decrease in area under production in 2009, down to 142,000 acres from 294,000 acres in 2008, and another small drop of just 6,000 acres in 2015. From 2012 to 2013, a drought year in the UCRB, area under production increased from 292,000 acres to 295,000 acres.

The FEW nexus changes with environmental and economic stresses, depending on the flexibility of the governing and market systems. The scenarios were meant to be examples of how governance, market supply and demand, and climate vulnerabilities may impact the FEW nexus and create resource tipping points. In the study area, water governance particularly influences drought management and crop production strategies, giving the system less room to respond to climatic and economic changes.

The first scenario depicts how the costs and supply of water, energy, and food production might change in an extreme drought. With a decrease in water availability, water and energy prices would increase, but agricultural production would stay roughly the same due to water governance in the region. This would have occurred, for example, if Lake Mead had decreased to 1030.75 ft. At or below an elevation of 1075 ft, Lake Mead is in a critical drought state. At 1050 ft, Lake Mead is below the level at which the Hoover Dam can produce hydroelectricity. If it were to stay at this elevation for an entire year, this would amount to a decrease of 36% of the average annual electricity generation from 2001 to 2016. Reduced water availability has previously been shown by Bain and Acker to result in higher operating costs and higher prices for hydroelectric energy in the Colorado River Basin. Additionally, for those who pay for water from a utility, a drought of this magnitude could increase water prices. According to a report from the Public Policy Institute of California, the 2012-16 California drought resulted in an increase in water prices through drought surcharges due to increased supply and treatment costs for suppliers. However, those who rely on water rights for their water, such as on certain Indian Reservations, would continue to receive the same amount of water with no price increase.
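The elevation thresholds above can be summarized as a simple classification. The helper below is a hypothetical illustration (the function and its labels are ours, not from the study), encoding critical drought at or below 1075 ft and loss of Hoover Dam generation at or below 1050 ft:

```python
# Hypothetical helper (illustrative only) encoding the Lake Mead elevation
# thresholds quoted above.
def mead_status(elevation_ft):
    if elevation_ft <= 1050:
        return "no hydroelectric generation"   # below Hoover Dam's operating level
    if elevation_ft <= 1075:
        return "critical drought state"
    return "normal operations"

print(mead_status(1090))     # normal operations
print(mead_status(1060))     # critical drought state
print(mead_status(1030.75))  # no hydroelectric generation (the scenario level)
```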
The Bureau of Reclamation could make a deal with Reservations to hold onto some of their water in exchange for some form of compensation. In that case, irrigation would decrease, which is what we assumed for the drought scenario.

However, a decrease in water available for irrigation does not necessarily mean production will decrease, as seen in the drought year from 2012 to 2013, when production of agricultural products increased despite a decrease in the total amount of water used for irrigation. Still, higher energy costs to irrigate cropland, coupled with higher water costs for farmers outside of Indian Reservations, could potentially decrease production in areas that rely mostly on hydropower.

Where the drought scenario depicts climate pressures on water availability, the global demand for alfalfa is a representation of demand for water. In this scenario, we look at the implications of the governance structure of the LCRB in supplying water under a static snapshot of global agricultural commodity markets. Specifically, we investigate the impact of a 3% increase in demand for alfalfa, the most widely cultivated crop in the study area. This scenario seems likely given the 160% increase in fodder exports from the United States from 2000 to 2010 and the 2.5% overall increase in fodder exports specifically from California, Nevada, and Arizona over the same 10-year period. Under this scenario, there would be either an increase in total cropland from 271,500 acres to 279,600 acres or a decrease in cropland devoted to food crops. In either case, more water would be needed for crop production, assuming the current mode of production remains constant, with fodder production consuming the largest amount of water compared to other crops. Demand for energy would increase for producing and transporting alfalfa and water, potentially meaning a higher demand for water to produce that energy.
This increased demand for energy includes the energy needed to move water for irrigation, the energy needed to export alfalfa, and increased demand for agricultural chemicals and machinery fuels.

The goal of this study was to understand how the governance structure of the Colorado River constrains the utility of the nexus approach in dealing with future stresses. A consideration of governance structures should be central to the development of food-energy-water nexus thinking, to better understand and identify how stressing food, energy, and/or water systems creates resource vulnerabilities and/or scarcities in all three sectors. To understand how food, energy, and water affect one another’s availability, individual sector units can be analyzed together to give a quantified picture of use.

In addition, price trends can be analyzed for correlations with other sectors’ price trends or with climatic changes. In the study area, we found that water is the limiting factor due to governance constraints, especially given predictions of increased drought in the future. The following sections describe the ways that governance constrains the possibility of implementing FEW management strategies in the LCRB, and why, therefore, it is a critical component of FEW nexus research. We also discuss how power, geopolitics, and institutional factors impact nexus implementation.

First, policies limit the ability of resource systems to respond to market and climatic changes. In the LCRB in particular, we found that the Law of the River limits the prospect of responding to climatic changes such as increasing drought frequency and severity. With the IPCC predicting that the southwestern United States will become hotter and drier, a lack of adequate response due to rigid policy structures will impact all three sectors. The small drought response in agricultural water use in the study area was likely because much of the production occurs on Indian reservations, which hold a high proportion of water rights in comparison to the rest of the Lower Basin states. While the Metropolitan Water District’s leaving water in Lake Mead during drought years is a good example of a response to drought, it is a reactive response, similar to the DOI Interim Guidelines. Drought coupled with increased demand for water through more alfalfa production will strain water resources even further.

Second, rigid policies in the most ‘geopolitical’ sector impact the ability of that sector to respond to the needs of the other sectors. Management of the Colorado River is a complex geopolitical issue with many stakeholders, including the governments of U.S. states and nation states, separated by rigid political boundaries.
This directly impacts the ability of water managers to adapt water allotments to current or predicted conditions. This is a very real concern, as the IPCC has predicted that the Southwestern U.S. will experience higher temperatures and decreased precipitation. Since water is the lifeblood of this region, with over three-quarters of the economy relying on its presence, drought will severely affect the region’s livelihood. In addition to drought, population growth could put more stress on the water system. Depending on the system analyzed, there will likely be a ‘limiting factor’. While some have argued against a focus on water, sector weights are context dependent. In semi-arid regions with access to a large amount of water, water is frequently a geopolitical issue, which often presents itself as transboundary conflict. Although we did not include Mexico in the analysis due to data constraints, it is well known that in most years since 1960 the Colorado River has run dry before reaching the Sea of Cortéz. The impacts of this have presented themselves as a lack of access to a water resource in Northern Mexico, resulting in social impacts such as a decline in the regional shrimp and fishing industries. The river’s riparian ecosystems were briefly restored through the implementation of Minute 319, a 2012 agreement between the U.S. and Mexico that authorized a 2014 pulse flow toward the Gulf of California. This move “marked a sustainable reconciliation with the land and its people” by connecting communities back to a highly valued water source. Restoration projects such as this one link the local to the international through governance, power, and the larger ecological and political systems at work. Entrenched in these cross-boundary water sources and restoration projects are politics of power at the international level. Through sub-national division of power at the state level, the U.S. monopolized control over the Colorado River, partly out of fear that “Mexico might lay claim to large quantities of the river’s flow,” a notion that persists in the nearly century-old Colorado River Compact. Transboundary politics therefore directly complicates the economic and hydraulic foundations of the nexus, specifically through divisions of power that persist through long-term sub-national agreements. While the Colorado River is an extreme case among the world’s transboundary rivers, it should be considered at the regional-nexus level.

Third, and similar to the second, rigid policies in one sector impact the production and availability of resources in the other sectors.

The impact of Phytophthora diseases on citrus production can be devastating

Because CECs occur in tandem with a host of environmental matrices that contribute to the matrix effect, their quantification is challenging with both GC-MS and LC-MS. The matrix effect arises from co-eluting, interfering compounds in the sample extract that produce similar ions in the MS or MS-MS segment. It may also arise from interactions between the target analytes and co-extracted matrix components during sample preparation and in the ionization chamber. The former is more common in GC-MS and GC-MS-MS analysis, and might also be encountered in LC-MS and LC-MS-MS analysis. GC-MS and GC-MS-MS remain commonly used techniques because of their wide availability in environmental laboratories. GC-MS and GC-MS-MS also suffer less from the matrix effect, which is more commonly observed in electrospray ionization-based LC-MS or LC-MS-MS. Environmental concentrations of CECs are in the ng L-1 to µg L-1 range, so extraction is a necessary step to concentrate the analytes prior to instrumental analysis. Solid phase extraction (SPE) is the most common technique applied for sample preparation and purification in the analysis of CECs. SPE separation depends on the kind of solid stationary phase through which the sample is passed and on the type of target compound. The target compounds adhere to the stationary phase, while impurities in the sample are washed away, yielding a clean extract. This procedure uses a vacuum manifold and has the advantage that 12 or 24 SPE cartridges can be prepared simultaneously, minimizing the time and effort of sample preparation.

The target compounds are finally eluted from the stationary phase using an appropriate solvent. The effectiveness of SPE cartridges has been widely researched; the best for pre-concentration of aqueous samples are ENV+, Oasis HLB, Oasis MAX, Oasis MCX, Strata-X, LiChrolut C18, and LiChrolut EN. Since most pharmaceuticals and personal care products are polar, non-volatile, thermally labile compounds unsuitable for GC separation, derivatization is necessary after extraction and elution from the aqueous sample and prior to GC-MS analysis of polar compounds. Various derivatization agents have been applied to various CECs. However, derivatization can introduce inaccuracy into the method through losses of analytes that cannot be fully derivatized. Derivatization also often uses highly toxic and carcinogenic diazomethane or, less frequently, acid anhydrides, benzyl halides, and alkyl chloroformates. Derivatization can be incomplete, preventing the analysis of some compounds entirely. Some compounds are also thermolabile and decompose during GC analysis; after derivatization, however, compounds improve in both volatility and thermal stability. The final step of sample preparation before elution is clean-up of the extract. This step is usually added to enhance the accuracy and reproducibility of the results by eliminating matrix effects and any impurities in the final extract that can interfere with the analysis. The clean-up step is usually performed with SPE cartridges, as described above. SPE thus serves a double goal, sample concentration and clean-up, and takes place before derivatization. However, while sample clean-up may help remove interfering compounds, it is time consuming and runs the risk of losing analytes of interest, especially those that were polar to begin with.

Better chromatographic separation allows the analytes to elute in an appropriate time interval, avoiding co-elution with matrix components. Nevertheless, the matrix effect can hardly ever be eliminated. Initial method validation helps document and qualify the performance of the GC-MS for the test compounds, as well as the pretreatment steps used to concentrate the sample and prepare it for injection into the GC-MS. Initial method validation provides method performance parameters such as recoveries, precision, and matrix effect, to deliver consistent estimation of analyte concentrations. It is becoming crucial to properly assess the risk posed by the presence of CECs in the environment. This research aimed to develop a multi-residue analytical method for GC-MS that allows simultaneous monitoring of CECs. This provides the ease of evaluating physico-chemically diverse CECs simultaneously without having to undergo different processes for certain types of trace organic compounds. Since GC-MS is widely available around the world, the multi-residue analytical method allows many researchers to gain a larger understanding of the derivatization and extraction processes possible for a multitude of contaminants. Thus, the occurrence, distribution, and fate of CECs can be better monitored and more efficiently regulated. In this study, we used N-tert-butyldimethylsilyl-N-methyltrifluoroacetamide (MTBSTFA) to derivatize 50 compounds for GC-MS. This analytical method was developed from the approach of Yu and Wu, in which 14 compounds were derivatized and analyzed by GC-MS. In addition to those 14 compounds, this study successfully included 1 additional anti-inflammatory drug, 2 cardiovascular drugs/beta blockers, 1 estrogen, 1 personal care product, 7 pesticides, and 4 plasticizers using MTBSTFA and GC-MS.
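The validation parameters mentioned above (recovery, precision, matrix effect) are typically computed from spiked replicates and matrix-matched calibration. A minimal sketch with standard definitions follows; all numeric values are invented example data, not measurements from this study:

```python
import statistics

def recovery_pct(measured_spiked, measured_blank, spiked_amount):
    """Absolute recovery: fraction of the spiked amount actually measured."""
    return 100.0 * (measured_spiked - measured_blank) / spiked_amount

def matrix_effect_pct(slope_matrix, slope_solvent):
    """Signal suppression (<0) or enhancement (>0) caused by the matrix,
    from calibration slopes in matrix-matched vs. pure-solvent standards."""
    return 100.0 * (slope_matrix / slope_solvent - 1.0)

def precision_rsd(replicates):
    """Precision expressed as relative standard deviation (%)."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical triplicate at a 200 ng/L spike level:
reps = [176.0, 181.0, 173.0]  # measured concentrations, ng/L
print(recovery_pct(statistics.mean(reps), 2.0, 200.0))  # ~87.3% recovery
print(matrix_effect_pct(0.82, 1.00))                    # ~-18% (suppression)
print(precision_rsd(reps))                              # ~2.3% RSD
```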

The work presented here consists of the meticulous and successful development of a method for 29 emerging compounds in tertiary-treated greenhouse runoff water. High-purity solvents, Optima LC/MS-grade MeOH, Optima-grade ethyl acetate, HPLC-grade acetone, and 37% hydrochloric acid, were supplied by Fisher Scientific. Ethylenediaminetetraacetic acid disodium salt dihydrate (99.7%) was from J.T. Baker Chemical Co. N-tert-butyldimethylsilyl-N-methyltrifluoroacetamide (MTBSTFA), purity >97%, was obtained from Sigma-Aldrich. Pesticide-grade glass wool was purchased from Supelco. Deionized water was produced in house. Nitrogen (99.97%) and helium (99.999%) gases were purchased from Airgas. Both individual stock standard and isotopically labeled internal standard solutions were prepared on a weight basis in methanol. After preparation, standards were stored at -20 °C in darkness. A mixture of all contaminants was then prepared by appropriate dilution of the individual stock solutions in MeOH in volumetric flasks. For calculations of labeled diluted standards and internal standards, see the Supplementary Data. A 2-L aqueous solution at 400 µg L-1, termed the “spiking solution”, was freshly prepared in a volumetric flask every week during the project. A separate mixture of isotopically labeled internal standards and further dilutions, used for internal standard calibration, was similarly prepared in MeOH.

After reviewing the available scientific literature and considering the analytes’ physical-chemical features and the type of target samples, the following extraction protocol was used as a starting point. 1) One hundred mL of deionized water was fortified at 200 ng L-1 with the target CECs in a volumetric flask. 2) We chose a Waters Oasis HLB cartridge to pretreat polar and nonpolar compounds under the same extraction conditions.
The resulting solution was then concentrated by SPE on a Waters Oasis HLB cartridge (60 mg, 3 mL), which was previously activated with 4 mL of methanol and then conditioned with 4 mL of deionized water. 3) Once the extraction was finished, the cartridge was dried under vacuum for 30 min to remove excess water; unless eluted immediately, samples were stored at -20 °C wrapped in aluminum foil. 4) The cartridge was eluted with 2×2 mL of MeOH. 5) The extract was then evaporated to dryness under a gentle nitrogen stream at room temperature and reconstituted in a 2-mL GC glass vial in a mixture of 900 μL of ethyl acetate and 100 μL of the derivatization agent MTBSTFA. 6) Finally, the resulting solution was held for 60 min at 70 °C to drive the derivatization reaction, after which the extract was vortexed, cooled, and analyzed by GC-MS.

Several parameters, such as concentration rate, sample size, and type of SPE cartridge, were optimized. Sample pH adjustment and the addition of chelating agents were also assessed. Each feature was tested in triplicate in the order described below. Once a parameter was optimized, it was incorporated into the method protocol for the optimization of subsequent parameters. Sensitivity and accuracy were the criteria used to select each parameter. To increase method sensitivity, acquisition windows were established using the following criteria: 1) no more than 15 ions were monitored in each window; 2) the isotopically labeled internal standards were included in the same window as their corresponding analytes; and 3) each window had to be long enough to remain reliable if a change in retention time took place. Taking all this into consideration, two separate instrumental methods, Method 1 and Method 2, had to be created, both sharing the same chromatographic conditions. However, Method 1 and Method 2 differed in their acquisition windows as well as in the SIM ions monitored in each. Appendix A shows the target compounds and the SIM ions monitored for each of them, recorded by Method 1 and Method 2 and distributed in acquisition windows. One primary ion and two secondary ions, used for quantification and confirmation, respectively, were monitored in all cases except 17β-estradiol, which presented poor fragmentation, so only one secondary ion was registered. Acquisition stopped at minute 29 for Method 1 and minute 25 for Method 2, to prevent damage to and contamination of the MS detector. An eleven-minute solvent delay was set in both methods to prevent damage to the filament. Each sample extract therefore underwent two consecutive injections, one with Method 1 and one with Method 2.
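The windowing criteria above can be expressed as a simple greedy partition over retention times. The sketch below is illustrative only; the compound names, retention times, and ion counts are invented, and the ion count per analyte is assumed to already include its labeled internal standard so that both land in the same window:

```python
# Greedy grouping of analytes (sorted by retention time) into SIM
# acquisition windows of at most 15 monitored ions.
# Analytes: (name, retention_time_min, n_ions_incl_internal_standard)
analytes = [
    ("cmpd_A", 11.2, 4), ("cmpd_B", 12.0, 4), ("cmpd_C", 12.4, 4),
    ("cmpd_D", 13.1, 4), ("cmpd_E", 15.0, 6), ("cmpd_F", 15.8, 6),
    ("cmpd_G", 18.3, 4), ("cmpd_H", 18.9, 6),
]

MAX_IONS = 15
windows, current, ions = [], [], 0
for name, rt, n in sorted(analytes, key=lambda a: a[1]):
    if ions + n > MAX_IONS:           # window full: start a new one
        windows.append(current)
        current, ions = [], 0
    current.append((name, rt))
    ions += n
if current:
    windows.append(current)

for i, w in enumerate(windows, 1):
    print(f"window {i}: {[n for n, _ in w]}")
```

A real method would additionally merge windows that end up too narrow (criterion 3), which this sketch omits.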
Phytophthora species are oomycete organisms within the kingdom Stramenopila that can cause diseases on a wide variety of agricultural crops and non-cultivated plants. Worldwide, several species, including P. citrophthora, P. syringae, P. nicotianae, P. citricola, P. palmivora, and P. hibernalis, are pathogens of citrus. Within California, many of these, including P. citrophthora, P. syringae, P. parasitica, and P. hibernalis, have been recovered from citrus. These species are active at different times of the year, with P. syringae and P. hibernalis present in the cooler seasons, P. parasitica during the summer, and P. citrophthora mostly causing disease during spring, fall, and winter. They are all capable of causing brown rot of citrus fruit in the orchard and after harvest in storage or during transit. P. citrophthora and P. parasitica will also cause root rot and gummosis in the orchard, which, once introduced, can make the establishment of new plantings difficult and lead to slow tree decline and reduced productivity.

Epidemics have occurred as far back as 1863-1870 in Italy, in which large numbers of lemon trees were destroyed by gummosis caused by Phytophthora citrophthora and P. parasitica, along with additional outbreaks in nearby Greece, where most of the lemon trees on the island of Paros were destroyed between 1869 and 1880. More recently, it was estimated that within California, Phytophthora root rot outbreaks can lead to yield losses of up to 46% if left unmanaged. The California citrus industry is economically important for both the state and the country. Fruit produced in California is primarily earmarked for fresh consumption, with the state producing roughly 59% of the total citrus grown within the United States, valued at around 2.4 billion dollars. Recently, P. syringae and P. hibernalis were designated quarantine pathogens in China, an important export country for the California citrus industry, following the detection of brown rot-infected fruit shipped from California to Chinese ports. Both species were previously considered of minor importance, but in recent years P. syringae has commonly been found causing brown rot of fruit during the winter harvest season in the major citrus production areas of California's Central Valley. This subsequently led to the restriction of the California citrus trade and extensive monetary losses. As of 2016, California citrus exports to China, one of the top 15 export countries for California citrus, were valued at $133 million, underlining the importance of preventing future trade restrictions in this important market due to phytosanitary issues caused by Phytophthora spp. The root and soil phases in the disease cycle of Phytophthora spp. are directly connected with the brown rot phase. Under favorable conditions, mainly wetness, high inoculum levels in the soil will cause root rot, which can be especially detrimental in nurseries and in the establishment of new orchards; this is when disease management is most critical. It has been shown that trees grown in soil infested with P. parasitica or P. citrophthora prior to repotting into larger containers were later less afflicted with dieback or stunted growth when treated with soil applications of mefenoxam and fosetyl-Al than untreated trees.

A transaction only succeeds if none of its reads are stale when the commit record is encountered

To optimize this process in cases where the view is small, the Corfu object can create checkpoints and provide them to Corfu via a checkpoint call. Internally, Corfu stores these checkpoints on a separate shared log and accesses them when required on query_helper calls. Additionally, the object can forgo the ability to roll back before a checkpoint with a forget call, which allows Corfu to trim the log and reclaim storage capacity. The Corfu design enables other useful properties. Strongly consistent read throughput can be scaled simply by instantiating more views of the object on new clients. More reads translate into more check and read operations on the shared log, and scale linearly until the log is saturated. Additionally, objects with different in-memory data structures can share the same data on the log. For example, a namespace can be represented by two different trees, one ordered on the filename and the other on the directory hierarchy, allowing applications to perform both types of queries efficiently. We now substantiate our earlier claim that storing multiple objects on a single shared log enables strongly consistent operations across them without requiring complex distributed protocols. The Corfu runtime on each client can multiplex the log across objects by storing and checking a unique object ID on each entry; such a scheme has the drawback that every client has to play every entry in the shared log. For now, we assume that each client hosts views for all objects in the system.
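The interplay of checkpoint and forget can be illustrated with a toy logged object. This is an invented in-memory sketch, not the Corfu API: a checkpoint captures the state as of a log offset, and forget then trims everything before that offset without changing query results:

```python
# Toy sketch (invented API, not Corfu's) of checkpoint + forget:
# a counter whose state is replayed from a log, where a checkpoint
# lets older entries be trimmed without losing current state.
class CheckpointedCounter:
    def __init__(self):
        self.log = {}          # offset -> increment (untrimmed entries)
        self.next_off = 0
        self.ckpt = (0, 0)     # (offset covered up to, counter value)

    def append(self, inc):
        self.log[self.next_off] = inc
        self.next_off += 1

    def checkpoint(self):
        """Fold all entries so far into a snapshot."""
        off, val = self.ckpt
        for o in range(off, self.next_off):
            val += self.log[o]
        self.ckpt = (self.next_off, val)

    def forget(self):
        """Give up rollback before the checkpoint; trim the log."""
        off, _ = self.ckpt
        for o in [o for o in self.log if o < off]:
            del self.log[o]

    def query(self):
        """Replay: snapshot value plus entries after the checkpoint."""
        off, val = self.ckpt
        return val + sum(self.log[o] for o in range(off, self.next_off))

c = CheckpointedCounter()
for i in (1, 2, 3):
    c.append(i)
c.checkpoint(); c.forget()     # state 6 is now captured; entries 0-2 trimmed
c.append(10)
print(c.query(), len(c.log))   # 16 1
```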

Later in the paper, we describe layered partitioning, which enables strongly consistent operations across objects without requiring each object to be hosted by each client, and without requiring each client to consume the entire shared log. Many strongly consistent operations that are difficult to achieve in conventional distributed systems are trivial over a shared log. Applications can perform coordinated rollbacks or take consistent snapshots across many objects simply by creating views of each object synced up to the same offset in the shared log. This can be a key capability if a system has to be restored to an earlier state after a cascading corruption event. Another trivially achieved capability is remote mirroring: application state can be asynchronously mirrored to remote data centers by having a process at the remote site play the log and copy its contents. Since log order is maintained, the mirror is guaranteed to represent a consistent, system-wide snapshot of the primary at some point in the past. In Corfu, all these operations are implemented via simple appends and reads on the shared log. Corfu goes one step further and leverages the shared log to provide transactions within and across objects. It implements optimistic concurrency control by appending speculative transaction commit records to the shared log. Commit records ensure atomicity, since they determine a point in the persistent total ordering at which the changes made by a transaction become visible at all clients. To provide isolation, each commit record contains a read set: a list of objects read by the transaction along with their versions, where a version is simply the last offset in the shared log that modified the object. As a result, Corfu provides serializability with external consistency for transactions across objects. Corfu uses streams in an obvious way: each Corfu object is assigned its own dedicated stream.
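The optimistic concurrency scheme just described can be sketched in a few lines. This is toy code under our own invented representation, not the Corfu runtime: a commit record carries the (object, version) pairs the transaction read, and it commits only if those versions are still the objects' latest versions when the record is encountered in the log:

```python
# Toy sketch of commit records with read-set version checks.
shared_log = []                      # global total order of entries

def append(entry):
    shared_log.append(entry)
    return len(shared_log) - 1       # offset in the shared log

def latest_version(obj_id, upto):
    """Last offset before `upto` whose committed record wrote obj_id."""
    return max((i for i, e in enumerate(shared_log[:upto])
                if e["committed"] and obj_id in e["writes"]), default=-1)

def play(offset):
    """Decide commit/abort for the record at `offset` by checking that
    every read version is still the object's latest version."""
    rec = shared_log[offset]
    rec["committed"] = all(latest_version(obj, offset) == ver
                           for obj, ver in rec["reads"].items())
    return rec["committed"]

# T1 read A at version -1 (never written) and writes A: commits.
t1 = append({"reads": {"A": -1}, "writes": {"A"}, "committed": False})
print(play(t1))   # True
# T2 also read A at version -1, but T1's write made that read stale: aborts.
t2 = append({"reads": {"A": -1}, "writes": {"A"}, "committed": False})
print(play(t2))   # False
```

Because every client plays the log in the same order, every client reaches the same commit/abort decision for each record, which is what makes the scheme deterministic without coordination.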

If transactions never cross object boundaries, no further changes are required to the Corfu runtime. When transactions cross object boundaries, Corfu changes the behavior of its EndTX call to multiappend the commit record to all the streams involved in the write set. This scheme ensures two important properties required for atomicity and isolation. First, a transaction that affects multiple objects occupies a single position in the global ordering; in other words, there is only one commit record per transaction in the raw shared log. Second, a client hosting an object sees every transaction that impacts the object, even if it hosts no other objects. When a commit record is appended to multiple streams, each Corfu runtime can encounter it multiple times, once in each stream it plays. The first time it encounters the record at a position X, it plays all the streams involved until position X, ensuring that it has a consistent snapshot of all the objects touched by the transaction as of X. It then checks for read conflicts and determines the commit/abort decision. When each client does not host a view for every object in the system, writes or reads can involve objects that are not locally hosted at either the client that generates the commit record or the client that encounters it. We examine each of these cases: A. Remote writes at the generating client: The generating client – i.e., the client that executed the transaction and created the commit record – may want to write to a remote object. This case is easy to handle; as we describe later, a client does not need to play a stream to append to it, and hence the generating client can simply append the commit record to the stream of the remote object. B. Remote writes at the consuming client: A client may encounter commit records generated by other clients that involve writes to objects it does not host; in this case, it simply updates its local objects while ignoring updates to the remote objects. 
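The multiappend behavior described above can be sketched as follows. This is an invented toy model, not Corfu's implementation: one commit record occupies a single global position but is linked into every stream in its write set, and a client playing several streams applies it only on first encounter, keyed by that global position:

```python
# Toy sketch of multiappend: one record, one global position, many streams.
global_log = []                     # position = index in this list
streams = {}                        # stream_id -> list of global positions

def multiappend(entry, stream_ids):
    pos = len(global_log)           # single position in the global order
    global_log.append(entry)
    for s in stream_ids:
        streams.setdefault(s, []).append(pos)
    return pos

processed = set()                   # global positions already applied

def play_stream(stream_id):
    applied = []
    for pos in streams.get(stream_id, []):
        if pos not in processed:    # first encounter: apply exactly once
            processed.add(pos)
            applied.append(pos)
    return applied

multiappend({"tx": "T1"}, ["A", "B"])   # cross-object commit record
multiappend({"tx": "T2"}, ["B"])
print(play_stream("A"))   # [0]  -> T1 applied while playing stream A
print(play_stream("B"))   # [1]  -> T1 already seen; only T2 applied
```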
Remote-write transactions are an important capability.

Applications that partition their state across multiple objects can now consistently move items from one partition to another. In our evaluation, we implement Apache ZooKeeper as a Corfu object, create a partitioned namespace by running multiple instances of it, and move keys from one namespace to the other using remote-write transactions. Another example is a producer-consumer queue; with remote-write transactions, the producer can add new items to the queue without having to locally host it and see all its updates. C. Remote reads at the consuming client: Here, a client encounters commit records generated by other clients that involve reads of objects it does not host; in this case, it does not have the information required to make a commit/abort decision, since it has no local copy of the object against which to check the read version. To resolve this problem, we add an extra round to the conflict resolution process, as shown in Figure 5.3. The client that generates and appends the commit record immediately plays the log forward until the commit point, makes a commit/abort decision for the record it just appended, and then appends an extra decision record to the same set of streams. Other clients that encounter the commit record but do not host local copies of the objects involved can now wait for this decision record to arrive. Since decision records are only needed in this particular case, the Corfu object interface described in Section 5.1 is extended with an isShared function, which is invoked by the Tango runtime and must return true if decision records are required. Significantly, the extra phase adds latency to the transaction but does not increase the abort rate, since the conflict window for the transaction is still the span in the shared log between the reads and the commit record. D.
Remote reads at the generating client: Corfu does not currently allow a client to execute transactions and generate commit records involving remote reads. Calling an accessor on an object that does not have a local view is problematic, since the data does not exist locally; possible solutions involve invoking an RPC to a different client with a view of the object, if one exists, or recreating the view locally at the beginning of the transaction, which can be too expensive.

If we do issue RPCs to other clients, conflict resolution becomes problematic; the node that generated the commit record does not have local views of the objects read by it and hence cannot check their latest versions to find read-write conflicts. As a result, conflict resolution requires a more complex, collaborative protocol involving multiple clients sharing partial, local commit/abort decisions via the shared log; we plan to explore this as future work. A second limitation is that a single transaction can only write to a fixed number of Corfu objects. The multiappend call places a limit on the number of streams to which a single entry can be appended. As we will see in the next section, this limit is set at deployment time and translates to storage overhead within each log entry, with each extra stream requiring 12 to 20 bytes of space in a 1KB to 4KB log entry.

The decision record mechanism described above adds a new failure mode to Tango: a client can crash after appending a commit record but before appending the corresponding decision record. A key point to note, however, is that the extra decision phase is merely an optimization; the shared log already contains all the information required to make the commit/abort decision. Any other client that hosts the read set of the transaction can insert a decision record after a time-out if it encounters an orphaned commit record. If no such client exists and a larger time-out expires, any client in the system can reconstruct local views of each object in the read set synced up to the commit offset and then check for conflicts.

vCorfu presents itself as an object store to applications. Developers interact with objects stored in vCorfu and a client library, which we refer to as the vCorfu runtime, provides consistency and durability by manipulating and appending to the vCorfu stream store. Today, the vCorfu runtime supports Java, but we envision supporting many other languages in the future.
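The decision-record bookkeeping for remote reads, including the fact that an orphaned commit record can always be decided later from the log itself, might be sketched as follows. Names here are illustrative assumptions, not the actual Tango API:

```java
// Illustrative sketch of decision-record bookkeeping: the generating client
// publishes a decision after playing the log to the commit point; consumers
// without the read set wait, and after a time-out any client hosting the
// read set may publish the decision instead. Names are assumptions.
import java.util.HashMap;
import java.util.Map;

enum Decision { COMMIT, ABORT, UNKNOWN }

class DecisionTracker {
    private final Map<Long, Decision> decisions = new HashMap<>();

    /** Record the decision for the commit record at log position pos.
     *  Called by the generating client or, on time-out, by any client
     *  that hosts (or reconstructs) the transaction's read set. */
    void publish(long pos, boolean committed) {
        decisions.put(pos, committed ? Decision.COMMIT : Decision.ABORT);
    }

    /** Consumers poll for the decision; UNKNOWN means keep waiting, or,
     *  once the time-out expires, fall back to reconstructing the read set. */
    Decision lookup(long pos) {
        return decisions.getOrDefault(pos, Decision.UNKNOWN);
    }
}
```

The key property the sketch reflects is that the decision record is an optimization: any client can eventually compute the same decision from the shared log, so a crashed generating client never leaves the system stuck.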
The vCorfu runtime is inspired by the Tango runtime, which provides a similar distributed object abstraction in C++. On top of the features provided by Tango, such as linearizable reads and transactions, vCorfu leverages Java language features which greatly simplify writing vCorfu objects. Developers may store arbitrary Java objects in vCorfu; we only require that the developer provide a serialization method and annotate the object to indicate which methods read or mutate it, as shown in Figure 5.4. Like Tango, vCorfu fully supports transactions over objects with stronger semantics than most distributed data stores, thanks to inexpensive global snapshots provided by the log. In addition, vCorfu also supports transactions involving objects not in the runtime's local memory, opacity, which ensures that transactions never observe inconsistent state, and read-own-writes, which greatly simplifies concurrent programming. Unlike Tango, the vCorfu runtime never needs to resolve whether transactional entries in the log have succeeded, thanks to a lightweight transaction mechanism provided by the sequencer.

Each object can be referred to by the id of the stream it is stored in. Stream ids are 128 bits, and we provide a standardized hash function so that objects can be stored using human-readable strings. vCorfu clients call open with the stream id and an object type to obtain a view of that object. The client also specifies whether the view should be local, which means that the object state is stored in-memory locally, or remote, which means that the stream replica will store the state and apply updates remotely. Local views are similar to objects in Tango and especially powerful when the client will read an object frequently throughout the lifespan of a view: if the object has not changed, the runtime only performs a quick check call to verify no other client has modified the object, and if it has, the runtime applies the relevant updates.
Remote views, on the other hand, are useful when accesses are infrequent, the state of the object is large, or when there are many remote updates to the object – instead of having to play back and store the state of the object in-memory, the runtime simply delegates to the stream replica, which services the request with the same consistency as a local view. To ensure that it can rapidly service requests, the stream replicas generate periodic checkpoints. Finally, the client can optionally specify a maximum position to open the view to, which enables the client to access a historical version or snapshot of an object.
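In the spirit of Figure 5.4, a developer-supplied object might look like the sketch below. The annotation names and the map class are illustrative assumptions, not the actual vCorfu API; the point is only that read methods and mutating methods are marked so the runtime knows when to sync a view and when to append an update.

```java
// Sketch of an annotated object for a vCorfu-style runtime. Annotation
// names and the class itself are illustrative assumptions, not the real API.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.HashMap;
import java.util.Map;

@Retention(RetentionPolicy.RUNTIME) @interface Accessor {}
@Retention(RetentionPolicy.RUNTIME) @interface Mutator {}

class CorfuMap<K, V> {
    private final Map<K, V> backing = new HashMap<>();

    @Accessor   // reads state: the runtime would sync the view before calling
    public V get(K key) { return backing.get(key); }

    @Mutator    // mutates state: the runtime would append this update to the stream
    public V put(K key, V value) { return backing.put(key, value); }
}
```

With such annotations, a runtime can intercept mutators via a proxy and translate them into stream appends, while accessors trigger a check-and-sync against the stream before the in-memory state is read.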

Consistency is a highly desirable property for distributed systems

As a result, CH4 production made up only 5% of net GHG emissions at Compost PAP and 48% of those from Compost CH. This finding suggests that aeration during the thermophilic phase of composting is critical in minimizing the GHG footprint of EcoSan systems. Waste treatment ponds produced anaerobic conditions that generate high levels of CH4 and very little CO2. The GHG contribution of waste stabilization ponds can be mitigated by the use of CH4 gas capture and electricity generation. Anaerobic digestion coupled to CH4 capture has been used to treat livestock manure and in some wastewater treatment plants for decades. However, many waste stabilization ponds throughout the world, including those sampled in this study, do not capture and reuse the CH4 generated during waste treatment. Market barriers – including the initial financial investment costs of CH4 capture technology and electricity generation facilities – regulatory challenges, and lack of access to technology severely limit its widespread adoption in regions of the world that currently lack basic sanitation services. Further, the efficiency of pathogen removal in waste stabilization ponds is highly variable, thereby limiting the effectiveness of waste stabilization ponds in regions of the world with limited technological and capital resources. Nitrous oxide is produced during the microbial-mediated processes of nitrification and denitrification, and can be produced in conditions with high to low levels of oxygen. Nitrification, the conversion of ammonium to nitrate through microbial oxidation, requires a source of ammonium and oxygen. During nitrification, N2O can form via the nitrate reductase enzyme under anaerobic conditions. Denitrification, the reduction of nitrate to dinitrogen through a series of intermediates, requires a source of nitrate, organic carbon, and limited oxygen.

Nitrous oxide can form as a result of incomplete denitrification to N2. Human waste contains organic carbon and a range of organic and inorganic forms of nitrogen. Therefore, the oxygen conditions of a particular waste treatment pathway are a major control on N2O fluxes. In the anaerobic waste stabilization ponds, N2O was undetectable. In municipal wastewater treatment plants, measurements of N2O vary widely and can be mitigated by technologies that remove total nitrogen. Grass fields where waste was illegally disposed exhibited high and spatially variable N2O and CH4 fluxes. We observed a trade-off between N2O and CH4 across sanitation pathways. Whereas waste stabilization ponds produced high levels of CH4 and no N2O, both EcoSan systems tended to have high fluxes of N2O. Nitrous oxide in compost piles can be produced by both nitrification and denitrification processes present along oxygen, moisture, and C:N gradients within the pile. Reducing occurrences of anaerobic microsites could further limit N2O production from EcoSan compost; however, N2O production could still result from nitrification conditions. Despite this pollution swapping, and taking into account the greater global warming potential of N2O, the largest contributor to GHG emissions from these systems is still CH4. Therefore, without systems in place to capture and oxidize CH4, the aerobic EcoSan system is a favorable system relative to waste stabilization ponds and illegal disposal on grass fields with respect to its impact on the climate.

The management of aerobic biogeochemical conditions in compost piles plays a key role in minimizing CH4 and N2O losses. We observed large differences in CH4 emissions, and consequently in overall GHG emissions, across the two EcoSan systems in our study, implying opportunities for improved management.
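The weighting of CH4 and N2O against CO2 in such comparisons comes down to simple CO2-equivalent arithmetic. The sketch below uses the IPCC AR5 100-year global warming potentials (roughly 28 for CH4 and 265 for N2O); any flux values plugged into it are purely illustrative, not measurements from this study.

```java
// Back-of-the-envelope CO2-equivalent arithmetic using IPCC AR5 100-year
// global warming potentials (CH4 ~28, N2O ~265). Any flux values used with
// this helper are illustrative, not measurements from this study.
class Gwp {
    static final double GWP_CH4 = 28.0;
    static final double GWP_N2O = 265.0;

    /** Net GHG flux in CO2-equivalents; all three fluxes in the same mass units. */
    static double co2eq(double co2, double ch4, double n2o) {
        return co2 + GWP_CH4 * ch4 + GWP_N2O * n2o;
    }
}
```

For example, a hypothetical flux of only 10 mass units of CH4 alongside 100 of CO2 nearly quadruples the CO2-equivalent total, which is why CH4 dominates the footprint of the anaerobic pathways despite N2O's larger per-unit potency.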
We tested this explicitly with a targeted comparison of GHG emissions above two piles, one with a permeable soil lining and one with an impermeable cement lining, at the Compost CH site and with a second comparison of GHG emissions before and after turning pile material. We found that CH4 emissions from the cement lined pile were approximately four times greater than from the soil lined pile, despite no significant temperature or CO2 emission differences.

This is evidence that higher CH4 emissions were driven by a larger methanogenic fraction, expressed as the amount of CH4 emitted per unit CO2, in the lined pile, indicating a greater prevalence of anaerobic conditions due to higher pile moisture. The cement-lined pile in the paired-pile experiment had no drainage mechanism and therefore likely represents a high endmember for wet pile conditions and high CH4 emissions. Notably, the standard design for cement-lined piles at the Compost CH site includes a lateral overflow PVC pipe, providing passive drainage, while at Compost PAP a soil lining is used without a PVC drain, and in both cases the CH4 emissions observed were much lower. The very high CH4 emissions from the undrained pile therefore likely reflect a very high moisture end-member for thermophilic composting. For future EcoSan implementations there are important trade-offs to consider in pile design. The advantages of a PVC drain and associated storage tank are that potentially pathogenic liquid is contained, can be recycled to maintain optimal pile moisture levels under drier conditions and, if sanitized, the nutrient content of the leachate can be recycled. In contrast, a soil floor costs less, but it is important to consider, and monitor for, the potential leaching of pathogens, nutrients that can cause algal blooms, and trace metals that could contaminate drinking water when using a permeable floor. Future studies should further explore the quantity, composition, and timing of pile leaching, and assess the efficacy of soil as a filter to avoid contamination of groundwater alongside lowering GHG emissions. Though use of a permeable soil floor and/or PVC overflow drain showed potential to reduce EcoSan composting GHG emissions, the effects of turning the pile, even once, were even greater. Emissions of CH4 dropped two orders of magnitude, approaching zero, within one day after turning and stayed comparably low through the third day.
Piles in the EcoSan second stage are turned every 7–10 days; therefore it is likely that CH4 emissions remain low throughout this entire phase, as originally evidenced by the >3-month time points in the initial measurements at Compost CH and Compost PAP. From these results, it may appear beneficial from a climate forcing perspective to reduce the time spent in the first static phase; however, this must be balanced by the need to safely manage the pathogen burden at this early treatment stage, especially if piles are turned using manual labor.

Turning must only begin when pathogen abundance in the material has been reduced to a safe level, thus safeguarding the health of employees and the local environment. Furthermore, though not observed in this study, past work has also shown that pile turning can increase N losses. Significant spikes in ammonia and N2O emissions follow mechanical turning of composting manure. It is therefore possible that within EcoSan composting there may be a trade-off between N2O and CH4 emissions between the initial static and later turned stages, similar to our observations across different sanitation pathways. Our gridded sampling scheme also allowed us to test the hypothesis that aeration drives CH4 emissions within piles. The results confirmed the utility of our model-based sampling design, with mean CH4 emissions four to five times higher from pile centers than pile corners or edges, regardless of the general drainage characteristics of the pile. An alternative to early turning may be the use of additional engineering to further aerate the middle of large piles where, even under well-drained pile conditions, we observed steep increases in CH4 emissions. One solution may be the use of perforated PVC pipes for passive aeration of the pile at relatively low cost. Thermophilic composting is most effective under aerobic conditions. Understanding how management can best support aerobic conditions provides a win-win opportunity to increase the operational efficiency of composting for treating waste while reducing the associated GHG emissions. The preliminary comparisons in this study captured significant effects of pile lining permeability and pile turning on GHG emissions during thermophilic composting, and helped us interpret the longer-term dynamics of GHG emissions during composting.
Although our targeted measurements identify two of the management controls of GHG differences, robust estimates of emission factors for EcoSan composting require a more comprehensive assessment of GHG dynamics, considering different management options, and with more extensive sampling throughout the composting operational stages. In sum, these results support the potential for EcoSan composting to further reduce CH4 and overall GHG emissions associated with waste containment and treatment if piles are carefully designed and effectively managed to support aerobic metabolism.

Strong consistency guarantees simplify programming complex, asynchronous distributed systems by increasing the number of assumptions a programmer can make about how a system will behave.

For years, system designers focused on how to provide the strongest possible guarantees on top of unreliable and even malicious systems. The rise of the Internet and cloud-scale computing, however, shifted the focus of system designers towards scalability. In a rush to meet the needs of cloud-scale workloads, system designers realized that if they weakened the consistency guarantees they provided, they could greatly increase the scalability of their systems. As a result, designers simplified the guarantees provided by their systems and weaker consistency models such as eventual consistency emerged, greatly increasing the burden on developers. This movement towards weaker consistency and reduced features is known as NoSQL. NoSQL achieves scalability by partitioning or sharding data, spreading the load across multiple nodes. In order to maintain scalability, NoSQL designers ensured requests were not required to cross multiple partitions. As a result, they dropped traditional database features such as transactions in order to maintain scalability. While this worked for some applications, developers whose applications needed this functionality were forced to choose between a database with all the functionality they needed and adapting their applications to the new world of relaxed guarantees provided by NoSQL. Programmers found ways around the restrictions of weaker consistency by retrofitting transaction protocols on top of NoSQL, or by finding the minimum guarantees required by their application. Chapter 2 explores this pendulum away from and back towards consistency. This dissertation explores Corfu, a platform for scalable consistency. Corfu answers the question: “If we were to build a distributed system from scratch, taking into consideration both the desire for consistency and the need for scalability, what would it look like?” The answer lies in the Corfu distributed log. Chapter 3 introduces the Corfu distributed log.
Corfu achieves strong consistency by presenting the abstraction of a log – clients may read from anywhere in the log but they may only append to the end of the log. The ordering of updates on the log is decided by a high-throughput sequencer, which we show can handle nearly a million requests per second. The log is scalable because every update to the log is replicated independently, and every append merely needs to acquire a token before beginning replication. This means that we can scale the log by merely adding additional replicas, and our only limit is the rate of requests the sequencer can handle. While Chapter 3 describes how to build a single distributed log, multiple applications may wish to share the same log. By sharing the same log, updates across multiple applications can be ordered with respect to one another, which forms the basic building block for advanced operations such as transactions. Chapter 4 details two designs for virtualizing the log: streaming, which divides the log into streams built using log entries which point to one another, and stream materialization, which virtualizes the log by radically changing how data is replicated in the shared log. Stream materialization greatly improves the performance of random reads, and allows applications to exploit locality by placing virtualized logs on a single replica. Efficiently virtualizing the log turns out to be important for implementing distributed objects in Corfu, a convenient and powerful abstraction for interacting with the Corfu distributed log introduced in Chapter 5. Rather than reading and appending entries to a log, distributed objects enable programmers to interact with in-memory objects which resemble traditional data structures such as maps, trees and linked lists. Under the covers, the Corfu runtime, a library which client applications link to, translates accesses and modifications to in-memory objects into operations on the Corfu distributed log.
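The token-then-replicate append path described above can be sketched in a few lines. Class names here are illustrative assumptions, not the Corfu client API; the replica set is modeled as a single in-memory map.

```java
// Minimal sketch of the append path: acquire a token (the next global log
// position) from the sequencer, then replicate independently at that
// position. Names are illustrative, not the actual Corfu client API.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

class Sequencer {
    private final AtomicLong tail = new AtomicLong(0);
    long nextToken() { return tail.getAndIncrement(); }  // one atomic increment per append
}

class SharedLog {
    private final Sequencer sequencer = new Sequencer();
    private final ConcurrentHashMap<Long, byte[]> entries = new ConcurrentHashMap<>();

    /** The token fixes the global order; the write itself can then proceed
     *  independently on the replica set (modeled here as a map). */
    long append(byte[] data) {
        long pos = sequencer.nextToken();
        entries.put(pos, data);
        return pos;
    }

    byte[] read(long pos) { return entries.get(pos); }
}
```

Because the sequencer only hands out positions and never touches the data, the data path scales with the number of replicas while the sequencer's counter increment remains the sole serialization point.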
The Corfu runtime provides rich support for objects.

Cross peaks in the CP-PDSD experiment represent rigid, dipolar-coupled components

Examining the reorganization of the secondary plant cell wall polymers due to mechanical preprocessing is important for the development of the plant cell wall model and effective utilization of biomass without recalcitrance. Conversion from plant biomass to bio-product often necessitates mechanical preprocessing in deconstruction methods; milling times vary but can be as short as 2 min and can exceed 4 hours. Milling of stem tissue at 30 Hz for 2 min was selected to allow direct comparison to DMSO swelling studies employing the same milling time. Typically, plant cell wall samples are milled at 30 Hz for 2 minutes followed by up to 24 hours of milling at 10 Hz, depending on the amount of material. As a result, these experiments report on cell wall structure after reorganization of the plant cell wall polymers that occurs during mechanical preprocessing. For example, solid-state NMR measurements on maize biomass after mechanical and solvent processing methods support lignin association with the surface of hemicellulose-coated cellulose fibers in the cell wall,61 a different result than those obtained from recent solid-state NMR measurements on less processed grass and other plant species biomass. CO2 labeling is challenging for large mature plants such as Poplar trees, given that labeling chambers would need to adapt over the life cycle of the organism, and there are few commercial sources. In this current study, the results of mechanical processing on 13C-labeled sorghum are compared at common laboratory milling times for native and milled stems; the stem is the tissue with the highest amount of secondary plant cell wall, boosting the sensitivity of the solid-state NMR recalcitrance markers.

Comparisons are limited to changes in signal intensity through the integrated comparison of peaks found in the control to the milled samples for the CP-PDSD, CP-rINADEQUATE, and rINEPT experiments. Polymers are evaluated across the secondary plant cell wall structure by combining the CP-based experimental observations, which examine highly rigid components, with the rINEPT observations of highly dynamic components of the sample. This current work also highlights initial characterization of highly dynamic lignin and hemicellulose polymers of the plant cell wall matrix in the rINEPT for the first time. In addition, the sorghum stems milled for 15 minutes are compared to previous work by Ling et al. 2019, which shows that the crystallinity of cotton cellulose decreases by an average of 40% across 13 techniques. This is important because cotton is a naturally pure form of cellulose,5,38 and the results of milling cellulose in the secondary plant cell wall can be observed in the sorghum stems milled for 15 minutes. For simplicity, the results are ordered by the polymer classes presented in Figure 2B: cellulose fibrils, structural hemicellulose, and lignin. FE-SEM images of the control and milled samples in Figure 6 show the morphological structure of the cellulose fibrils in their bundled cylindrical form as lines. Cellulose fibrils are typically on the order of 1-2 μm in diameter for a scale reference, which is on the order of some milling efforts of softwood “fines”. Figure 6A shows the cellulose fibrils in their bundled structure; in Figure 6B the stem vasculature can be seen as the large main cylindrical shape with thin cellulose fibrils forming lines across it. After milling, the vascular structures comprising the stem are lost; this makes sense, as the macroscopic stem structure is homogenized into a paste upon milling. However, the cellulose fibril structure remains after 2 minutes and 15 minutes of milling. Contrasting the control and stems milled for 2 minutes at 30 Hz, the individual cellulose fibers within the sample are largely similar, with a slightly rougher texture. After 15 minutes of milling stems at 30 Hz, there is noticeable fraying of the cellulose fibril bundles; within Figure 6E, the thinner cylindrical lines appear to be consistent with microfibrils.

Milling cellulose fibrils for 15 minutes at 30 Hz produced very different results in this study compared with previous work on milled cotton. Here, for sorghum, more intact fibril morphology is maintained when milling the cellulose fibrils within a secondary plant cell wall. In the Ling et al. 2019 study, cotton cellulose was fractured to the point that microfibril structures were obscured and fibril chunks remained. There was qualitatively less severe cracking on the surface of fibrils in Figure 6E in the plant cell wall sorghum sample in this study, and general fibril shapes are still apparent. In this current study, the initial morphology of the intact cellulose fibrils in the plant cell wall was approached with FE-SEM; other techniques may be considered for future evaluation of cellulose fibril structure within the heterogeneous plant cell wall during deconstruction for sorghum stems. Other techniques may yield further information related to recalcitrance due to lowered accessible cellulose fibril surfaces available for digestion. In the future, one class of attractive microscopies is vibrational microscopy for verifying cellulose, including crystalline and amorphous cellulose. However, implementing these techniques suffers from the complexity of an intact plant cell wall and cellulose fluorescence. The benefit of microscopy over spectroscopy for assessing cellulose in the plant cell wall is that the arrangement of the polymer in cellulose fibrils varies in orientation and direction, so techniques which can focus on one cellulose fibril at a time are favorable. Techniques such as Confocal Raman Spectroscopy with a 785 nm or 1064 nm lasing source or AFM-IR would both require optical arrangements suitable for detection of cellulose signals between 3000-400 cm-1 for informing on the cellulose fibril structure and on lignin or hemicellulose on cellulose fibril surfaces relevant for recalcitrance.

Vibrational microscopy would also have the advantage of confirming cellulose, as the cellulose fibril structure is obscured in deconstruction. For the scope of this current study, however, FE-SEM structures were confirmed using the literature98 and the assertion of cellulose predominance in the plant cell wall. For sorghum secondary plant cell walls subjected to vibratory milling, recalcitrance would be supported by the correlated decrease of crystalline cellulose and an increase in rigid amorphous cellulose. Such details can be extracted experimentally on the cellulose fibril structure using 2D solid-state NMR. Molecular insights specific to the constitution and state of the cellulose fibrils were assessed with CP-PDSD experiments at 1500 ms and 30 ms mixing times to assess crystallinity. Peaks in the CP-PDSD were assigned using previously characterized peaks for polymers in the sorghum secondary plant cell wall (Gao et al. 2020) and signals consistent with the CP-rINADEQUATE experiment for each dimension. The CP-based experiments filter for more rigid components of the secondary plant cell wall because the 1H-13C magnetization transfers are more efficient for rigid spins. The mixing time of the CP-PDSD is proportional to the through-space 13C-13C distance over which spins exchange magnetization. The downside of the experiment is the broader line shapes due to heterogeneous line broadening from spins in multiple orientations coupling at similar frequencies, but crucial information about the cellulose fibril morphologies can still be obtained. The 1500 ms mixing period for the 2D CP-PDSD experiment reports on the larger cellulose structure along fibrils and between fibrils. The 30 ms CP-PDSD experiment has a mixing time in which there is only enough time for the magnetization transfers to pass between the carbons within the glucose sugar of the monomer of the cellulose fibrils in Figure 8. First examination of the cellulose fibril shows magnetization transfers between D-glucose monomers of cellulose polymers in the 1500 ms CP-PDSD experiment. Cellulose carbon 1 to carbon 2 transfers have the same chemical shifts across both amorphous and crystalline cellulose.

The reductions in overall signal intensity, considering sample load, are negligible after 2 minutes of milling and >88% after 15 minutes of milling. This makes sense, as cellulose fibrils appear to be broken down into smaller microparticle fragments; according to the FE-SEM images, there may be fewer dipolar coupling-based magnetization transfers available along and between cellulose polymers after 15 minutes of milling. The proportional intensity changes between amorphous and crystalline cellulose signals of carbon 1 to 4 and carbon 1 to 6 magnetization transfers were assessed in the CP-PDSD experiments to identify the conversion of crystalline cellulose to amorphous cellulose. The proportion of crystalline to amorphous cellulose for the glucose carbon 1 to carbon 4 transfer appears to decrease more for the 2-minute milling period, to 99.70 ± 1.59% and 82.17 ± 0.88%, respectively, of the relative peak intensity before milling. The signal intensities for the crystalline cellulose and amorphous cellulose appear to be nearly equal for the glucose carbon 1 to carbon 4 transfer after 15 minutes of milling. The cellulose carbon 4 is of particular relevance, as the amorphous cellulose has a chemical shift around 84 ppm and the crystalline cellulose has a chemical shift around 89 ppm. The isolation of the cellulose carbon 4 peaks in solid-state NMR spectra makes them a more reliable marker for amorphous and crystalline cellulose because they have less overlap than other peaks. After 2 minutes of milling the crystalline cellulose content appears higher than the amorphous cellulose content, and the trend appears to also hold true for the 15-minute period. The reduction of amorphous cellulose signal may be due to amorphous cellulose becoming more mobile, resulting in less efficient CP transfer necessary for the 1500 ms CP-PDSD experiment. However, Ling et al. 2019 found that even within the 1D CP experiments a conversion from crystalline to amorphous cellulose was observable as part of their prediction: crystalline cellulose within cellulose fibrils becomes amorphous upon milling. The conversion of crystalline to amorphous cellulose observed after milling was not consistently observed with sorghum. The ratio of crystalline cellulose to amorphous cellulose remains the same in 1500 ms CP-PDSD experiments as larger fibril structures are broken down in the milling process. The proportional intensities of crystalline and amorphous cellulose signals for carbon 1 to 4 remained the same after milling consistently for 2 minutes and 15 minutes at 30 Hz. The hypothesis of cellulose increasing recalcitrance was not supported, as demonstrated by the unambiguous carbon 1 to 4 cellulose peaks for crystalline and amorphous cellulose. The 1500 ms CP-PDSD carbon 1 to carbon 6 transfers provide similar insight. The proportion of crystalline to amorphous cellulose for the glucose carbon 1 to carbon 6 transfer appears to decrease more for the 2-minute milling period, to 90.47 ± 0.90% and 87.38 ± 0.88%, respectively. The signal intensities for the crystalline cellulose and amorphous cellulose appear to be nearly equal for the glucose carbon 1 to carbon 6 transfer after 15 minutes of milling. Both the carbon 4 and carbon 6 regions highlighted in Figure 7A–B appear to be low in signal intensity, and it is worth noting that the superposition of the noise over weak, broad peaks could distort the integrations, so careful interpretation is necessary. Although the stems milled for 15 minutes show rigid cellulose within the fibril has greater amorphous cellulose than crystalline cellulose, the low signal intensity makes this observation somewhat ambiguous. The lower overall signal intensity of the stems milled for 15 minutes means the noise is superimposed over the tops of the peaks, compounding the error in these results. This factor is particularly relevant for the sample milled for 15 minutes, given the signal intensity decreases by at least 80% for all peaks. For this reason, over-interpretation may be a liability when
assessing recalcitrance using carbon 6 signals of cellulose in the 2D CP-PDSD experiments, whereas cellulose carbon 4 chemical shift changes provide more information on recalcitrance due to morphology changes from crystalline to amorphous cellulose. Where the 1500 ms 2D CP-PDSD can give some insight into the larger cellulose structure, the 30 ms 2D CP-PDSD experiment reports on the D-glucose subunit of the cellulose polymer. Similarly, the 30 ms CP-PDSD experiment showed that the overall decrease in cellulose signal was negligible after 2 minutes and >86% after 15 minutes of milling. When signal intensity is severely reduced, the interpretation of integrations is less reliable due to noise superimposed over the tops of peaks. For this study, in the 2D PDSD experiments, interpretations of carbon 1 to 4 peaks are more reliable than the carbon 1 to 6 signals. The proportion of crystalline to amorphous cellulose for the glucose carbon 1 to carbon 4 transfer appears to change more for the 2-minute milling period, to 101.53 ± 1.69% and 87.91 ± 1.00%, respectively. The signal intensities for the crystalline cellulose and amorphous cellulose appear to be nearly equal for the glucose carbon 1 to carbon 4 transfer after 15 minutes of milling.

The goal was to identify crop varieties that have high salt tolerance

More generally, on the time scale of centuries, marine transgression may cause rapid salinization of entire aquifers. In Western Europe, Holocene transgressions of a few thousand years have brought salt water of corresponding age to a depth of over 200 m. Nevertheless, at many places all around the world fresh and brackish waters have been found on the continental shelves. Numerical modeling by Post and Simmons illustrates how low-permeability lenses protect fresh water from mixing with downward invading overlying saline ocean waters with higher density. Van Duijn et al. gave a general, modern stability analysis of such density stratified flows below a ponded surface. Saltwater intrusion by tides in the mouths of rivers—The Zuiderzee Works and Delta Plan stopped salinization from tidal motion in the North. In the Southwestern Delta, tidal motion was only partly eliminated and no major freshwater reservoirs are available, like the Lakes IJssel and Marken for the northern provinces. Instead, fresh water supply in the southwest comes more directly from diversions of water from the major rivers. In the 20th century the quality of the Rhine water gradually deteriorated, until a series of international treaties brought improvement. The river water quality was further reduced by an inward-directed flow of high-density saline water underneath the outward-directed flow of lighter runoff water. Traditionally, the tides had free play and salinized the river water far inland, particularly in periods of low river flows. As a result of this salinization, in the 1970s the surface water in the important Westland greenhouse district between Rotterdam and The Hague was hardly suitable for use as irrigation water.

The growers themselves made it even worse by using drainage return flows, resulting from high leaching fractions combined with high applications of fertilizers. The RAND Corporation did a policy analysis of water management for the Netherlands, balancing engineering ambitions and agricultural interests, specifically regarding the desired irrigation water quality for use in greenhouse horticulture. The Delta Works have provided some relief from saltwater intrusion in river mouths; however, conflicting agricultural and environmental interests continue to dominate the discussion about seawater blockage as related to the desire to maintain brackish aquatic ecosystems. Saltwater intrusion by inward flow of water to land below sea level—Fig. 32 shows the depth of the brackish-fresh interface in the coastal regions of the Netherlands. Similar maps are available for the coastal region of Belgium. Because fresh water floats on top of saline groundwater in the dune areas along the west coast, saline intrusion is strongest in the North and Southwest, where coastal dunes are absent. At numerous locations in the dunes, fresh dune water is pumped as a source for preparing drinking water for the western part of the country, where the groundwater is too saline because of continued saltwater intrusion. For example, a dune area of 3400 ha along the western coast has supplied fresh drinking water to Amsterdam since 1853. To keep the floating bodies of fresh water in the dunes intact, the freshwater pumping is compensated for by excess rainfall and infiltration of river water, partly after storage in Lakes IJssel and Marken. Fresh water floating on top of salt water in agricultural fields—Recently, freshwater lenses floating on top of saline groundwater have been fully recognized as being of great importance, not only in the dunes, but also in farmer fields along coastal regions where upward seepage of saline groundwater occurs.
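The floating freshwater bodies described above are commonly approximated with the classical Ghyben-Herzberg relation, a standard hydrogeology result that is not derived in this review: for every meter the fresh water table stands above sea level, the fresh/saline interface sits roughly 40 m below sea level. A minimal sketch, with typical (assumed) densities:

```python
# Ghyben-Herzberg approximation for the depth of the fresh/saline
# interface below sea level under a floating freshwater lens.
# Standard textbook relation, not taken from this review itself.

RHO_FRESH = 1000.0   # kg/m3, fresh groundwater (assumed typical value)
RHO_SALINE = 1025.0  # kg/m3, typical seawater (assumed typical value)

def interface_depth(head_m, rho_f=RHO_FRESH, rho_s=RHO_SALINE):
    """Depth (m) of the fresh/saline interface below sea level for a
    freshwater table standing head_m above sea level."""
    return rho_f / (rho_s - rho_f) * head_m

# A water table only 1 m above sea level supports ~40 m of fresh water,
# which is why coastal dunes can host such large freshwater bodies.
print(interface_depth(1.0))   # -> 40.0
print(interface_depth(5.0))   # -> 200.0
```

The 1:40 ratio explains both the thick lenses under the dunes and their fragility: lowering the water table by pumping shrinks the lens forty-fold at depth, which is why pumping must be compensated by infiltration.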

These freshwater lenses can come from rain, melted snow, and increasingly also from irrigation of agricultural lands. Eeman et al. made a detailed analysis of the thickness of a freshwater lens and the transition zone between this lens and the upwelling saline water. Starting from a fully saline condition between drains or ditches and assuming constant rates of saltwater upwelling and freshwater infiltration, they showed that a freshwater lens will grow until it reaches a maximum size. Moreover, they concluded that the fresh/saline ratio of the drainage water will change from zero to the infiltration/upward-seepage ratio. However, as shown by others, seasonal variations of infiltration and plant root withdrawal of fresh water will cause temporal fluctuations of the thickness of the lenses and the fresh-saline ratio of the drainage water. Salt tolerance in a generally humid and cool climate—Most salt tolerance data for field crops and flower species date from before 2000 and were reviewed by Van Bakel et al. and Stuyt et al. The latter compilation, in Dutch, is the most complete, providing salt tolerance thresholds for 35 individual crops or groups of crops. Salt tolerance data for greenhouse horticultural crops were brought together by Sonneveld and by Sonneveld and Voogt, and included interactions between plant nutrition and salinity. In the last decade, salt tolerance tests have been carried out at Salt Farm Texel. The 160 m2 experimental plots were irrigated using eight replications of seven different salt concentrations, obtained by mixing saline seawater with fresh water. Because of the high hydraulic conductivity of the soil, it was possible to maintain the desired concentration throughout the root zone, irrespective of the weather in the growing season. Salt tolerance was tested for six crops: potato, carrot, onion, lettuce, cabbage, and barley. The data were analyzed using the Maas and Hoffman and Van Genuchten and Gupta models.
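The Maas and Hoffman model cited above is a piecewise-linear response: relative yield stays at 100% up to a crop-specific threshold salinity and then declines linearly with the saturation-extract salinity. A minimal sketch; the barley and potato parameter values below are illustrative assumptions, not results from the Salt Farm Texel trials:

```python
def relative_yield(ec_e, threshold, slope):
    """Maas-Hoffman piecewise-linear relative yield (%).

    ec_e: soil saturation-extract salinity (dS/m)
    threshold: crop-specific tolerance threshold (dS/m)
    slope: % yield loss per dS/m beyond the threshold
    """
    if ec_e <= threshold:
        return 100.0
    return max(0.0, 100.0 - slope * (ec_e - threshold))

# Illustrative (threshold, slope) values, assumed for demonstration:
crops = {"barley": (8.0, 5.0), "potato": (1.7, 12.0)}
for name, (a, b) in crops.items():
    print(name, relative_yield(6.0, a, b))
# at ECe = 6 dS/m, barley stays at 100% while the more
# sensitive potato drops well below it
```

Fitting the threshold and slope per crop to trial data such as the Texel plots is exactly what the Maas-Hoffman analysis mentioned in the text does; the Van Genuchten and Gupta model replaces the piecewise line with a smooth sigmoid.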

An alternative model, based on the Dalton-Fiscus model for simultaneous uptake of water and solutes, was explored by Van Ieperen. Salinization in the countries around the North Sea—In principle, the lowland coastal regions of Belgium, Germany, the Netherlands, Sweden, and the United Kingdom face similar threats from salinity as the Netherlands. For example, there was widespread flooding of farmland along the UK east coast during the Southern North Sea storm of December 5, 2013. Due to different economic and political priorities, the responses to such events have varied. The Netherlands was saved from potentially disastrous flooding in 2013 thanks to the Delta Plan response to the 1953 Storm Flood. Gould et al. analyzed the impact of coastal flooding on agriculture in Lincolnshire, UK. They noted that flood risk assessments typically emphasize the economic consequences of coastal flooding on urban areas and national infrastructure and tend to omit the long-term impact of salinization of agricultural land. Considering this long-term salinization, they calculated financial losses ranging from £1366/ha to £5526/ha per inundation, which would be reduced by between 35% and 85% by post-flood switching to more salt-tolerant crops. Egyptians have practiced irrigated agriculture for about 5000 years in the Nile River valley, using basin irrigation dependent on the rise and fall of flows in the Nile river. Since 3000 BCE, the Egyptians have constructed earthen banks to form flood basins of various sizes, filled with Nile water to saturate soils for crop production. Egyptian irrigated agriculture has been sustainable for thousands of years, in contrast to other civilizations in Mesopotamia. Reasons were provided by Hillel, pointing to the annual natural flooding that deposited nutrient-rich soil material, annual cycles of rising and falling of the Nile river that created fluctuations of the groundwater table and yearly flushing of salts from its narrow irrigated flood plains, and the annual inundations that occurred in the late summer and early fall, after the spring growing season. With the construction of the Aswan High Dam, most of the land was converted to perennial irrigation and the irrigated area increased from 2.8 to 4.1 Mha. The year-round irrigation and the lack of leaching by annual pulsing of the Nile river triggered soil salinization. More than 80% of Egypt’s Nile water share is used in agriculture. Water saving in agriculture is a major challenge because annual per capita water availability in Egypt is expected to decrease from a current level of 950 m3 to 560 m3. The salts of the Nile basin are of intrinsic origin, from seawater intrusion, or from irrigation with saline groundwater. Since the climate of Egypt is arid, with annual rainfall ranging from 5 to 200 mm compared to evaporation rates of 1500–2400 mm, crop production is not possible in most parts of Egypt without irrigation. Salinity problems in the irrigated areas are widespread, and about 1 million ha are already affected. At present only 5.4% of the land resources in Egypt are of excellent quality, while about 42% are relatively poor due to salinity and sodicity problems. Soils in the Nile valley and the Delta are Vertisols, characterized by substantial expansion on wetting and shrinkage on drying. In Egypt, productive lands are finite and irreplaceable and thus should be carefully managed and protected against all forms of degradation. Other countries of the Nile basin also have salinity problems. Kenya has about 5 Mha of salt-affected lands. In Tanzania, about 30% of the area is characterized by poor drainage and soil salinity problems. The soil salinity problems in countries such as DR Congo, Uganda, Burundi, and Rwanda are less prevalent; however, soils are low in fertility. The salt-affected lands in South Sudan and Sudan are in the White Nile irrigation schemes. This area has hardly been utilized for agricultural production despite having great potential, due to the availability of water from the Nile. In other parts of South Sudan, low soil fertility and lack of good-quality seeds for crops and forages are the major bottlenecks in the development of agriculture.
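The Gould et al. figures quoted earlier for Lincolnshire can be combined in simple back-of-envelope arithmetic to bound the residual per-hectare loss after post-flood switching to salt-tolerant crops:

```python
# Back-of-envelope ranges from the Gould et al. figures quoted above:
# per-inundation losses of 1366-5526 GBP/ha, reduced by 35-85% through
# post-flood switching to more salt-tolerant crops.

loss_low, loss_high = 1366.0, 5526.0   # GBP/ha per inundation
red_low, red_high = 0.35, 0.85         # fractional loss reduction

best_case = loss_low * (1 - red_high)    # smallest residual loss
worst_case = loss_high * (1 - red_low)   # largest residual loss

print(f"residual loss: {best_case:.0f} to {worst_case:.0f} GBP/ha")
# crop switching narrows the damage to roughly 205-3592 GBP per hectare
```

Even in the worst case, the residual loss is well below the unmitigated upper bound, which is the economic argument the authors make for treating crop switching as part of flood recovery planning.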

Ethiopia stands first in Africa in the extent of salt-affected soils, with an estimated 11 Mha of land exposed to salinity. This corresponds to 9% of the total land area and 13% of the irrigated area of the country. These soils are concentrated in the Rift Valley, the Wabi Shebelle River Basin, the Denakil Plains, and other lowlands and valleys of the country, where 9% of the population lives. Currently, soil salinity is recognized as the most critical problem in the lowlands of the country, resulting in reduced crop yields, low farm incomes, and increased poverty. Insufficient drainage facilities, poor-quality groundwater for irrigation, and inadequate on-farm water management practices are usually held responsible for the increasing salinity problems. Despite the widespread occurrence of salt-affected soils, Ethiopia does not have an accurate database on the extent, distribution, and causes of salinity development. Most of the saline soils are concentrated in the plain lands of the Rift Valley System, the Somali lowlands in the Wabi Shebelle River Basin, the Denakil Plains, and various other lowlands and valley bottoms throughout the country. The introduction of large-scale irrigation schemes without the installation of appropriate drainage systems has also resulted in the rapid expansion of soil salinity and sodicity problems in the lower Wabi Shebelle basin of Gode. The distribution of surface salinity in the four largest regions of Ethiopia is given in Table 5. Sudan has built four dams on the Nile during the last century to provide irrigation water to an additional 18,000 km2 of land. This has made Sudan the second most extensive user of Nile river water, after Egypt. Despite these arrangements, Sudan has not achieved its full production potential due to a lack of water infrastructure for equitable water distribution among farmers, lack of farm inputs, and low soil fertility. In Egypt, about 85% of the available water resources are consumed by the agriculture sector. The completion of the Aswan dam increased the intensity of irrigation, which created waterlogging problems in many parts, contributing to the pollution of land and water resources. In Egypt, surface and subsurface drainage systems have been installed to control rising water tables and soil salinity. In addition, crop-based management is used to combat soil salinization. Farmers were encouraged to use agricultural drainage water to irrigate crops, thereby reducing disposal problems. However, the unregulated application of drainage water for irrigation has reduced crop yields and polluted soil and water resources. In addition to agricultural chemical residues and salts, drainage waters include treated and untreated domestic wastewater. The use of organic amendments and the mixed application of farmyard manure and gypsum has been useful in reducing soil salinity and sodicity. Recently, phytoremediation, or plant-based reclamation, has been introduced in Sudan, for example to reduce soil sodicity instead of using gypsum. In the absence of surface and subsurface drainage systems, farmers in Ethiopia continue to manage salt-affected soils by adopting traditional salt management solutions. These include direct leaching of salts, planting salt-tolerant crops, domestication of native wild halophytes for agropastoral systems, phytoremediation, chemical amelioration, and the use of organic amendments such as animal compost. Farmers have also used various drainage designs, allowing salts to settle before the water is reused for irrigation. However, all such practices have failed to mitigate salinity problems in the long term. Hence crop yields continue to decline, resulting in reduced farm incomes, food shortages, and increased poverty. Many of the smallholder farmers also work as daily laborers, causing unprecedented farmer migration to nearby urban areas and exacerbating prevalent problems of urban unemployment. With the increasing demand for food from its rising population, Egypt is trying to expand its irrigated agricultural area.

There is general evidence of reduced P uptake in salt-affected soils

As pointed out in this review of the role of microorganisms in mitigating abiotic plant stresses, their use can open new and emerging applications in agriculture and also provide excellent models for understanding stress tolerance, potentially to be engineered into crop plants to cope with abiotic stresses such as soil salinity. In another study, Marks et al. demonstrated that dramatic changes in the salinity of salt marsh soils, as caused by storm surges or freshwater diversions, can greatly affect denitrification rates, which is especially relevant for nutrient removal management of eutrophic waters such as in the Mississippi delta. Rath et al. studied such dynamic conditions through the bacterial response to drying-rewetting in saline soils and concluded that increased soil salinity prolonged the time required by soil microbes to recover from drought, both in terms of their growth and their respiration. Biochar is defined as organic matter that is carbonized by heating in an oxygen-limited environment. The properties of biochar vary widely, depending on the feedstock and the conditions of production. Biochar is relatively resistant to decomposition compared with fresh organic matter or compost, and thus represents a long-term carbon store. Biochar stability is estimated to range from decades to thousands of years, but its stability decreases as ambient temperature increases. It has been shown that application of biochar to soil can improve soil chemical, physical, and biological attributes, enhancing productivity and resilience to climate change, while also delivering climate-change mitigation through carbon sequestration and reduction in GHG emissions.

Chaganti et al. evaluated the potential of using biochar to remediate saline–sodic soils in combination with various other organic amendments, using reclaimed water with moderate SAR. Results showed that leaching with moderate-SAR water was effective in reducing the salinity and sodicity of all investigated soils, irrespective of amendment application. However, combined applications of gypsum with organic amendments were more effective at remediating saline–sodic soils, and therefore could have the supplementary benefit of accelerating the reclamation process. Akhtar et al. used a greenhouse experiment to show that biochar amendment at different soil salinity levels could alleviate the negative impacts of salt stress on a wheat crop through reduced plant sodium uptake due to its high adsorption capacity, by decreasing osmotic stress through enhanced soil moisture content, and by releasing mineral nutrients into the soil solution. However, it was recommended that more detailed field studies be conducted to evaluate the long-term residual effects of biochar. The application of marginal waters to augment irrigation water supplies in particular has led to investigations of the plant nutrient uptake impacts of saline-sodic soils. It has been shown that soil salinity can induce elemental nutrient deficiencies or imbalances in plants, depending on the ionic composition of the soil solution, due to its effect on nutrient availability, competitive uptake, transport, and partitioning within the plant. Most obviously, soil salinity affects nutrient ion activities and produces extreme ion ratios in the soil solution.
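The SAR (sodium adsorption ratio) mentioned above is computed from the major cation concentrations of the water, expressed in meq/L. A minimal sketch with a hypothetical water analysis:

```python
import math

def sar(na_meq, ca_meq, mg_meq):
    """Sodium adsorption ratio from cation concentrations in meq/L:
    SAR = Na / sqrt((Ca + Mg) / 2)."""
    return na_meq / math.sqrt((ca_meq + mg_meq) / 2.0)

# Hypothetical reclaimed-water analysis (meq/L), for illustration only:
print(round(sar(na_meq=18.0, ca_meq=4.0, mg_meq=4.0), 2))  # -> 9.0
```

Gypsum amendment works on exactly this ratio: dissolving gypsum raises the Ca term in the denominator, lowering the SAR of the percolating water and displacing exchangeable Na during leaching.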

As a result, for example, excess Na+ can cause sodium-induced Ca2+ or K+ deficiency in many crops. Nutrient uptake and accumulation by plants are often reduced under saline soil conditions because of competition between the nutrient in question and other major salt species, such as sodium-induced potassium deficiency in sodic soils. Soil salinity is expected to interact with nitrogen both through competition between NO3− and Cl− ions in uptake processes, as high chloride concentrations may reduce nitrate uptake and plant development, and indirectly through disruption of symbiotic N2 fixation systems. Interactions with phosphorus vary with plant genotype and with external salinity and P concentrations in the soil solution, which are highly dependent on soil surface properties. Calcium, magnesium, and sulfur, as well as micro-nutrients, all interact with soil salinity, Na, and one another. Imbalances of these elements cause various pathologies in plants, including susceptibility to biotic stresses. Among potential alternative land uses of saline soils is their economic potential for biomass production using forestry plantations, as many tree species are less susceptible to soil salinity and sodicity than agricultural crops. A thorough review of the economic potential of bioenergy from salt-affected soils has been presented by Wicke et al. Using the FAO soil salinity database, they estimated the global economic potential of biosaline forestry at about 53 EJ/y when including agricultural land, and 39 EJ/y when excluding agricultural land.

Plantation forestry has been advocated to control dryland salinity conditions, with fast-growing, versatile Eucalyptus species used to lower shallow groundwater tables; however, long-term salinity/sodicity stresses prohibit significant economic returns. Much will depend on regional production costs. Studies have shown that biosaline forestry may contribute significantly to energy supply in certain regions, such as sub-Saharan Africa and South Asia, and has the additional benefits of improving soil quality and soil carbon sequestration, thus justifying investigating biosaline forestry in the near future. Economic losses of productive land by salinization are difficult to assess; however, various evaluations have reported annual costs of US $250–500/ha, suggesting a total annual economic loss of US $30 billion globally. As pointed out by Qadir et al., a large fraction of salt-affected land is farmed by smallholder farmers in Asia and SSA, necessitating off-farm supplemental income activities, with others leaving their land for work in cities. Given that much of the projected global population growth is in those regions, prioritization of research and infrastructure investments to mitigate agricultural production impacts there is extremely relevant. A thorough analysis of the production losses and costs of salt-induced land degradation was done by Qadir et al., based on crop yield losses; however, they point to the need to also consider additional losses such as those from unemployment, health effects, infrastructure deterioration, and environmental costs. Their calculations compared economic benefits using cost-benefit analysis of “no action” vs “action” for various case studies. A yield gap analysis by Orton et al. for wheat production in Australia showed that soil sodicity alone represented 8% of the total wheat yield gap, amounting to more than AUS $1 billion. In their sustainability assessment of the expanding irrigation in the western US, comparing real outcomes with those predicted by Reisner in his book Cadillac Desert, Sabo et al. included an economic analysis of agricultural revenue losses as a result of increased soil salinity in the western US. Using the USDA NRCS soils database and available crop salt tolerance information, they estimated a total annual revenue loss from reduced crop yields of 2.8 billion US dollars. In all, the values of salinized lands depreciate significantly and incur huge economic impacts, putting into question the sustainability of agricultural land practices that induce soil salinization. Australia is the world’s driest inhabited continent, with an average annual rainfall of 420 mm and a high potential for the formation of salt-affected landscapes. Development of agricultural practices in Australia began after European settlement and expanded widely during the 20th century. Earlier, the indigenous population found their food by hunting and foraging. They indirectly depended on soils for plant food, but they did so without soil management. The European settlers were unaware of the soil characteristics they had to work with. Salt has been accumulating in the Australian landscape over thousands of years through small quantities blown in from the ocean by wind and rain. In addition to mineral weathering, salt accumulation is also associated with parna, a wind-blown dust coming from the west and the south-west of the continent.

Many soils of the arid to sub-humid regions of Australia contain significant amounts of water-soluble salts, dominantly as sodium chloride. Their dense subsoils are frequently characterized by moderate to high amounts of exchangeable sodium and magnesium, and are generally named duplex soils. Discussing the genesis and distribution of saline and sodic soils in Australia, Isbell et al. concluded that salts from a variety of sources have probably contributed to the present saline and sodic soils. In the early part of the 20th century, the Australian government initiated a nation-wide soil survey with soil analysis. As early as the 1930s, soil surveys in the Salmon Gums district, Western Australia, found that salt accumulation in surface soils and subsoils occurred in more than 50% of the 0.25 million ha surveyed. These surveys also found that, for the major soil types, virgin areas had higher accumulations of salts in the upper meter than areas cleared of vegetation. In one of his earlier observations in the Mallee region of Southern Australia, Holmes found a salt bulge more than 4 m below the surface in a virgin heath community. Northcote and Skene, examining numerous data relating to the morphology, salinity, alkalinity, and sodicity of Australian soils, presented the areal distribution of saline and sodic soils in Australia, using the classification of salt-affected soils of Table 2. While 32.9% of the total area of Australia is salt-affected, sodic soils occupy 27.6% of this area. Hence, most of the research during the middle of the 20th century focused on sodic soils and their management. Northcote and Skene defined sodic soils as those having an ESP between 6 and 14, and strongly sodic soils as those having an ESP of 15 or more. The recent Australian soil classification defined “Sodosols” as soils with an ESP greater than 6. However, soils with ESP 25–30 were excluded from Sodosols because of their very different land-use properties. California’s natural geology, hydrology, and geography create different forms of salinity problems across the state, ranging from seawater-intrusion-induced salinity along the central coast to concentration of salts in closed basins such as the Tulare Lake basin in the Central Valley. In addition, some of the most productive soils in California, such as those in the western San Joaquin Valley, originate from ocean sediments that are naturally high in salts. Irrigation water dissolves that salt and moves it downstream. The salinity of the Colorado river water used for irrigation in the Imperial Valley is higher than that of surface water from snowmelt. Although salinity problems can be found in various locations around California, as shown in Fig. 22, historically the major salinity issues are found in the western San Joaquin Valley and the Imperial Valley. A thorough review of the history of irrigation in California was presented by Oster and Wichelns. Today, California’s interconnected water system irrigates over 3.4 Mha of farmland. The Imperial Valley in southern California has experienced salinity problems for many decades, since the Colorado river was tapped for irrigation in the early 1900s. By 1918, salinity had forced approximately 20,234 ha out of production and damaged thousands more hectares. The rapidly deteriorating, salinizing agricultural lands forced the Imperial Irrigation District to construct open-ditch drainage channels. However, due to the high salinity of Colorado river water, heavy soils, and poor on-farm water management at the time, the drainage system did not prevent continued salinization of the Imperial Valley. To address the problem, partnerships between the federal government and the Imperial Irrigation District were formed in the early 1940s that resulted in the installation of underground concrete and tile drainage on thousands of hectares of farms. The subsurface drainage system and improved on-farm water management led to a reduction in the rate of soil salinization, resulting in flourishing agricultural production in the Imperial Valley. The water from the subsurface drainage tiles was routed to the Salton Sea. However, agricultural runoff and drainage flows with high salt content have affected the elevation of the Salton Sea and increased its salinity, threatening various wildlife species. On the positive side, the salinity load coming into the Imperial Valley, as measured by salinity levels at Imperial Dam, has not increased as previously projected. A US Bureau of Reclamation report gave a flow-weighted salinity of 680 mg/L at Imperial Dam in 2011, which had remained constant over past decades. Another major region in California significantly impacted by salinity is the western San Joaquin Valley (SJV), comprising the southern half of the Central Valley. From the second half of the 19th century to the early 1900s, the SJV experienced rapid development of irrigated agriculture, and with it came drainage and salinity problems. The salinity problems on the west side of the valley can be attributed to high water tables near the valley trough caused by an expansion of irrigated agriculture upslope from the valley, to soils on the west side that are derived from alluvium originating from coastal mountains and other marine environments, and to degradation of water quality in the San Joaquin river. In 1951, some of the fresh water in the San Joaquin river was diverted to irrigate agricultural lands on the east side north of Friant dam. The diverted water was replaced with saltier water from the Central Valley Project. These changes, coupled with agricultural return flows, led to increased salinity downstream in the San Joaquin river, the main conduit draining the valley.
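The ESP-based sodicity classes described earlier (Northcote and Skene's sodic and strongly sodic ranges, and the Australian Sodosol criterion) can be written as a small classifier; the handling of the exact class boundaries is our assumption:

```python
def northcote_skene_class(esp):
    """Sodicity class after Northcote and Skene, as quoted in the text:
    sodic at ESP 6-14, strongly sodic at ESP 15 or more.
    Treatment of the exact boundary values is an assumption here."""
    if esp >= 15:
        return "strongly sodic"
    if esp >= 6:
        return "sodic"
    return "non-sodic"

for esp in (3, 6, 14, 15, 30):
    print(esp, northcote_skene_class(esp))
# -> non-sodic, sodic, sodic, strongly sodic, strongly sodic
```

Note that under the more recent Australian classification quoted in the text, any soil with ESP greater than 6 would qualify as a Sodosol, except the ESP 25–30 soils excluded for their very different land-use properties.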

Micro-irrigation systems are largely preferred when irrigating with more saline waters

They have been successfully used in orchards, vineyards, and vegetable crops in many regions around the world with salinity problems, including Australia, Israel, California, Spain, and China. They are well suited because of their use of high-frequency irrigation, thereby preventing dry soil conditions so that soil solution salinities remain close to that of the irrigation water, especially in the vicinity of the emitters where root density is highest. The salt distribution that develops around a micro-irrigation system depends on the system type, but typically salts concentrate on the periphery of the wetted bulb for surface drip irrigation, whereas salt concentrations typically increase with soil depth for sprinkler systems. The upward capillary movement of water from the wetted soil depth near a subsurface drip emitter results in soil surface salt accumulation as water is lost through root water uptake and soil evaporation. For conditions where seasonal rainfall is inadequate to push those salts near the soil surface further down, options include preseason flood irrigation or sprinkling, moving drip lines every few years when replacing them, or changing crop rows between seasons. However, anecdotal evidence in San Joaquin valley orchards has shown that salinity around drip irrigation systems can limit the volume of the root zone, thereby limiting nutrient uptake, particularly of nitrogen. The residual nitrogen ends up being leached to groundwater, either by excess irrigation or winter recharge, causing environmental degradation of groundwater quality. The complex interactions between soil salinity stress and water and nitrate applications were discussed in a model simulation study by Vaughan and Letey.

Libutti and Monteleone suggested that, since soil salinity management is bound to increase the leaching of N, best practices should balance the volume of water needed to reduce salinity against that required to avoid or minimize NO3 contamination of groundwater. They suggested to “decouple” irrigation and fertigation. Abating this salinity-N paradox with coupled nutrient-salt management will require site-specific considerations. Because of the potentially high control of irrigation amount and timing, Hanson et al. showed that subsurface drip directly below the plant row can effectively be used for irrigation under shallow water table conditions as long as the groundwater salinity is low. They showed that converting from furrow or sprinkler to subsurface drip is economically attractive and can achieve adequate salinity control through localized leaching for moderately salt-sensitive crops such as processing tomatoes, eliminating the need for drainage water disposal where relevant. Controlled drainage (CD)—Whereas drains are conventionally installed in conjunction with irrigation systems in arid regions, controlled drainage systems originated in humid regions through control of the field water table using shallower drainage laterals and control structures in the drainage ditches or sumps. In controlled drainage systems, irrigation and drainage are part of an integrated water management system in which the drainage system controls the flow and water table depth in response to irrigation. Depending on the objectives of the CD system, it can reduce deep percolation and nitrate concentrations in drainage water, augment crop water needs through shallow groundwater contribution, and reduce drainage water volume and salt loads for disposal.
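The localized leaching mentioned above is often sized with the classical leaching requirement formula (a standard relation in irrigation practice, e.g. after Rhoades; it is not derived in this review): the minimum leaching fraction is LR = ECw / (5·ECe,t − ECw), where ECw is the irrigation water salinity and ECe,t the crop threshold salinity. A minimal sketch:

```python
def leaching_requirement(ec_w, ec_e_threshold):
    """Classical leaching requirement: minimum fraction of applied water
    that must drain below the root zone to hold root-zone salinity at
    the crop threshold (both salinities in dS/m).
    Standard formula, not taken from this review itself."""
    return ec_w / (5.0 * ec_e_threshold - ec_w)

# Example with assumed values: moderately saline water (ECw = 1.5 dS/m)
# on a moderately salt-sensitive crop with a threshold of 2.5 dS/m:
lr = leaching_requirement(1.5, 2.5)
print(round(lr, 3))   # -> 0.136
```

The salinity-N paradox discussed in the text lives in this number: every extra unit of leaching fraction applied for salinity control is also a unit of percolate that can carry nitrate to the groundwater, which is why decoupling irrigation from fertigation is attractive.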

Use of marginal waters—When freshwater resources are limited, salt-tolerant crops can be irrigated with more saline water for reuse, for example treated wastewater or drainage water. Management options include applying irrigation water that is a mixture of saline and fresh water; cycling saline water with fresh water depending on growth stage; using crop rotations between salt-sensitive and salt-tolerant crops, depending on when more saline water is available; or using sequential cropping as described in Ayars and Soppe. In addition to reducing freshwater requirements, this decreases the volume of drainage water requiring disposal or treatment. A series of articles presenting the use of marginal waters has been edited by Ragab. In general, research results in this issue demonstrate that waters of much poorer quality than those usually classified as “suitable for irrigation” can, in fact, be used effectively for growing selected crops under a proper integrated management system, as long as there are opportunities for leaching to prevent detrimental effects, such as from sodicity. Studies have shown that drip irrigation gives the greatest advantages, whereas sprinkling may cause leaf burn. Cycling strategies are generally preferred, but beneficial effects decreased under DI. In addition, blending does not require added infrastructure for mixing the different water supplies in the desired proportions. Precision agriculture (PA) is increasingly becoming an established farming practice that optimizes crop inputs by striving for maximum efficiencies of those inputs, thus increasing profitability while at the same time reducing the environmental footprint of those improved practices. While farming has always been about maximizing yield and optimizing profitability, precision farming has allowed for differential application of crop inputs across the farmer’s field, leading to more sustainable management. PA became possible through the broad availability of global positioning system and geographical information system technologies with satellite imagery in the 1980s.
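The blending of saline and fresh supplies discussed above reduces to a simple mixing calculation if EC is assumed to blend approximately linearly (a common simplification; mixing is exact for salt mass, approximate for EC). The water qualities below are hypothetical:

```python
def saline_fraction(ec_target, ec_fresh, ec_saline):
    """Volume fraction of saline water in a blend that hits ec_target,
    assuming EC mixes approximately linearly (a common simplification;
    exact for salt mass, approximate for EC)."""
    return (ec_target - ec_fresh) / (ec_saline - ec_fresh)

# Hypothetical example: fresh canal water at 0.4 dS/m, drainage water
# at 4.0 dS/m, and a target irrigation salinity of 1.3 dS/m:
f = saline_fraction(1.3, 0.4, 4.0)
print(round(f, 3))   # -> 0.25
```

A quarter of the blend can then come from the marginal supply, which is the sense in which blending stretches freshwater resources; cycling achieves the same average exposure over time without the mixing step.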

It was focused on achieving maximum yields, despite spatial variations in soil characteristics across agricultural fields. It enabled farmers to vary fertilizer rates across the field, guided by grid or zone sampling. Inherent to precision agriculture (PA), therefore, is the use and refinement of the field soil map, in combination with soil and/or plant sensors. Whereas early PA applications depended solely on the soil map and its refinement, more sophisticated approaches have been introduced because of the parallel development of on-the-go sensor technologies, which allow real-time soil and/or plant monitoring during the growing season, thus expanding PA toward spatio-temporal applications. For a review of a broad range of such on-the-go sensors, we refer to Adamchuk et al., including electrical/electromagnetic and electrochemical sensors for soil salinity and sodium concentration measurements. Whereas specific electrode sensors are available to measure Na concentration in soil solution, most EM sensors were developed to indirectly measure soil moisture by correcting for salinity interference, or to measure bulk soil electrical conductivity (ECb). The sole exception is the porous matrix sensor, originally designed by Richards and reviewed by Corwin, which directly measures the electrical conductivity of in-situ soil pore water through an electrical circuit with the electrodes embedded in a small porous ceramic element that is inserted in the soil. The EC measurement is solely a function of the solution salinity because the air-entry value of the ceramic is such that it will not desaturate beyond 1 bar. Corrections are required for temperature and for the response time needed for ions to diffuse from the soil solution into the ceramic. In their synthesis of high-priority research issues in PA, McBratney et al. addressed the need to consider temporal variations, as yields typically vary from year to year. For irrigation applications, knowledge of within-season variations is critical for best management practices (BMPs) that minimize crop water and salinity stress. This has led to the term and application of Precision Irrigation (PI), adhering to the definition of PA but applied to irrigation practices. Whereas traditional irrigation management strives for uniform irrigation across the irrigated field, the goal of PI is to apply water differentially across the field to account for spatial variation in soil properties and crop needs, and thus also to minimize adverse environmental impacts and maximize efficiencies. Moreover, PI allows for temporal adjustments of irrigation during the growing season in response to changing weather conditions, including accounting for rainfall. PI can adjust water and fertilizer amounts according to differential tree/crop needs, by controlling both application rate and timing at the individual tree/crop level or for larger management units. PI uses a whole-systems approach, with the goal of applying irrigation water and fertilizers using the optimal combination of crop, water, and nutrient management practices. As defined by Smith and Baillie, precision irrigation meets multiple objectives of input use efficiency, reducing environmental impacts, and increasing farm profits and product quality.
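The temperature correction for EC readings mentioned above is commonly done with a linear normalization to the 25 °C reference temperature. A minimal sketch follows; the coefficient of roughly 1.9% per °C is a widely used default and stands in here as an assumption, since sensor manuals supply calibrated values.

```python
def ec_at_25c(ec_t, temp_c, alpha=0.019):
    """Normalize an electrical-conductivity reading (dS/m) taken at
    temp_c (deg C) to the standard 25 deg C reference, using the common
    linear correction of about 1.9%/deg C (alpha is an assumed default;
    use the coefficient calibrated for the specific sensor)."""
    return ec_t / (1.0 + alpha * (temp_c - 25.0))

# A 2.0 dS/m reading taken at 20 deg C corresponds to a somewhat
# higher conductivity when referenced to 25 deg C:
print(round(ec_at_25c(2.0, 20.0), 3))
```

At the reference temperature the correction is the identity, so `ec_at_25c(x, 25.0)` returns `x` unchanged.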

It is an irrigation management approach that includes four essential steps: data acquisition, interpretation, automation/control, and evaluation. Typically, data acquisition is achieved by sensor technologies, while data interpretation occurs by evaluating simulation model outcomes, e.g., of crop response and salt leaching. Control is achieved by automatic controllers of the irrigation application system using information from both the sensors and the simulation models, whereas evaluation closes the loop by adjusting the PI system. In addition to electrochemical sensors such as specific electrodes, optical reflectance devices such as near- and mid-infrared spectroscopy methods have been developed to quantify specific soil ion concentrations, particularly soil nitrate content. Over the past 20 years or so, many new soil moisture and salinity sensors have come to market, most of which can be included in wireless data acquisition networks. Selected reviews and sensor comparisons include Robinson et al. and Sevostianova et al. Shahid et al. showed the field results of a real-time automated soil salinity monitoring and data logging system, tested at the ICBA Dubai Center for Biosaline Agriculture. Recently there has been increased use of geophysical techniques for delineation of PI irrigation zones and for in-season irrigation and soil salinity management. For example, Foley et al. demonstrated the potential of using ERT and EM38 geophysical methods for measuring soil water and soil salinity in clay soils, although they emphasized the need for calibration. Whereas traditionally one would consider only drip or micro-sprinkler irrigation as a PI method, the broader definition can apply to most pressurized irrigation methods. Specifically, Variable Rate Irrigation (VRI) is applied to center pivot, lateral move, and solid set systems, as reviewed recently by O’Shaughnessy et al. Many of the aspects of PI apply equally to such sprinkler systems; however, their inherent complexity has precluded the required development of user-friendly interfaces for decision support, which lags behind the engineering technology. Specifically, there is a need to fuse GIS, remote sensing, and other temporal information with the decision support system (DSS), allowing management zones to change over the growing season. Recent evaluations of the impacts of VRI on crop yield and water productivity were presented by Barker et al. and Kisekka et al., showing potential improvements when using VRI or MDI, but additional research is strongly advocated, especially because of the significantly increased investments required. Another limitation to date on the adoption of PI is that large-scale VRI systems require many sensors, which can be cost-prohibitive, while determining the placement and number of sensors needed is not straightforward. It is worth noting that PI can also be applied to surface irrigation systems, as described in Smith and Baillie. For example, automated gates coupled with SCADA systems and real-time data analytics can be used to optimize flow rates and advance times to ensure infiltration rates match variable soil conditions. The application of PI to maintain plant-tolerable soil salinity levels was introduced by Raine et al., who identified research priorities at the time that would allow PI to be effective, pointing out that the level of precision, water application uniformity, and efficiency of most irrigation practices is sub-optimal. Among the identified knowledge gaps was the lack of agreement between field and model-simulated data, especially for multidimensional model applications such as those required for drip irrigation and for spatially variable salt and water distributions at the individual plant root zone scale. This puts into question the usefulness of computer modeling for soil salinity management purposes, especially if there is a general absence of soil salinity measurements to validate model simulations. Another limitation to successful PI is the lack of information on crop root response to salinity when considering the whole rooting zone in multiple dimensions, as well as on crop growth stage. A central component of a road map toward precision irrigation is moving from a single management point within an agricultural field toward defining management zones across the field, and eventually toward close to plant-by-plant resolution where appropriate. It requires cost-effective sensors, wireless sensing and control networks, automatic valve control hardware and software, real-time data analytics and simulation modeling, and a user-friendly, visual decision support system. Many sensor types and technologies are being developed and introduced for soil moisture sensing; however, few applications include soil salinity sensing in concert with soil moisture monitoring. For PI to advance further, there is a great need for much-improved and cost-effective multi-sensor platforms that combine measurements of soil salinity with soil moisture and nitrate concentration. For a recent review of contemporary wireless networks and data transfer methods, we refer to Ekanayake and Hedley, which includes the use of cloud-based databases with smart phone apps and web pages.
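The four-step precision-irrigation loop described above (data acquisition, interpretation, automation/control, evaluation) can be sketched as a simple per-zone control cycle. All thresholds, field names, and the toy water-balance rule below are illustrative assumptions, not values or methods from the text.

```python
# Minimal sketch of the PI loop: sensors -> interpretation -> control.
# Target moisture, the salinity threshold, and the leaching bump are
# hypothetical placeholders for a calibrated model.

def acquire(sensors):
    """Step 1: read soil moisture (m3/m3) and salinity (dS/m) per zone."""
    return {zone: reading.copy() for zone, reading in sensors.items()}

def interpret(readings, target_moisture=0.25, max_ec=2.0):
    """Step 2: translate readings into an irrigation depth (mm) per zone."""
    actions = {}
    for zone, r in readings.items():
        deficit = max(0.0, target_moisture - r["moisture"])
        depth = deficit * 100.0      # toy conversion of deficit to mm
        if r["ec"] > max_ec:
            depth *= 1.2             # extra water to leach accumulated salts
        actions[zone] = round(depth, 1)
    return actions

def control(actions):
    """Step 3: command the valves (here, just emit a schedule)."""
    return {zone: f"apply {mm} mm" for zone, mm in actions.items()}

sensors = {"zone_A": {"moisture": 0.18, "ec": 2.4},
           "zone_B": {"moisture": 0.27, "ec": 1.1}}
schedule = control(interpret(acquire(sensors)))
# Step 4 (evaluation) would compare post-irrigation readings against the
# targets and adjust target_moisture/max_ec for the next cycle.
print(schedule)
```

The dry, saline zone receives water plus a leaching fraction, while the zone already at target moisture receives none; closing the loop means feeding the next round of sensor readings back into `interpret`.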

No change in copepod swimming behavior was observed to result from this treatment

Zooplankton were collected from the Bridge Tender Marina in Wilmington, North Carolina, using a plankton net. Samples were diluted in whole seawater, aerated, and used within 12 hours of collection. Under a dissecting microscope, individual calanoid copepods were selected using Pasteur pipettes and placed in beakers with bottoms made of Nitex mesh that were submerged in filtered and UV-treated seawater. Before experiments, copepods were dyed red to make the organisms easy to visualize in videos. To dye the plankton, the mesh beaker was submerged in a solution of Neutral Red for 20 minutes. To test the effect of copepod shape and drag without swimming behavior, dead copepods were used as prey. The copepods were selected and dyed as described above, then heat-shocked. To compare copepod swimming behavior with a smaller prey that does not escape, nauplii of Artemia spp. were hatched from frozen cysts by placing cysts in aerated, filtered seawater. Nauplii between 2 and 3 days old were selected using Pasteur pipettes, housed in mesh-bottomed beakers, and underwent the same dye treatment as the copepods. In some cases prey were captured on the far side of the observed tentacles. If a prey item carried in the flow “disappeared” behind an illuminated tentacle and did not re-emerge, we assumed that it was captured. When this occurred, the tentacles were observed carefully in subsequent frames of the video, and in every case the captured plankton became visible when the tentacles moved, the prey fluttered into view during peak velocities, or the prey washed off the tentacles.

In addition, aerial-view photos of each sea anemone in still water were taken directly after the experiment and captured plankton were noted. No discrepancies occurred between the total number of captured prey counted by the end of the experiment and the prey observed on the tentacles once the experiment was complete. Predator-prey interactions were identified by the behavior of the prey. “Pass” described when prey passively swept by the anemone within the capture zone. “Avoid” described when a copepod actively changed trajectory with an escape jump to avoid contact with the predator. A “bump” described when prey passively bumped into a tentacle but continued without a capture or escape. “Escape” described when a copepod bumped into a tentacle then actively swam off. “Capture” described when prey bumped into a tentacle and was held by the anemone. Importantly, captured prey did not always lead to retention, so a final term, “loss,” was used to describe when prey would dislodge from the tentacle. The interactions “bump” and “escape” do not result in a capture, so “loss” only refers to prey removed after a capture. The rates of predator-prey interactions were used to calculate efficiency. In Chapter 2, capture and trapping efficiency were calculated based on the proportion of encountered prey so that these values could be compared between predators with different feeding modes. In this chapter, “retention efficiency” is defined as the proportion of captured prey that was retained so that we could compare the ability of the predator to hold onto prey that have different swimming behaviors. Since the duration of experiments was short relative to the average ingestion times for sea anemones, most captured and retained prey were not ingested during the videos. Therefore, the retention efficiency for sea anemones feeding on different prey alludes to feeding success but is not a confirmed measure of how much the predators consumed.
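The interaction categories and the retention-efficiency definition above can be expressed as a short calculation. The tallies below are invented for illustration only, and treating encounters as physical contacts (bump + escape + capture) is a reading of the definitions, not a stated formula from the study.

```python
# Hypothetical tallies of the interaction categories defined above.
interactions = {"pass": 62, "avoid": 8, "bump": 12, "escape": 5,
                "capture": 13, "loss": 4}

# Assumption: an "encounter" here is any physical contact with a tentacle.
encounters = (interactions["bump"] + interactions["escape"]
              + interactions["capture"])

# Capture efficiency: captured prey as a proportion of encountered prey.
capture_efficiency = interactions["capture"] / encounters

# Retention efficiency: retained prey as a proportion of captured prey
# ("loss" removes prey only after a capture).
retained = interactions["capture"] - interactions["loss"]
retention_efficiency = retained / interactions["capture"]

print(f"capture efficiency:   {capture_efficiency:.2f}")
print(f"retention efficiency: {retention_efficiency:.2f}")
```

Because “loss” applies only after a capture, retention efficiency is bounded by the capture count, which is why the two efficiencies need not move together as flow conditions change.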
Most of the zooplankton prey passed through the capture zone of a sea anemone without contacting the predator.

In weak waves, prey passively bumped into the predator, although live copepods came into brief contact with a sea anemone less often than nauplii or dead copepods. In strong waves, the proportion of “bump” interactions increased for all prey types. Living copepods were able to avoid or escape the predator more in weak waves than in strong waves, but this difference was not significant. Nauplii and dead copepods do not actively avoid or escape from predators. Yet the proportion of predator-prey interactions that resulted in capture did not vary with exposure to stronger waves. The largest proportion of prey passed near the sea anemone without reacting. When solitary sea anemones preyed upon copepods, the prey avoided or escaped the predator more in weak waves than in strong waves. With a downstream predator, prey avoidance and escape swimming occurred less than in the same flow over solitary sea anemones, and increased in stronger waves, though not significantly. Predator-prey interactions between copepods and solitary sea anemones in still water were included to compare whether the differences in behavior over downstream sea anemones were due to slower flow conditions. In still water, the proportion of prey avoidance and escape responses was also low and increased as flow increased. The proportion of prey captured was not significantly different between solitary and downstream sea anemones, nor was it affected by increases in flow. Many studies of benthic suspension feeders test the effect of flow on feeding rate by animals in unidirectional flow with passive and uniform prey. Encounter rates increase with water velocity, which leads to higher ingestion rates. In this study, stronger waves led to increased encounter rates only for passive particles, such as dead copepods. For prey that swim and perform escape maneuvers, stronger waves did not significantly enhance encounter rates.
In weak waves, sea anemones encountered copepod prey at higher rates than nauplii and dead copepods, which suggests prey swimming behavior affects variability of encounter rates.

The differences in how flow affected encounter rates for the three prey types were not mirrored in capture or retention rates. For passive prey, higher encounter rates with a benthic predator did not result in greater rates of capture. Copepods in weak waves encountered a predator at a higher rate than nauplii, but capture rates were similar, which indicates that the capture and subsequent retention of prey does not scale equally from encounter rates for prey with different behavior. Importantly, retention rates were low for both nauplii and copepods in both weak and strong flow regimes. Dead copepods represented the extreme range of retention rates, since these prey were retained at high rates in weak waves but were not retained at all in strong waves. The comparisons between rates of encounter and capture for prey with different swimming behavior suggest the importance of evasive responses in avoiding contact with a predator, reducing passive bumps into predators, and jumping free after getting captured. The proportions of predator-prey interactions for nauplii and dead copepods were similar. Copepod avoidance might have reduced passive bumping into predators in weak waves, but the proportion of captures remained the same in weak and strong waves. Downstream sea anemones encountered fewer prey than solitary sea anemones. Upstream neighbors can deplete the water of prey as flow passes over the clone. The encounter, capture, and retention of prey by downstream sea anemones were independent of flow. Although these predators encountered fewer prey than solitary sea anemones, they retained prey at approximately the same rate. For benthic suspension feeders, turbulent and wavy flow enhanced encounter rates for passive prey but not for prey with active swimming behavior. Higher encounter rates of passive prey did not result in higher capture or retention rates.
Similarly, feeding in the presence of neighbors lowers encounter rates, but retention efficiency remains the same in weak and strong waves. This study highlights the use of realistic flow conditions, prey with swimming behavior, and the presence of neighbors to examine passive suspension feeding in benthic organisms.

Soil is vital to humankind and our livelihood. Soil processes affect the quality of the food we eat, the water we drink, and the air we breathe, and soil is the foundation of our living and transportation infrastructures. As the world’s population continues to grow and society expects a wider range of food selections, providing this more selective world with nutritious food and feed will largely depend on our ability to maintain and sustain productive agricultural soils. Recognizing that soils have a central place in achieving food security, we note that the available arable land resource is decreasing at an alarming pace. In fact, we are at a point in time that could be designated as a decade of peak agricultural land globally, indicating that the world’s area of productive arable land is nearing its maximum. This is so because the annual expansion rate of new farmland is becoming less than the land area removed from agriculture. Causes for the reduction in productive farmland include its conversion to urban and industrial development; its removal from production because it has been degraded, such as by soil erosion, compaction, or salinization; and threats to public health because of soil contamination. It is estimated that about 15% of the world’s total land area has been degraded. In addition to the acreage of productive agricultural land decreasing, freshwater resources are also becoming scarce as populations increase, demanding additional water for domestic and industrial use.

Moreover, while increasing volumes of water are diverted for maintaining healthy freshwater environments and ecosystems, water for irrigated agriculture is becoming restricted in many arid and semi-arid regions. We note that whereas only about 15% of the world’s agricultural land is irrigated, it produces about 45% of global food production, and an even larger share of fruit and vegetables. As high-quality freshwater availability becomes a major constraint globally, increasing the water use efficiency of irrigated agriculture is becoming essential. This form of agricultural intensification means doing more with less while simultaneously minimizing its environmental footprint and mitigating its contributions to climatic changes and/or adapting to them. Additional constraints on agricultural production include public debates and policy changes regarding its environmental impacts on soil, air, and water quality, the use of genetically modified foods, as well as the threat of a changing climate. Among various mitigation and adaptation options, one calls for sustainable intensification of agriculture, water- and climate-smart agricultural practices, as well as conservation agriculture to improve soil health and to minimize environmental impacts on soil, water, and air quality. In addition, other non-soil-related practices are suggested, such as closing crop yield and nutrient gaps and reducing food waste. Collectively, any of these land and water management practices serve to enhance soil quality, reduce the environmental footprint, conserve freshwater resources, and reduce soil degradation while sustaining food production. Hence, the preservation of our soils is crucial. It is no wonder, then, that we must address causal factors of soil degradation, such as water and wind erosion, soil contamination, and soil salinity. We note that the room to expand cropland beyond the estimated 12% of the terrestrial land surface is limited, because most productive lands are already in agricultural use, whereas converting additional land would lead either to increasing environmental impacts on marginal lands or to destruction of the world’s richest natural ecosystems. The importance of sustainable land management was recently acknowledged in the IPCC Special Report on Climate Change and Land, which highlights interactions and feedbacks between our changing climate, land degradation, sustainable land management, and food security, stating: “Land provides the principal basis for human livelihoods and well-being including the supply of food, freshwater and multiple other ecosystem services, as well as biodiversity. Human use directly affects more than 70% of the global, ice-free land surface. Land also plays an important role in the climate system.” Among the most prevalent forms of soil degradation, in addition to wind and water erosion and soil contamination, is human-induced soil salinization. Soil salinization occurs through the accumulation of water-soluble salts in the plant rooting zone, thereby impacting water and soil quality and inhibiting plant growth. Osmotic changes in soil water caused by total salinity reduce the ability of plants to take up water from the soil. In addition, specific ions such as Na and Cl negatively impact plant physiology and become toxic when absorbed by the plant in higher than beneficial amounts. Moreover, Na accumulation in soils with surface clay minerals causes soil swelling and dispersion, thereby reducing water infiltration and soil drainage and causing waterlogging and flooding in sodic soils. Geological salinization accounts for by far the largest fraction of the approximately 1 billion ha of salinized land, which makes up about 7% of the earth’s land surface. In addition, approximately one-third of the world’s irrigated land is salt-affected in some way, equal to about 70 Mha.


Emergent technologies and methods are also applied to these questions, such as “advances in geometry, graph theory, topology, control theory for chaotic systems, and novel approaches for managing and modeling uncertainty.” Mathematics, they said, is considered the most fundamental language for an understanding of “biocomplex systems.” As Anna Tsing writes, “The common assumption is that everything can be quantified and located as an element of a system of feedback and flow.” One research program that successfully met the NSF’s biocomplexity funding criteria in 2000 was the Bahamas Biocomplexity Project (BBP), a large multi-year proposal that situated itself as a mediator between two institutional milieus. The first was that of the NSF’s biocomplexity research program, as just described, with its concerns with interdisciplinarity and the production of more socially robust and politically relevant knowledge. The second was the ongoing and highly political marine conservation scene in the Bahamas at that time, which I will describe. I hope to highlight one of the ways in which scientific research practices manipulated and produced their own social reality in order to create the commensurable information required by the BBP. In 2000, the Bahamian government announced its political intention to create a Marine Reserve Network (MRN), which would initially include five protected areas within the borders of the archipelagic nation. These areas, following the trends in international conservation science, were to be designated as “no-take” reserves, areas in which the extraction of any form of marine resources is prohibited, and they were a response to concern over perceived environmental degradation.

The announcement of the MRN project came after two years of planning meetings and negotiation sessions between the Bahamas Department of Fisheries (now Marine Resources) and environmental non-governmental organizations, including the Bahamas Reef Environmental Education Foundation and the Nature Conservancy of the Bahamas, who feared that sustained over-fishing was leading to the destruction of the Bahamian coral reef system, biodiversity loss, and declines in fisheries productivity, and who predicted that the Bahamas would go the way of the rest of the Caribbean and lose species diversity and valuable commercial fish stocks. The proposed reserves are located near the island clusters called the Berry Islands, the Exuma Islands, and the Bimini Islands, and near the larger islands of Abaco and Eleuthera, and were conceived largely as a response to the declining populations of Nassau Grouper, Caribbean Spiny Lobster, and Queen Conch, the primary commercial species in the region, described as the Bahamian “holy trinity.” The BBP, a loose entity made up of researchers from fields including anthropology, biology, oceanography, physics, economics, and mathematics, stepped into this scene to conduct long-term, multi-phase research on these proposed marine reserves, their feasibility, and subsequent systemic effects, in order to produce policy recommendations for the Bahamian government as well as detailed and predictive models of coral reef functioning that could possibly be transferred for the management of other reef systems. The “Social Working Group,” led by an environmental anthropologist and an environmental economist, was supposed to go about “assessing patterns of resource use and attitudes about resource conservation among stakeholders,” using survey technologies to compile comparable data sets from communities situated near proposed reserve areas or those identified as having an economic reliance on fishing.

Anthropology, as a discipline representing the behavioral sciences, was enlisted here in order to make sure that the knowledge produced for the modeling project reflected the cultural reality of the Bahamas. Anthropology was seen as the disciplinary voice of the local, as the discipline that would legitimate claims to social truths made by the BBP. The findings of the Social Working Group have been recently collated, summarized, and published separately from the other BBP working group results and projects in an article that stresses the necessity of socioeconomic assessment as an aspect of environmental management. The authors, Broad and Sanchirico, focus their analytic attentions on the quantification of what they describe as socioeconomic variables and environmental perceptions of individuals and communities that have been gleaned from the fieldwork. Variables, for these social scientists, are those traits that can be pinned to particular individual or community entities and then compared across a number of individuals or within communities. They assessed specific variables found within the completed fieldwork data, such as the “demographic variables” of individuals surveyed, i.e. their age, number of children, level of education, marriage status, gender, occupation as either in tourism, fishing, or other, household income, if the mother was from the specific settlement, if past generations of their family had been occupational resource users like fishermen or farmers, if they had heard of marine reserves or been to a reserve meeting, and how frequently they went to the sea to use marine resources. These variables were calculated for five particular communities, identified as small islands or specific settlements on larger islands, and in total across all 485 survey participants.
Perceptions are also understood here as variables, but they are variables that pertain to participants’ responses to particular management-oriented ideas around “environmental conditions,” such as the state of local marine conditions, the level of threat to the marine environment, and the state of the enforcement of fishing regulations. These “perceptions” were then paired with variables such as the participant’s household income, fishing reliance, tourism reliance, and whether they thought there should be a local marine reserve put in place in the area.

The demographic variables are described as concrete “material aspects of life” while the perception variables are described as individual and community “perspectives.” When statistically linked, the material aspects of life can be shown to have more or less influence on a certain perspective in a certain place, and this data, when collated for specific communities, can become a management tool. The BBP is a prime example of the ways in which “the social” becomes implicated in contemporary conservation science projects in the living laboratory of The Bahamas and elsewhere. It is my contention that in order for an increasingly necessary sociality to become scientifically implicated in the production of such peculiar politics, it must first be assessed and formalized, which implies that it must be conceptually formed and designed- that is, made assessable in the first place. The development and deployment of the BBP’s social science survey and the results gleaned from the data demonstrate the potential possibilities and pitfalls of this work. Based largely on my strange experiences with the project, I have come to see the socioeconomic survey as a powerful example of the way in which biocomplexity research activated certain instrumental notions of individuality and community as sociality, bolstered by a certain notion of anthropology. The survey itself was concerned with statistically elucidating the connection between prevailing local economic conditions in an area and the variety and intensity of marine resource extraction conducted by individuals within that area, as well as what they thought about the appropriateness of such extraction- all part of what the project refers to as “human and environmental interaction,” mentioned above.
One of the defining features of the survey’s demographic variables is the categorization of each person interviewed by their current occupation, with a focus on either tourism or fishing, and by the occupational history of their parents and grandparents, with a focus on “resource use.” Following Julia Paley, this can be thought of as an articulation of subjectivity, activated in spatial and temporal frames, wherein occupation is tied to particular extractive activities, appearing later in the survey, involving notions of self-interest centered around livelihood. These interested notions are productive of idioms of personhood based on an assumption that individuals have rights and claims to extract value from the material environment, and that the form these claims to value take represents what distinguishes one person from the next and one settlement from the next in terms of perspective and perception.

In the language of conservation and development, people making similar claims- having similar perceptions linked to specific variables- can then be lumped into stakeholder groups, and these groups then become instrumentalized actors- or, in the case of the BBP, groups based on occupational interest create the category “commercial fisher” or “tourism employee,” and groups based on location become “communities,” which have their own distinctive traits depending on the stakeholder groups within them. These groups are made distinctive and therefore amenable to targeted management. Following Hayden, I would like to point out that it is not the identification of interests which explains social processes- i.e., the explanation that fishermen have particular interests in marine resources- rather, it is the analytic assembly of interests, values, and interested persons that is itself processual and worthy of study. Thinking in this manner allows the analytical focus to shift away from assessing the extractive traits of those who fall within a given occupational category or who manifest a predetermined variable, and toward the consideration of the work such notions do for those who would deploy them, such as the social scientists of the BBP. Interest, value, and variables then become ethnographic objects.
Such a focus helps demonstrate that “there can be no production of value without processes of subject formation,” and the persons and communities defined by occupation produced by the BBP demonstrate the instrumental creation of a realm of inclusion and exclusion dictating the ways in which people are recognized and assessed within a particular paradigm, in this case nascent biocomplexity research. The socioeconomic surveys employed by the BBP, rooted as they are within a particular logic, instrumentalize and activate particular figures of the local, the rural, and the Out Islands, usually in occupationally evaluative terms and variables that come to stand for a sort of personhood. Fisherman becomes an occupational category that signifies particular extractive activities for self-interested gain and claims to tradition, lineage, and subsistence, all accumulations of value, which are different from activities connected to the category of tourism employee. Tourism and fishing are construed here as existing in an inverted relationship, with fishermen hypothesized to be less likely to support marine reserve creation and tourism employees more likely, based on what are described as different forms of interaction with the marine environment, which are linked to diametrically opposed perceptions of that environment and what to do with it. Further, when and if the individual survey results are statistically aggregated and linked to the other forms of BBP research, community and locality may become reactivated as sites which also have a self-interested nature and attendant claims and rights to accumulate value. To evoke Julia Paley again, statistics becomes a tool for social diagnosis, wherein research subjects become the object of study and are prevented from acting as authors- their participation becomes drastically proscribed.
Interviewers, such as myself, who struggle to fit the given answers to survey questions into the format of the survey mode of information in order to create variables, also become produced as objects of the survey, standing in as representatives of the double legitimacy of anthropological social research and the transparency of the survey method itself. The Social Working Group and its publications are part of the orchestration of socially robust knowledge that comes with contemporary environmental management practices. The sociality of the research must be demonstrated, as Strathern would say, and it must be stabilized. What has been produced here by the BBP is a socioeconomic assessment which does this work of stabilization and demonstration in order to make Bahamian social forms which are manageable and an environmental management apparatus which has socially responsible options. As a participant myself in this social scientific design project, I wonder about the possibilities for targeted management mentioned by Broad and Sanchirico. It is one thing to show that there is a diversity of perspectives held by rural Bahamians about the marine environment and that management should recognize this diversity, as these authors argue, but it is quite another to make this diversity demonstrable, numerical, and localized- to operationalize constructed variables and orchestrate this data into a tool for targeting specific groups and communities for conservation education, economic development based in ecotourism, and various other forms of environmental management. This is work, simultaneously abstract and material, that ecological, biological, and conservation scientists and managers variously do. This work influences the way that the world is perceived, how new kinds of relations are formed, how futures are imagined, and what should be done about it all.