Performing nature's value: software and the making of Oregon's ecosystem services markets

Geographers of technology illustrate software code's contexts, effects, and agencies as they shape urban space and everyday life, but the consequences of code for nature remain understudied. Political ecologists have critiqued remote sensing and Geographic Information Systems (GIS) based conservation projects, but have not engaged more broadly with the role of software in the contested production, circulation, and application of ecological knowledge. Yet, around the world, data analytics firms and conservation nonprofits argue for optimizing environmental management through faster and bigger data collection and new techniques of data manipulation and visualization. I present a case study from the US state of Oregon, illustrating how conservationists and environmental regulators employ computer programming to plan markets in which entrepreneurs restore stream and wetland ecosystem services to earn offset credits. In these markets, code-executed algorithms constituting spreadsheets, web maps, and GIS utilities generate, relate, and make sense of the data that define credit commodities. I argue that code tends toward three effects: producing a landscape defined by wetlands' modeled value, performing social relations associated with nature's neoliberalization and financialization, and legitimating these moves. Although emphasis on the performativity of code and other technological objects is warranted, the contexts in which these are authored, deployed, and evaluated should remain central to understanding environmental governance. This is to caution against seeing technology as reducing nature and society to state or capitalist rationalities, and against differentiating prima facie between code's work on space and its work on nature. I call for bridging political ecology and geographies of technology in ways that can explain how code is generative of environmental knowledge, change, and conflict.

Around the world, ecologists, nonprofits, entrepreneurs, and state regulators are turning to computer programming as a key means of addressing conservation challenges. Assured the answer is yes, conservationists ask, "can technology save the planet?" (Gilpin, 2014). From Silicon Valley startups writing programs that turn satellite imagery of Amazon deforestation into data for both activists and hedge fund managers (Samuels, 2013), to scripts that coordinate "smart" meters of everything from home energy use to reservoir water levels in an "internet of things," algorithms executed by code stand to fundamentally reshape environmental management through automation, new techniques of visualization, displacement of existing knowledge regimes, and integration with processes of financialization. While geographers of technology have fruitfully developed ways of thinking about code and the making of everyday urban space (Dodge and Kitchin, 2004; Graham, 2005; Graham et al., 2013; Kitchin and Dodge, 2011), the goal of this article is to build an understanding of software's role in environmental knowledge, change, and conflict (cf. Büscher, 2013 on social media and conservation). Building from political ecologists' long history of investigating the creation, uses, and abuses of knowledge about particular environments (Robbins, 2012), this approach means understanding the effects of different kinds of algorithms (Graham, 2005), the agency and power of code (Dodge and Kitchin, 2004; Graham et al., 2013), and the contexts in which algorithms are written, deployed, and evaluated (Faraj and Azad, 2011). Here, I report on the making and contestation of code in an ecosystem services market in the US state of Oregon as a way of illustrating key effects, agencies, and contexts of code in environmental governance today.
One recent fall morning in exurban Portland, Oregon, an environmental consultant stands in a field of tall grasses evaluating his latest wetland restoration project. What led to the changes in this landscape, a former farm field being restored to a floodplain wetland? After entering data into a spreadsheet on his tablet computer, the consultant returns to the office that afternoon to use GIS and web-mapping utilities like Google Earth to estimate indicators of the wetland's ecological performance, such as its distance to other nearby habitats. The spreadsheet runs the results through algorithms that automatically generate numeric scores indicating the social, ecological, and economic value of the 20-acre restoration site. That score will determine how many credits can be sold into a state-managed market for ecosystem services (ES). In these markets, businesses such as housing developers that fill or remove wetlands and streams purchase offset credits to show regulators that they have compensated for their impact. The profit margins of entrepreneurs speculatively restoring ecosystems to sell credits depend on the score the algorithm returns. So too does the ability of Oregon Department of State Lands (DSL) regulators to meet their mandate of ensuring that the right kinds of wetlands are restored in the right parts of the watershed.
The same day, the ticket reservation program for Alaska Airlines suddenly stopped functioning, leaving travelers at airports across the west coast of the United States unable to check in and board their flights. Passengers waiting to depart from the Portland airport (PDX) took to social media to vent. One tweeted, "#alaskaair down in #pdx too, longest line I've seen in awhile. :(. Everyone just standing, waiting" (KING5 News, 2012). The glitch was not necessarily an uncommon event, but following Kitchin and Dodge (2011), when the check-in scripts crashed, computer code "transduced" the space into becoming something very different: what was a hub of international commerce turned into a glorified holding cell.
Here I argue that like airports, environmental markets exist through code, with three related effects. First, code produces marketable nature. Airline check-in systems depend on programming and have been integrated into the travel experience to the extent that if the system fails, airport terminals become altogether different spaces. In the same way, Oregon's ecosystem service market depends on code in Excel, GIS, and web-mapping utilities such that if the code doesn't work, consultants' rebuilt wetlands do not count as restored, for all regulatory and market intents and purposes. And as the airport becomes with code a space for commerce, Excel and other tools contribute to producing the restoration site as a site of state and market environmental management where revenue-generating offset credits can be calculated (or, a lost investment with unknown environmental outcomes). Some wetlands will never be created because of what code computes; other restoration projects will happen in specific spots in the landscape because of it. Second, software performs "nature's value," affording the kinds of remaking of state, capital, and science associated with the emerging international project of financializing natural capital (Johnson, 2013;Robertson and Wainwright, 2013). Finally, rather than appearing from nowhere, software materializes these relations, and the specificity of code matters to the process. In particular, its blackbox nature makes its creation and operation seem given, tending to depoliticize its effects (Graham et al., 2013;Robbins, 2001;Wilson, 2011).
The use of software in ES markets does not make the coded production of nature a fait accompli. Code-executed algorithms do often operate automatically, remotely, and as blackboxes, but even so the context of code in structures of authorship, deployment, and evaluation situates its agency in remaking environmental governance. Political ecologists and geographers of technology alike are interested in the agential properties of nonhumans and the extent to which technological objects like the airport check-in network, the restorationist's spreadsheets, or surveillance cameras are as material in the world as abstract relations of power (Birkenholtz, 2009; Braun and Whatmore, 2010; Kitchin and Dodge, 2011; Lansing, 2012; Meehan et al., 2013; Sundberg, 2011). Arguing that one or the other provides the ultimate analytical lens is unproductive. Some strive especially to illustrate nonhumans' agency in producing power effects (Holifield, 2009; Meehan et al., 2013; Mitchell, 2002; Sundberg, 2011), while critics claim this focus overlooks the role of existing, uneven social relations and the making of winners and losers (Castree, 2002; Kirsch and Mitchell, 2004; Lave, 2015). And so many in turn emphasize the operation of neoliberal, colonial, or other relations to explain and critique environmental change and conflict, but bracket power as if it were apart from the everyday objects which perform operations on the world (cf. Bakker and Bridge, 2006).
One way forward is to double down and ask what it is that makes objects do the work they do. This is to accept both that things work in the world (via the intentions of their designers (Akrich, 1992) or otherwise (Braun, 2014)) and that they are subject to other forces, ones which they may even reproduce. Through their designers and beyond them, objects like code "script" action (Akrich, 1992), potentially endearing themselves to their users relative to those users' expectations, which are in turn shaped by users' embeddedness within overlapping institutional positions and "framings" (Mitchell, 1988; Bingham, 1996; Callon, 1998; Lave, 2014; Porter and Randalls, 2014). The language of endearment and expectations-two sides of the same coin-is one attempt at steering political ecology and geographies of technology through the treacherous straits of things' performativity and contextualization (and nearby realms of technological and social determinisms) (Butler, 2010; Muellerleile, 2013). Such an approach sees objects as practiced (Akrich, 1992; Faraj and Azad, 2011) and irreducible to state, capital, or civil society, but not outside of them. Objects' polyvalence is not their radical indeterminacy but their potential, given certain conditions or features, to do violence to socioecologies, to act neutrally, or to transform the world for the better.
First, I will note how others have sketched out such an approach, before turning to the contexts, effects, and agencies of software in Oregon's ecosystem services market. I base the argument on interviews with market actors in 2012, participant observation of ecological assessment technologies in action, and a recovery of algorithms' histories from user manuals and technical reports. I find that emphasizing what code does not perform in the market provides opportunities to understand limitations in efforts to make ecological processes commodities.
What are code's contexts, effects, and agencies?

Geographies of technology
Geographers of technology researching urban governance (Dodge and Kitchin, 2004; Graham, 2005) and experiences of place (Graham et al., 2013) highlight the contexts, effects, and agencies of computer code. They claim that one effect of code is that it produces space. Kitchin and Dodge (2011) argue that spaces are "ontogenetic": they do not preexist themselves but continually come into being and do so more than ever through code. An airport becomes a space of mobility and commerce through the code that runs the check-in computers, the flight equipment, and so on. In their adopted terminology, coded objects like check-in terminals transduce space, a verb meant to signal the automatic, constant, and often unrecognized operation that makes code agential. Furthering Kitchin and Dodge's analysis, Graham et al. (2013) name "code power" the legitimating effect in which code's automaticity and its behind the scenes work lead its users to take it for granted. Kitchin and Dodge (2011: 255) call on other researchers to explore "archaeologies of algorithms," or the contexts that shape the production, application, and circulation of code programmed for governance. In this vein, Kitchin and Dodge differentiate the agency of code objects between code/spaces and what they call "coded spaces." Code/spaces, like airports, by definition require software programs to operate or they become different kinds of spaces. The algorithms behind the check-in machines exist in order to make airports spaces of mobility. Coded spaces, however, may be saturated with software, but the role of code is at best incidental. A session at an academic conference, they suggest, could still go on and provide the same kind of learning space if PowerPoint crashes. For Kitchin and Dodge, the distinction between the software scripts running PowerPoint at a conference and airport check-in programs is one of centrality.

Performativity
I argue the distinction is more contextual, depending on what users desire and expect of the space and code. The airport is a code/space not simply because certain software is needed to run the check-in system, but because of airlines' desire to digitize ticket printing and even the banal fact that airport security wants passengers to have tickets before they board. As geographers of technology understand space as something that is continuously made, scholars working from actor-network theory and feminist perspectives understand that social relations do not exist beyond their continued performance, while adding that these performances occur within evolving contexts (Butler, 1990; Callon, 2007; Latour, 2005). Performative relations, like those demarcating gender roles, "are made by the various ways and manners in which they are said to exist" (Butler, 1990; Latour, 2005: 34), which is not to dismiss their importance, but to better understand how social reality is repeatedly "assembled" into being rather than projected from some primordial font. In one example, Latour (2005) sketches something like code/space: a computer plays the part of an intermediary to action-working as expected and merely transporting a user's intent-but when it crashes, it becomes a mediator where its specific existence-its processing power and opaque coding-matters. In another example, silk and nylon do not just reflect class differences-their differing physical qualities enable those class differences to be expressed in the first place.
The insight that social relations or even scientific theories gain force only through continuous performance has been developed by economic sociologists (Callon, 1998; MacKenzie, 2006; Svetlova, 2012) and geographers doing social studies of finance (Christophers, 2014; Muellerleile, 2013). Callon has proposed that economists actively intervene in markets through the very same theories they develop to represent markets. MacKenzie (2006) presents a case of so-called strong performativity, where the Black-Scholes model of options prices remakes reality in its own image by prescribing traders' actions. Svetlova (2012), on the other hand, depicts a version of weak performativity. The traders she studied did not find themselves bound to their price-prediction software: "models are manipulated, regularly overruled by humans and used as tools to obtain the results that their users consider to be correct." "In such cases," she writes, models become intermediaries: they "are simply channels used to transmit the financial actors' judgements into numbers" (Svetlova, 2012: 420).
This echoes Akrich's (1992) notion that technologies "script" action for their users, who may choose to modify or resist playing the part designers intended for them. For Svetlova, such decisions are contextual: the traders she studied worked within institutional cultures that allowed them to play with the numbers. Chiding Callon for not grasping the role of institutions in situating the performativity of things, Butler (2010: 152) notes: "[models] function performatively, which means that certain kinds of effects can possibly follow if and only if certain kinds of felicitous conditions are met." Callon (2007: 13) himself describes contexts as "frames" which bound performativity. He jokes that dominant economic theory is mismatched with many real-world contexts, limiting its success there, while: "In the paper world to which it belongs, marginalist analysis thrives…Its diffusion is possible only if the environment that the statement requires is available throughout its circulation and in all the places to which it leads" (Callon, 2007: 26-27). Objects may create their own contexts or worlds-models can endear themselves to their users, providing "felicitous conditions." But even then, the success or failure of economists' models and software is relative to what their users expect of them.

Neoliberal natures
Political ecologists help deepen the definition of context. They are increasingly interested in financial institutions and models as they come to bear on ecosystems, weather, climate, and life itself (Collard and Dempsey, 2013). What they suggest is a fundamental linking of nature with capitalism through a performative "bringing-into-being" (Johnson, 2013: 6) of the environment as quantified, financialized risk. What specifically enables this financialization are calculative devices like the forecasts and models which define securities and inform commodity traders' strategies.
New tools for modeling the environment, however, should not be understood as simply reflections of capital's intent to subsume nature (cf. Smith, 2007). They are not pure expressions of capitalist or state rationality so much as the particular computational ensemble without which nature's financialization would not be possible in the same way. They may mediate neoliberalism without being a product of it, and even if so, without being reducible to it. Neoliberalism describes the contingent outcome of state, capital, and civil society (e.g., conservation science and nonprofits) translating information, objects, and rules among one another. Capital needs the ecological sciences to describe nature, ideally in a legible manner (Robertson, 2012). But this process is rife with misfires and mistranslations, even when deliberate. Jessop (1990) proposes an "evolutionary" perspective on how different state strategies and techniques-among which should be included ecosystem valuation-become hegemonic in any period. They may gain prominence through "strategic" or purposeful advancement by actors, as well as emerging more "relationally" as they are passively "selected for" within existing sets of social relations and go on both to reproduce these and to spawn new formations. (1) A model of ecosystem functioning made by conservation scientists may eventually find itself gaining favor among regulators or entrepreneurs in the financialization of ES, but its baggage is its own distinct effects and conditions. As Callon says, it comes with its own world. This is to explain the political performance of technologies in a nondeterministic way that cautions against both utopian and dystopian readings of them.

Scripting ecosystem services
What happens when we look at actual instances of nature's neoliberalization? In US markets for wetlands and streams, environmental regulatory agencies like the Environmental Protection Agency (EPA), the Army Corps of Engineers (Corps), and, in Oregon, DSL permit public and private entities to compensate restorationists as a way of mitigating resource impacts that the agencies would otherwise prohibit or ignore. In Oregon's market, permittees range from private housing developers to public parks departments. Land managers establish "mitigation banks" of restored ecosystems in order to sell offset credits to permittees. Banks are financial and legal instruments land managers draft and follow as they sell credits from a particular property where they have undertaken ecological restoration. Banks must be approved by the regulatory agencies, which have signed off on around 30 in Oregon since the practice started there in the late 1980s. Bankers-landowners, public agencies, or specialized restoration firms-hire private consultants with scientific training to run ecological assessment questionnaires on restoration sites in order to determine how many ecosystem service credit commodities they will have available to sell. In Oregon, and increasingly elsewhere in the United States as well, specialized banking firms are securing "conservation finance" from pension and other large funds to do speculative, for-profit restoration on a larger scale.
Ecosystem services markets require some basic definition of the commodity to be traded as well as information about the condition or functioning of site-specific services, lest the buyer discover they got something other than what they bargained for (Boyd and Banzhaf, 2007). In Oregon, the commodities which represent the benefits of ecosystem restoration are not measured in conventional units like acres, but in terms of ecological functions like floodplain surface water storage. They are also measured in terms of "value," or as DSL (2011: 8-1) describes it, "the importance or worth of a wetland function to societal needs." A growing number of policymakers and scientists believe that functional assessment can improve upon existing methods by clarifying what processes, like surface water storage, are lost at impact sites and gained over time at compensatory restoration sites (Hruby, 1999; Ruhl and Gregg, 2009; on assessment tools in general, see Porter and Demeritt, 2012; Tadaki and Skinner, 2014). To account for ecological functions and their value, consultants answer questionnaires that take them on site as well as into the office, where they use ArcGIS and other mapping utilities.

How does this site score?
There are three moments in the making of Oregon's ES markets, and code plays a part in each. In the first, consultants conduct a spreadsheet-based assessment called the Oregon Rapid Wetland Assessment Protocol (ORWAP) in order to determine how restoration has improved a site's ecological functions (Figure 1). ORWAP's code allows regulators to "see" functions and values, to standardize the practice of assessment, and to screen bad projects. The algorithms embedded in ORWAP are straightforward in that they don't require scripting outside of prepackaged functions such as IF and MAX and some Visual Basic coding to automate things. A typical question on the wetland assessment might ask its user about the seasonal extent of surface water on the wetland, requiring the consultant to imagine the site as it is during a different time of year, or perhaps motivating them to scan Google Earth imagery. The user selects one of the multiple values offered as an answer to each question. Depending on the consultant's answer, Excel uses the IF and MAX operators to automatically combine it with answers from other questions to produce a final score that characterizes the ecological functioning of a site. Some indicators are only used depending on the type of wetland and the scores of other indicators. If the answer given for the seasonal extent of surface water, for instance, were greater than 50%, then it might combine with scores that measure indicators of salmon habitat, since salmon use flooded wetlands. The overall result is an extensive series of measures of ecological process that reference each other across several sheets. The final score tallies the credit commodities that the bankers for whom consultants work can trade.
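The kind of conditional scoring described above can be sketched in a few lines. The indicator names, thresholds, and weights below are invented for illustration; the actual protocol spreads hundreds of cross-referenced questions across multiple sheets.

```python
# Hypothetical sketch of ORWAP-style scoring logic. Indicator names,
# thresholds, and weights are invented for illustration only.

def score_site(surface_water_pct, distance_to_habitat_m, has_native_cover):
    """Combine indicator answers into a 0-1 function score, mimicking
    Excel's IF and MAX operators."""
    # Mechanistic indicator: seasonal surface water extent, scaled 0-1.
    water_score = min(surface_water_pct / 100.0, 1.0)

    # IF-style branching: salmon indicators only "activate" when the
    # wetland floods enough for salmon to use it.
    if surface_water_pct > 50:
        # Closer habitat connectivity -> higher salmon suitability.
        salmon_score = max(0.0, 1.0 - distance_to_habitat_m / 1000.0)
    else:
        salmon_score = 0.0

    # Logical (present/absent) indicator contributes all or nothing.
    cover_score = 1.0 if has_native_cover else 0.0

    # MAX-style aggregation across redundant indicators.
    return max(water_score, salmon_score, cover_score)

print(score_site(60, 250, False))  # flooded site near similar habitat
```

Even this toy version shows why consultants find it hard to "get under the hood": the branch on surface water extent means tweaking one answer can silently switch which other indicators count toward the final score.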
Excel's IF and MAX functions translate consultants' judgments of ecosystem processes. The code works to enroll its user in the intentions of the program: to mirror the complexity of the wetland itself within cross-referenced spreadsheet cells. One of ORWAP's programmers sees potential to account for even greater ecological nuance, "… to go way beyond Excel and Visual Basic to doing even better algorithms that better capture the complexity of nature and still are transparent." Consultants, however, find themselves frustrated at ORWAP for not being transparent enough, a source of great tension between regulators and consultants. One recounted how he had to run the calculator twice on the same plot of land because the first time, his staff got a wildly unexpected result. Even on the second try, he himself ended up with an unsatisfactory result. For consultants, it is hard to "get under the hood" of ORWAP to determine exactly why they are getting unexpected results. Yet they need the certainty of knowing what ecological outcomes-and hence the number of salable credits-a wetland will deliver. Many questions are linked with one another, and tweaking the answer to just one may not correct the situation. As a programmer noted, this redundancy allows for better verification of the answers. In the attempt to mirror complexity, the idea is that ORWAP's ecological indicators, like surface water extent and suitability for salmon, are cross-referenced not only in the spreadsheet, but on the ground. This reflects the model-building philosophy of one ecologist designing ORWAP. Rather than sketching a "top-down" conceptual diagram of how the literature says functions are related and then finding observable indicators from which to measure them, he goes into the field and thinks about what can actually be observed, and then builds these indicators up to functions.
Some consultants and bankers, however, wish they had had a chance to be more involved in authoring the protocol, even though regulators were wary this would present a conflict of interest. DSL regulators need Excel's abilities to script across sheets and run IF and MAX operators in order to fulfill their mandate of a standardized accounting for wetland ecological functions. Calculating the ecological condition of restored wetlands-the presence or absence of certain indicators rather than their interaction-would not require the tangle of Excel code that is ORWAP (Hruby, 1999). Functional assessment is not new, and DSL tried to develop one in the early 1990s without spreadsheets. The Oregon Freshwater Wetland Assessment Methodology (OFWAM), as that method was called, relied on present/absent questions rather than numeric scoring of individual indicators' performance. Its outputs were qualitative and seen as too general. Dissatisfaction with it led to the push for ORWAP, which combined "logical" present/absent questions and "mechanistic" performance questions scaled from 0 to 1. Hruby (1999) suggests a similar transition occurred nationally at this time from logic to mechanistic models. Spreadsheets were a way of automatically calculating across many questions that now dealt with ecological performance and were quantitative (Hruby et al., 1995). Excel did something specific: it quantified individual assessors' knowledge of wetland functions in a standard format.
Like wetland condition, consultants' best professional judgments do not lend themselves to algorithmization, and that kind of assessment would not require a standard spreadsheet everyone works from anyway. Instead, regulators have aimed at producing the complexity of nature within a spreadsheet. One frustrated consultant does feel that assessments that incorporate more professional judgment may be better. Another suggests that functions aren't even codeable: "getting an Excel spreadsheet to spit out what is an appropriate mitigation project, in the end, you can't do that with function for sure." In spite of such doubts, consultants have not presented a coherent alternative to ORWAP and even they ultimately agree that functions-oriented assessment and crediting is ecologically sounder than previous methods.

Should this site be approved?
ORWAP also serves as what regulators deem a "screening tool." Regulators expect that consultants whose sites score low will probably not bother to propose a bank unlikely to be approved:

Well, it's a good screening tool to at least give someone the comfort of knowing I might have something of worth here to approach the regulatory agencies with, you know? Versus like well I got these 35 acres that are swampy all the time (laughs), but it's surrounded by reed canary grass [an invasive weed] and you're going, dude, right here's probably not going to be the best.

Part of what makes ORWAP useful in this sense is the extensive spatial analysis consultants conduct. The motivation behind mapping is that wetlands' value is considered by regulators and conservationists to be spatial. DSL, EPA, and the Corps are striving to make bankers do restoration work that is well positioned within watersheds. For them, achieving high value restoration projects depends on the opportunities and constraints wetlands face in providing services to society (Ruhl et al., 2008). For example, a wetland in a remote watershed has no opportunity to provide property owners flood mitigation services and so is less valuable.
The first spatial question consultants must answer is whether their restoration site is in what market makers call a priority area. Conservationists have created a "Synthesis Map" which collates five maps from resource agencies and nonprofits like The Nature Conservancy (TNC) of places deemed crucial to protect as habitat. TNC's contribution arose out of the organization's extensive ecoregion mapping program between 2000 and 2004. Staff first decided upon factors influencing the suitability of different habitats for selected species and spatially overlaid them, summing them into a suitability score for each location. Overlaying different social and ecological processes helps decision makers promote restoration that is holistic and, as one regulator put it, "to target what we know historically should or could occur there." To this end, TNC funded the creation of SITES, a GIS-based decision-support software tool built around a variant of the optimization algorithm Marxan. It calculates the most conservation-effective portfolio of sites given their different suitabilities and nearness to existing protected areas (Possingham et al., 2000). Balancing these variables across 8000 sites was an optimization task impossible to solve by hand. Marxan works through "simulated annealing": it begins with a set of sites, randomly adds others, and compares the two sets. The algorithm keeps the lower-cost set and then randomly adds more sites again. After 1-2 million iterations, SITES arrives at a bundle of sites that provides a maximal amount of conservation for the least cost, or amount of land. The program is not "deterministic": each time it is run, it outputs different optimal or near-optimal scenarios.
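The simulated annealing procedure described above can be illustrated with a toy sketch. The site costs, conservation values, target, penalty, and cooling schedule below are all invented and far simpler than Marxan's actual objective function; the point is only the mechanism of randomly perturbing a portfolio, keeping cheaper sets, and occasionally accepting worse ones early on.

```python
import math
import random

# Toy sketch of Marxan-style simulated annealing site selection.
# All figures are invented for illustration.
random.seed(42)
sites = {f"s{i}": {"cost": random.uniform(1, 10),
                   "value": random.uniform(0, 5)} for i in range(20)}
TARGET = 25.0  # total conservation value the portfolio must reach

def objective(portfolio):
    """Cost of the portfolio plus a heavy penalty for missing the target."""
    cost = sum(sites[s]["cost"] for s in portfolio)
    value = sum(sites[s]["value"] for s in portfolio)
    shortfall = max(0.0, TARGET - value)
    return cost + 100.0 * shortfall

current = set(random.sample(list(sites), 5))  # random starting portfolio
temp = 10.0
for _ in range(20000):
    candidate = set(current)
    flip = random.choice(list(sites))          # randomly add or drop a site
    candidate.symmetric_difference_update({flip})
    delta = objective(candidate) - objective(current)
    # Keep better sets; sometimes accept worse ones while the system is "hot".
    if delta < 0 or random.random() < math.exp(-delta / temp):
        current = candidate
    temp *= 0.9995                             # cool the system

print(sorted(current), round(objective(current), 1))
```

As the passage notes, the procedure is not deterministic: with a different random seed, each run can settle on a different near-optimal bundle of sites.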
The final Synthesis Map is used by regulators to represent areas they would like bankers to restore. Prioritization is intended to modify the number of credits bankers receive based on whether or not they are in one of these areas: they receive the full number ORWAP computes only if they are, and fewer if not. Regulators and conservationists would also like to apply a similar calculus to development projects, so that a permittee looking to offset a wetland impact would have to purchase more credits than normal if their project was in a priority area. Instead of the entire map being in play, the number of potentially profitable sites is narrowed, though, as one consultant pointed out, those sites are also made more apparent.
The algorithm behind SITES automatically determines the best locations for restoration, but the context of its performance matters. Marxan is a rule-based approach to site selection that makes the move to modify credit amounts by site selection defensible if not deterministic. Consultants wonder how regulators came up with the priority areas to begin with, and question the utility of simply appending different habitat layers in GIS. One consultant objected, "something's always missing." In fact, although algorithms like Marxan can automatically determine an optimal set of sites to conserve, regulators allow case by case rulings on priority status. A high ORWAP score can qualify sites, as can simply being located just outside of the already defined habitat layer. Code gives regulators some legitimate starting point for how many credits they authorize bankers to sell. They are able to point to the map and say, "GIS analysis has found that these are the priority habitats, so we will give your project in this area this many credits." Part of regulators' move toward incentivizing "high value" restoration is to get bankers to consider offsite ecological conditions, opportunities, and constraints. They ask questions such as whether the restoration site is surrounded by invasive species that will overrun it in the future. Within ORWAP and other ecosystem service calculators, DSL asks bankers to use Google Earth and a variety of web-based mapping utilities to approximate these ecological processes. A typical indicator here might be the distance of a wetland restoration site to other similar habitats. In this, agencies draw on tools such as Oregon Explorer, a web platform that utilizes Esri's ArcGIS Application Programming Interface (API) (Figure 2). An API is a protocol for getting websites and databases to work with one another.
With it, Oregon Explorer collects multiple data sets on endangered species, soils, and hydrological conditions and brings the data together into one visual frame (a "mashup") so that consultants can visualize multiple ecological processes at a site from a bird's eye view. Its "Basemap Switcher" script allows the user to toggle between, say, a hydric soils background map one minute and ground cover the next, which helps them answer ORWAP questions about offsite and future processes, including those that occur beyond the times when consultants can visit the site in person. It also lets them investigate the viability of potential restoration sites. APIs and mashups allow the use of more data in making market commodities, but more importantly they facilitate the hybridization and relating of that data.
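The mashup and basemap-switching logic described above can be caricatured in a few lines. This is a hedged sketch: the layer names, site keys, and in-memory dicts stand in for the remote calls a platform like Oregon Explorer makes through a mapping API such as Esri's ArcGIS services.

```python
# Hypothetical stand-ins for remote thematic layers served through a mapping API.
LAYERS = {
    "hydric_soils": {"site_41": "poorly drained"},
    "ground_cover": {"site_41": "reed canarygrass"},
    "endangered_species": {"site_41": "none recorded"},
}

def mashup(site_id, layer_names):
    """Overlay the requested layers for one site into a single frame (a dict)."""
    return {name: LAYERS[name].get(site_id, "no data") for name in layer_names}

def switch_basemap(view, site_id, new_layer):
    """Mimic a basemap switcher: toggle which layer underlies the current view."""
    view = dict(view)  # leave the prior view untouched
    view["basemap"] = mashup(site_id, [new_layer])[new_layer]
    return view
```

A consultant toggling between hydric soils one minute and ground cover the next corresponds to successive `switch_basemap` calls against the same mashup; the work of relating the layers happens in the shared frame, not in any one data set.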

How many credits can be sold?
Once bankers have done their site assessments and their restoration plan is approved by regulators, they finally receive credits to sell. One of the unique aspects of the Oregon marketplace is that bankers can sell different kinds of credits from the same site. These credits represent ecological processes, and different kinds of credits may represent the same process. For instance, water that is stored in a riverine wetland contributes to both the service of flood mitigation (wetland credits) and salmon habitat (salmon credits). To sell both credits to offset two separate impacts, then, potentially means "overselling" the ecology at the site. There have been two proposals for dealing with the problem. The environmental engineering firm that developed a spreadsheet calculator for salmon habitat suggests that its algorithms can calculate the precise functional interrelationship between two different ecosystem services like flood mitigation and salmon habitat. ORWAP's programmers are less sure, suggesting that even a supercomputer wouldn't be able to watch the assessment's 140 variables interact at once to know the relationship of one to the rest. Market administrators have taken a conservative route: when one credit is sold, all the others are deducted by a proportional amount. Water storage may contribute more to wetland services than salmon services, but because the exact relationship may not be able to be executed in Excel (or, for that matter, determined from the scientific literature), only one credit can be sold. Technically, regulators may be able to code spreadsheets in a way that allows bankers to get exactly as many credits as there are ecological processes. But what has mattered is their expectation and comfort that reducing one credit when another is sold is adequate to balance both ecological concerns and the growth of bankers' portfolios.
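The administrators' conservative rule can be sketched as a small ledger operation. This is a minimal illustration under one assumption about what "proportional" means here: selling some fraction of one credit type draws every other credit balance at the site down by that same fraction. The numbers and names are invented.

```python
# Illustrative sketch of proportional deduction across stacked credits.
# Assumption: "proportional" means the fraction of the sold type's balance.

def sell_credit(ledger, credit_type, amount):
    """Deduct a sale and proportionally debit all other credit types at the site."""
    fraction = amount / ledger[credit_type]
    return {
        kind: (balance - amount) if kind == credit_type else balance * (1 - fraction)
        for kind, balance in ledger.items()
    }

site = {"wetland": 100.0, "salmon": 40.0}
after = sell_credit(site, "wetland", 25.0)
# Selling a quarter of the wetland credits draws the salmon balance down by a
# quarter as well: {"wetland": 75.0, "salmon": 30.0}
```

The rule sidesteps the modeling problem entirely: rather than computing how much water storage actually contributes to each service, it treats every sale as consuming a uniform slice of the site's whole ecology.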

Produces landscapes
The context of code's authorship, deployment, and evaluation shapes what it does and does not do in Oregon's ecosystem services market. Without the algorithms coded in various mapping programs and the spreadsheets living on consultants' laptops and State of Oregon servers, a wetland restoration site is illegible to the market. That wetland may in fact be highly functional and valuable, but it doesn't exist as a site where ecological processes are counted as ecological functions which generate a certain number and kind of credit commodities. This is what many consultants have experienced in their troubled attempts to use ORWAP: it fails to give them the credit determinations they need to justify their project. It's also the lesson of Code/Space: an airport queue turns into an altogether different space when the code running the check-in program fails to meet its promise. Put another way, the airport would not exist as a space making international travel convenient and secure to the extent it does without that code. This is not to suggest that without code and data wetlands would never be restored, but that through them, "specific pieces of matter the world over are produced (that is, their form is changed) according to the abstract laws, needs, forces, and accidents of capitalist society" (Smith, 2008: 87). With ORWAP, regulators remotely screen poorly sited projects, and with code TNC has adapted, the state identifies and points to priority areas. The "landscape signature" (Lave et al., 2013) of algorithmic incentives tends to shift wetland restoration to these places (cf. BenDor and Brozović, 2007), resulting in what Robbins (2001) calls "self-fulfilling landscapes." Without code, and when it fails, potential restoration sites become spaces of unclear ecological outcomes where entrepreneurs' return on investment is unknowable and restoration is uncertain.

Performs nature's value
In producing landscapes, code acts performatively. That wetlands are entities which bear value is something afforded by the scores ORWAP, GIS, and web maps repeatedly output. Regulators and entrepreneurs cite these scores to compare ecosystems and to make tradeoffs (Collard and Dempsey, 2013; Robertson, 2012). Although algorithms' purpose may be to represent the complexity and importance of a wetland, they do not describe an already given nature "out there" but model one in spreadsheets and servers according to a particular set of parameters (Hruby, 1999) and computational capacities, contributing to action by entrepreneurs and regulators to restore somewhere or not. In this, code mediates precisely what is captured by the phrase, the "rolling-out" of market-based environmental governance: the regulatory construction of business-friendly conditions. States may write programs that in the short term constrain capital, perhaps by automating compliance actions for wetland developers (Clayton, 2004), but software is set to intensify firms' production of nature. Regulators and conservationists aim to employ ORWAP and the synthesis map to promote the restoration of higher scoring, higher value sites and to dissuade their degradation. Their thought is to optimize investment, to show where developers can avoid costs and to show entrepreneurs where they can get the biggest bang for their buck. The effect is a landscape of more transparent outcomes, one made to be more productive for capital as new restorationists enter the marketplace and existing ones adjust to regulators' expectations.

Accommodates trust
Code performs nature's value in a particular way. Market planners may purposefully use software to "see" value. But one of code's unintended effects is to put blinders on regulators and conservationists, leading them to seek out more and "better" technology by which to assess environmental change. Code authorizes itself in mimicking the complexity it is supposed to depict: it's a black box, but that is the point. Consultants do cry foul that the models are hard to unpack (that "something's always missing"), and assessment tools represent the main source of tension between consultants and regulators. But because the tools appear able to more authoritatively describe ecological complexity (and, through standardization, describe it consistently), and since everyone accepts that previous assessment methods are broken, the move to new software gains some degree of legitimacy. This is one facet of what Graham et al. (2013: 467) call code's "duplicity," a deception it plays with its opacity and which lends it credence and power, all while it brackets uncaptured data and forecloses other lines of inquiry or action (Wilson, 2011); legal barriers enhance this quality, as with proprietary software and various forms of closed data. The trust that code more adequately and judiciously describes to regulators the functional qualities of destroyed and restored wetlands comes at the expense of decisions that might prevent or minimize resource impacts in the first place (Kiesecker et al., 2009). Similarly, site selection algorithms like Marxan enable their users to meet conservation goals within predetermined constraints instead of challenging those conditions. Worse, Wilson et al. (2009) found it necessary to caution other conservationists about failing to even meet the very conservation priorities they were interested in while deploying site selection tools. As standardized exams enroll teachers to teach to the test (Meehan et al., 2013) or as remote sensors help foresters point to tree cover (Robbins, 2001), so too with better algorithms do market makers point to the spreadsheet.

Code in context
Code makes for a certain kind of state, one that describes itself as a neutral arbiter, aims to split a difference between facts and values, and which tends towards depoliticization. Still, it is one thing to note that objects have particular properties and tend to behave in certain ways and another to show exactly how in a web of other actors these characteristics are significant (Birkenholtz, 2009; Robbins, 2012). Descriptions of nonhuman action often conclude with underwhelming phrases such as, "the chemical powers of the nitrates contributed to the course of events" (Mitchell, 2002: 25), without clarifying the nature of the action or its political implications (Castree, 2002; Kirsch and Mitchell, 2004; Lave, 2015). Political ecologists have done well in offering "chains of explanation" that pinpoint which political economic forces matter when and where in environmental change. Software's specific design leads to outcomes beyond any subjective intention, yet, "an actor is what is made to act by others" (Latour, 2005). Code materializes the objective state only in particular contexts requiring explanation. What made it possible for software to do its work? The context structuring code's deployment in Oregon is one of regional economics, international discourse, and institutional trajectories. Conservationists declare that perhaps the best way policymakers can incorporate ES into decision making is through reliable assessment methods (Bagstad et al., 2013). Over half of all US federal funding on ES is aimed at producing such tools (Cox et al., 2013). The Portland area is home to a boom and bust tech industry ("Silicon Forest" includes HP and Intel offices), meaning that people are often in and out of the mainstream industry and startups, making it not surprising to find the kind of ecologist-turned-coder who helped program an administrative app for the marketplace.
That this person and the developer of ORWAP were contracted with is an indication of the state's turn to contract work and in general to form public-private partnerships (such as with TNC) to draw on environmental expertise in policy matters. These contracts and partnerships may be seen as necessary because the state lacks the time and expertise to code. But these moves are also driven by a set of policy principles called Enlibra, which arose out of the tense conflict over endangered spotted owl habitat in the mid-90s (Leavitt, n.d.). Through Enlibra, western governors sought to rethink environmental management by pushing for more private sector alliances to stymie conflict, by prioritizing "markets before mandates," and by purifying the distinction between facts and values such that states could defensively use "best available science" and leave value decisions to "process." It is within this sort of regime that software is selected for and can do work, even as it performs and remakes such a regime. Objects like code only become discrete, meaningful entities within the social relations they may very well transform, in a process of "constant relating" (Haraway, 2003, in Lave, 2015; cf. Meehan et al., 2013). The objectivity of something is relational: only at times, like when software crashes, does it become "punctualized" into object-ness. And what counts as a crash or failure is contextual, depending on framing and expectations. Only within the frame where restoration is something appropriate and necessary to script do tablets running Excel and office computers loaded with GIS become coded objects in the first place, as effects of everyday state practice and expectations (Mitchell, 1988; Callon, 1998; Lave, 2014).
It is to see the ecological functions that international discourse and state statute alike say should be the aim of governance that has led agencies and conservationists to expect so much from ORWAP and to pursue improving it, to the point that its mundane successes and failures at automatically calculating functions reverberate throughout the market. Sometimes, though, software is only an intermediary. DSL and other regulators regularly dismiss code as a means of governance because of what they expect it can and should perform. Algorithms like Marxan allow an automatic discovery of least cost sets of priority conservation areas, but market makers do not hold code to this since they want to be able to recognize and incentivize other restoration projects. In credit stacking, code may actually be able to capture the complexity of multiple ecological relationships, but DSL, EPA, and the Corps do not think those interactions should be scripted for the market yet. The move to not script the sale of multiple credits from the same site was a deliberate choice on the part of regulators, one buttressed by claims of the potential computational expense.
A relational approach to code means illustrating its productive, performative, and legitimizing effects in explicit relation to its authors' and users' expectations of it. What does this get us? Often, objects don't reproduce structure, or they perform errantly. Under some of the conditions noted above code may at the very least do no harm if not enable progress, instead of perpetuating violence against the complexities of socio-ecological communities (Akrich, 1992; Robbins and Moore, 2015). Many Enlibra principles are textbook neoliberal, while others may work against code's duplicity. The Enlibra emphasis on "objective data gathering" by way of code clearly reinforces it. But what affords regulators the ability and desire to shun tools? The distance from headquarters felt by staffers in federal agencies' regional offices provides them room to experiment. And the fact that some of the assessment tools were developed beyond agency walls may mean regulators are less beholden to them. This signals that some of the same conditions that facilitate the move to code may undermine it. It matters that in making nature legible capital and state must articulate with science and conservationist "flanking mechanisms." Assessment tools may mediate the capitalization of nature, but they are irreducible to any single logic and should be seen as arising within governance regimes favorable to them. In general, this means approaching the politics of technologies carefully: without undue awe, but also without undue pessimism about their effects.
Not indeterminate, objects are polyvalent. If we see them as generative of power in a foundational way, outside of practice, we may miss opportunities for leverage in a "politics of measure" that questions the determination of things' very measurability (Robertson and Wainwright, 2013). Regulators' discomfort with calculating stacked credits might be translated into limiting further marketization of the environment. If we marvel at the automaticity and remoteness of new technologies as they are deployed around the world to assess environmental change, but miss how they are authored, how land managers use them, and how any given algorithm is evaluated, we miss seeing much of what is at stake.

Why follow code?
Code's work and failures in ES markets raise questions about materiality: its own and nature's. What if anything makes code's operation in environmental governance different from its operation in spatial control? Returning to PDX, it would be tempting to highlight how, unlike the airport, wetlands operate on a self-organizing, immanent logic of their own. Flora and fauna reproduce, while terminals do not; code functions only tangentially to the inner workings of market wetlands, which will get along with or without it. Such a move, however, privileges nature with a capacity for resistance that may not bear out. And while wetlands will surely regularly disrupt their scripting, neither is space a blank slate awaiting transduction. The bigger point is that the subsumption of wetlands as well as airports and anything else to technics may find resistance not from interior capacities but from the messy politics of technology itself.
Political ecologists should engage with software, not for the interesting ontological implications nature/society "cyborgs" may hold (Wilson, 2009), but as a starting point in understanding new modes of environmental knowledge, change, and conflict (Wilson, 2011). Likewise, geographers of technology have valuably illustrated how code produces space and everyday life, but code's environmental work demands attention as conservationists proclaim that technology itself can "save the planet." Through political ecology's emphasis on webs of explanation (Rocheleau and Roth, 2007), geographers of technology may also find one approach to the contextualization of code Kitchin and Dodge (2011) call for.
What is needed is a parallax view of technology: code should be seen both as subject to political economic forces, and as one in its own right. Likewise, the point is to watch the assembling of agency while asking why something is being assembled in the first place and by whom. Restorationists working in exurban Portland wetlands are land managers whose range of political economic constraints and opportunities in designing new wetland and stream habitats is mediated by software scripts. Written and implemented by environmental regulators, ecologists, and entrepreneurs, these algorithms are in every way subject to demands, desires, and limits: the stuff of politics. With code, states are producing the objective metrics and naturalizing the technical interventions that epitomize the current "post-political condition": what Swyngedouw (2009: 602) calls the "reduction of the political to the policing of environmental change" and what others warn might lead to a "dictatorship of data" (Mayer-Schönberger and Cukier, 2013). Code tends to neutralize the environment as a realm of proper contest, serving as a Trojan horse by which nature becomes subject to further financialization and state discipline. As such, code as practiced must also be a window onto confronting these.