
This article was downloaded by: [b-on: Biblioteca do conhecimento online UNL] On: 14 December 2011, At: 11:50

Publisher: Routledge. Informa Ltd, Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Journal of Environmental Planning and Management


Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/cjep20

Why is integrating policy assessment so hard? A comparative analysis of the institutional capacities and constraints
John Turnpenny, Måns Nilsson, Duncan Russel, Andrew Jordan, Julia Hertin & Björn Nykvist

a. School of Environmental Sciences, University of East Anglia, Norwich, UK
b. Stockholm Environment Institute, Stockholm, Sweden
c. Environmental Policy Research Centre, Department of Political and Social Sciences, Freie Universität Berlin, Berlin, Germany
d. Stockholm Environment Institute, and Stockholm University, Department of Systems Ecology, Stockholm, Sweden

Available online: 20 Nov 2008

To cite this article: John Turnpenny, Måns Nilsson, Duncan Russel, Andrew Jordan, Julia Hertin & Björn Nykvist (2008): Why is integrating policy assessment so hard? A comparative analysis of the institutional capacities and constraints, Journal of Environmental Planning and Management, 51:6, 759-775

To link to this article: http://dx.doi.org/10.1080/09640560802423541



Journal of Environmental Planning and Management, Vol. 51, No. 6, November 2008, 759-775

Why is integrating policy assessment so hard? A comparative analysis of the institutional capacities and constraints
John Turnpenny (a)*, Måns Nilsson (b), Duncan Russel (a), Andrew Jordan (a), Julia Hertin (c) and Björn Nykvist (d)

(a) School of Environmental Sciences, University of East Anglia, Norwich, UK; (b) Stockholm Environment Institute, Stockholm, Sweden; (c) Environmental Policy Research Centre, Department of Political and Social Sciences, Freie Universität Berlin, Berlin, Germany; (d) Stockholm Environment Institute, and Stockholm University, Department of Systems Ecology, Stockholm, Sweden

(Received October 2007; final version received February 2008)

Widely advocated as a means to make policy making more integrated, policy assessment remains weakly integrated in practice. But explanations for this shortfall, such as lack of staff training and resources, ignore more fundamental institutional factors. This paper identifies institutional capacities supporting and constraining attempts to make policy assessment more integrated. A comparative empirical analysis of functionally equivalent assessment systems in four European jurisdictions finds that there are wide-ranging institutional constraints upon integration. These include international policy commitments, the perception that assessment should support rather than determine policy, organisational traditions, and the sectorisation of policy making. This paper concludes by exploring the potential for altering these institutions to make policy assessment more integrated.

Keywords: policy assessment; integration; sustainable development; institutional analysis; environmental policy; evidence-based policy

1. Introduction

Recent years have seen rapidly rising interest in how to make the policy-making process more responsive to the challenges of global change, such as climate change, energy security and food production. One of the main responses has been to employ new forms of policy assessment, in order to make 'better' policy and deliver, particularly, on either cross-cutting policy goals such as sustainable development and/or improving regulation (e.g. Cabinet Office 1999, CEC 2002a, Russel and Jordan 2007). Such interest has been mirrored within the academic literature on assessment, with the concept of 'integration' forming a new focal point of research.

Integration in assessment is often taken to mean linking the three 'pillars' of sustainability (economic, social and environmental) (e.g. Lee 2002, see also Kidd and Fischer 2007). But integrated assessments do not necessarily have more sustainable policy as their primary goal; other pertinent issues such as employment, minimising legislative costs or cost effectiveness are also regarded as important tasks for assessment to tackle (Lee 2002, Kidd and Fischer 2007). Some parts of the assessment literature (e.g. Scrase and Sheate 2002) have therefore included other dimensions of integration such as: enhancing transparency and participation, integrating different levels

*Corresponding author. Email: j.turnpenny@uea.ac.uk


ISSN 0964-0568 print/ISSN 1360-0559 online
© 2008 University of Newcastle upon Tyne
DOI: 10.1080/09640560802423541
http://www.informaworld.com


(e.g. national and local), integrating assessment results into governance, and integrating across policy sectors. Some of these additional dimensions are explicitly mentioned in guidelines issued to bureaucrats and civil servants who are meant to undertake assessments (e.g. Cabinet Office 2003, CEC 2005a).

But integration, however defined, is proving difficult to achieve in practice. First, integration across economic, social and environmental aspects appears enormously complicated. Research reveals that assessments are often highly focused on the economic, or at least the more easily quantifiable, impacts of a policy proposal, and neglect the social and environmental aspects which are often not monetisable or easily comparable either with each other or with other types of impact (e.g. Wilkinson 2004, EEAC 2006, Renda 2006, NAO 2006). Second, the same research often reveals that the integration of a wider range of stakeholders' perspectives into assessment remains rather limited. Stakeholder involvement is often restricted to providing input on the choice between a limited set of options, rather than radically redirecting policy. In spite of political promises that assessment will deliver greater objectivity and transparency, assessment processes appear instead to 'buffer' decisions from public scrutiny (Rayner 2003, p. 167). Third, the integration of assessment knowledge into policy making is also proving to be highly complicated; a modern manifestation of the problem of 'little effect'. That is to say, the knowledge produced by assessments is often little used in policy making (e.g. Owens 2005), and when used, it is often to bolster political positions and justify decisions already taken (e.g. Bulmer 1980, Fischer 1995, Russel and Jordan 2007).
In fact, policy formulation is often constrained well before the decision-making process has formally started, for example, by pre-existing political initiatives and policies, by administrative procedures, and by international and European Union (EU) legal frameworks and commitments (Hertin et al. 2008, Russel and Jordan 2007).

How are we to make sense of this gap between aspiration and practice? A popular model of policy making draws on the rational model of assessment (for details see Hill, 2005, p. 145 et seq.). This suggests that the ability to deliver more integration in assessment processes is contingent on the availability of resources, training, quality control and, above all, political commitment at high levels (e.g. Wilkinson et al. 2004, Lee 2006). This is very much a 'deficit model', in the sense that something tangible is somehow missing, which rather implies that providing these missing components will necessarily lead to improved integration. Nutley et al. (2002) outline explanations for lack of use of research in policy making, but focus on the micro-level institutional arrangements for connecting research to policy, such as the effect of establishing expert groups, and encouraging better collation and communication of research results. While important, micro-level arrangements are only part of the picture. One of the major debates within the social sciences should encourage us to explore the relative roles of both agency (or individual decisions), and structure (or institutional-level variables) in influencing outcomes (Peters 1998). Calls for more resources and training are very agency-centred, and may underestimate the importance of the less tangible structural constraints to, and capacities for, carrying out more integrated assessment. The rise of the 'New Institutionalism' (e.g. March and Olsen 1989) in the social sciences has heavily influenced thinking about these more structural aspects, which are known to shape policy processes, outputs and outcomes, and also policy assessment (e.g. Russel 2005). Here, the concept of institutions extends beyond a discussion of the state or the law, and relates to the multiple and complex causal relationships between agency and structure, issues driving and affected by social change in different ways in different contexts, and at different levels from macro to micro (Kiser and Ostrom 1982, Hall and Taylor 1996, Clemens and Cook


1999). Following March and Olsen (1989, p. 160), we define institutions as 'collections of interrelated rules and routines that define appropriate actions in terms of relations between roles and situations'. Crucially, according to this view, institutions shape, but also emerge from, everyday assessment practices. While mapping the institutional landscape at the different levels at which decisions are taken is seemingly a key step towards understanding the operation of policy assessment practices, there is a surprising paucity of research in this field.

This paper has several broad aims: to consolidate some of the various literatures on the concept of integration; to analyse comparatively the practice of policy assessment systems; to relate this analysis to public policy theories; and to make recommendations. More specifically, it seeks to investigate the degree and types of integration occurring in policy assessment in practice, and identifies the different institutional capacities for, and constraints to, the various dimensions of integration. Specifically, we examine empirically the attempts made by four jurisdictions (namely the European Union (EU), United Kingdom (UK), Germany and Sweden) to implement integrated policy assessment. We have chosen these jurisdictions because, although all have adopted some form of policy assessment and created a central unit to oversee it (Radaelli and De Francesco 2007), they each have different types of assessment as their preferred systems. Some explicitly attempt to achieve several dimensions of integration, and some do not. We focus on policy assessment systems rather than specific sectors. Within these systems, the empirical study was of a sample of individual policy assessments, some 37 cases in total covering a wide range of policy sectors and types of policy intervention, such as regulations and strategies.
Exploration of each case was based first on document analysis of available written assessments, assessment guidance and other relevant policy documents. Second, a rich in-depth contextualisation emerged from 43 semi-structured interviews with officials responsible for each case, and 19 interviews with non-case-specific actors such as the authors of assessment guidance, those involved in supporting and/or promoting assessment, NGOs, regulatory bodies and politicians.

The remainder of the paper develops as follows: in the next section we unpack the concept of integration through an analytical framework employing different dimensions of integration and different levels at which policy decisions are taken in the different jurisdictions. Following that, we present the major empirical findings of our analysis. In the following section we summarise the results, and then use them to draw out some of the similarities and differences between the four jurisdictions. Finally, we draw conclusions on the potential for more integrated assessment.

2. The integration of policy assessment

2.1. The different dimensions of integration

Integration is a multi-dimensional concept. In theory, it encompasses topics as varied as integration of policy (social, economic, environmental), enhancing institutions for management and crossing sectoral barriers, vertical coordination between tiers of government, integrating many stakeholder perspectives and conflicting interests, managing knowledge and handling complexity and diversity of science (interdisciplinarity), institutional change, and setting out clear overarching principles and political goals. One major strand of discussion has centred on the concept of policy coherence: enhancing the flexibility of policy systems to cope with cross-cutting issues through the integration or 'joining up' of policy making (OECD 1996, 2002, Cabinet Office 1999, Hood 2005, Eales


et al. 2005, Jordan and Halpin 2006). Indeed, the Organisation for Economic Co-operation and Development (OECD) (1996) identified a gap between coherence and the capacity to achieve it, called for more information and analysis, and set out 'tools of coherence': practical lessons, claimed, based on experience in OECD countries, to assist with enhancing policy coherence, itself one dimension of integration. The European Commission (CEC 2002b) has also produced a set of core principles to aid the integration of evidence into policy making. Integrated assessment has been defined as 'a process of assessing the performance of options or proposals in terms of their economic, social and environmental implications' (Eales et al. 2005, p. 114). But Scrase and Sheate (2002) also set out 14 different dimensions of integration aimed particularly at environmental assessment, and the work of Weaver and Rotmans (2006) defines the characteristics of what they term 'Integrated Sustainability Assessment'. In other words, there are many different approaches to, and characterisations of, integrated assessment, with dimensions that derive from different sources (normative or empirical) and foci (different specific sectors and levels from policy to project) mixed loosely together. However, there are some common threads. Below is a synthesis of the main dimensions of integration, together with the key questions which should be examined when assessing the degree of integration of a particular policy assessment system:

- Paradigm: What types of paradigms and policy goals frame the policy assessment? This dimension relates to the overarching principles (such as sustainable development, or economic growth) which guide the framing of problems and potential solutions. It also relates to the capacity to enhance integration across sectors. High levels of integration are indicated by the capacity to consider more than a narrow sector frame of the policy assessment (for example, considering agriculture and water together).
- Scope: What impacts are considered in policy assessment? High levels of integration in this dimension would be manifest in a capacity to consider a broad range of impacts (such as on social exclusion and environmental damage) within the policy assessment.
- Goals: To what extent are policy objectives defined at the outset of the policy assessment? High levels of integration here would be seen in the capacity to successfully engage at an early stage in decision making.
- Process: At what stage(s) and through which process is the policy assessment done? This dimension relates to the process by which assessment activity is integrated with policy making. A strong capacity to work in parallel or integrated with policy making would be indicative of high integration.
- Stakeholders: What stakeholders have been involved, how and when? High levels of integration would be indicated by the capacity to engage with multiple stakeholders, address conflicts, identify inconsistencies, and integrate a variety of stakeholder perspectives into assessment.
- Trade-offs: How does policy assessment conceptualise and treat trade-offs? The capacity to be systematic and explicit about identifying trade-offs between various parts of the assessment, such as economic and environmental aspects, would be characteristic of high integration in this dimension.
- Learning: Does policy assessment involve learning? If so, what kind is it and who learns what? The capacity of the system to learn, or to accommodate and absorb new information, in the short and longer term, would be characteristic of high integration in this dimension.


- Evidence: What type of evidence is used, why and how? The capacity to acquire, process and integrate different types of evidence, from simple reasoning through to data and more comprehensive research, including various assessment tools such as scenario analysis, would be characteristic of high integration in this dimension. The level of integration of evidence into the assessment process would also be potentially high.

Having set out what we mean by integration, we now set out an analytical framework for considering different institutional settings within which these dimensions of integration may occur.

2.2. Analysing assessment in its institutional setting

In line with the New Institutionalism, we take institutions to include organisational structures and interactions, routines and procedures, norms and conventions of behaviour, habits and belief systems, as well as the formal apparatus of the state (John 1998). This paper is specifically concerned with public policy-making institutions, including those which govern economic markets or the management of natural resources (Ostrom 1990). Institutions are viewed as relatively stable and often difficult to change (Steinmo et al. 1992, Allison and Zelikow 1999), and hence they are claimed to be 'an independent factor affecting political behaviour' (John 1998, p. 58). They do change, although behaviour in new situations is nevertheless related to existing norms and routines.

What does this mean for development of integration within policy assessment? We are concerned in particular with institutional aspects relating to the uptake of assessment in the policy-making process. First, it is important to understand how procedures such as policy assessment, themselves emerging institutions, relate to existing institutional rules. These rules may be considered partly stable and partly changeable, through conscious design as well as through gradual evolution. But the drive for more integration, as an act of linking together rather than a following of accepted practices, is thus likely to encounter institutional opportunities and constraints. We approach this challenge by exploring (through the dimensions of integration set out above) the professional capacities and institutional arrangements which affect: the ability of a political system to engage with integrative assessment activities; the take-up of policy-analytic knowledge generally; and how participant organisations coordinate their processes vertically and horizontally (Peters 1998, Jordan and Schout 2006). These are functions of the formal and informal properties of the administrative system itself as well as its relationship to society and actors around it (Hanf and Underdal 1998).

Such empirical evaluations can easily become immensely complex, so we have made some simplifications by examining assessment activity at different institutional levels. The concept of levels has been used extensively in the literature; for example, Kiser and Ostrom (1982) propose three levels at which decisions are taken (operational, collective choice and constitutional levels), which identify successively wider engagement with institutions, and the institutions that govern institutions. We define our levels of analysis thus:

The micro level is concerned with the individuals involved with producing the assessments, their behaviour and the constraints which bear upon them. In particular, the authors:

- Assess the resources (time, money, staff) available to the assessment process, with a focus on the human resources (levels and types of expertise, training, background and skills of policy officials and the suppliers and users of the assessment).



The meso level is concerned with the organisational level, namely organisational procedures and management structures, systems of knowledge transfer, norms and incentive structures. In particular, we:

- Assess the organisational norms and culture in terms of attitudes towards sustainability issues and towards the broader role of knowledge in policy making;
- Assess the formal and informal decision rules that guide decision making, including incentive and reward structures as well as rules that guide the treatment and use of knowledge in policy making;
- Assess coordination procedures for preparing the policy, or, more specifically, the use of knowledge/data/evidence in the coordination processes within and between organisations for reaching decisions and follow-up, such as institutional memory, databases, and communication channels for external and internal consultations;
- Assess political leadership, that is, the commitment and vision of appointed leaders and their lines of command and organisational motivation.

Finally, the macro level is concerned with the wider context, including linkages with broader values, norms and societal goals, and connections with the larger policy network of stakeholders. In particular, we:

- Assess the network of stakeholders concerned with the process, their interests, goals, concerns and strategies, taking into account both formal and de facto relationships, and the role of knowledge/data/evidence in the strategies of stakeholders aiming to influence the decision-making process;
- Assess the administrative/legal context of the assessment process, including policy objectives set by governments as well as formal restrictions, law, regulation and procedures in relation to using knowledge/data/evidence.

The macro/constitutional/higher-order level is sometimes seen as providing the overarching societal structure within which decisions at other levels are taken. However, in reality this is very much an empirical question. Using our framework situates micro-centred aspects in the context of deeper structures without any prior assumption about causality. We explore the dimensions of integration using the three-level framework for every case, in each jurisdiction, to identify and characterise the main factors affecting the level of integration in the assessment. The following section presents a synthetic interpretation of these results to yield general lessons about institutional capacities and constraints to more integrated assessment, using examples from the four jurisdictions.

Inter-jurisdictional differences in styles of policy assessment need to be borne in mind when discussing institutional capacities, since they make comparison more difficult. With this caveat in mind, what generic institutional capacity problems emerge across jurisdictions which constrain assessment processes? We use the levels concept to structure the synthesis, while at the same time noting the links and overlaps between levels, and indicating the appearance of the dimensions of integration across different levels.

3. Institutional capacities and constraints in practice

3.1. Characteristics of assessment systems by jurisdiction

The EU's integrated impact assessment (IA) procedure was formally initiated in 2003 and applies to all policies in the European Commission's Annual Work Programme. Its aim is


to assess environmental, social and economic impacts of policies early on in the policy process. Impact Assessment replaced, and aims to integrate, all sector-based assessments, following a Communication on Impact Assessment (CEC 2002a). The declared aim is to 'improve the quality and coherence of the policy development process' (p. 2). In practice, this includes sustainable development, 'better regulation' and 'better governance' goals, the opening up of decision making with more public participation, and the use of different policy instruments (CEC 2001).

In the UK the main national policy assessment system is a form of integrated Regulatory Impact Assessment (RIA). The system reflects the importance of the UK's 'better regulation' and 'modernising government' agenda, specifically minimising regulatory impacts on business and the voluntary sector (Russel and Jordan 2007). However, since 2004 RIA has, among other things, included sustainable development impacts in line with commitments made in the UK's Modernising Government White Paper (Cabinet Office 1999). This UK system is therefore the one that has most in common with the EU policy assessment system. However, there is no requirement in the UK to draw on EU IAs, even though approximately half of UK legislation relates to the implementation of EU policy.

In Germany, the main formal assessment system is assessment of the effects of law (Gesetzesfolgenabschätzung) as set out in the Joint Rules of Procedure of the Federal Ministries (BMI 2000). This framing is a procedure concerned with a rather narrow assessment of legal, administrative and budgetary aspects of proposed new laws. On paper, it defines fairly pragmatic assessment requirements similar to those in the UK RIA assessment system. However, in practice assessment is a narrow, often formalistic analysis of direct economic and administrative costs, which is typically carried out after the lead ministry has defined its position. But in addition, there are other significant assessment-type activities at federal level which are in many ways the exact opposite of RIA: informal, not transparent to outsiders and not following a set procedure, but they can be extensive and are often supported by analytical methodologies and specifically commissioned research. These activities occur in relation to all types of policy instruments, including strategy documents and 'soft' policies. They are largely initiated and managed by the responsible policy unit (which is also in charge of the formal assessment) and usually considered as the actual assessment process.

In Sweden, assessments take place within temporary Committees of Inquiry which propose and prepare policy, and report to the sectoral ministries. These committees are only semi-connected to the government and can be composed of parliamentarians and/or agency officials and experts, and often bring in actors from outside the government, such as industrial organisations, academia, labour unions and non-governmental organisations. Committees also rely on agencies as well as external expertise to provide assessments and research to the committee.

3.2. Micro-level constraints

3.2.1. The background of officials (includes discussion of the Scope and Evidence dimensions)

The background of government staff constitutes an important micro-level constraint on the use of policy assessments, and can have a particularly marked effect on the scope of the impacts covered. Thus, a legal training predisposes policy officials to focus on whether the proposed interventions are within the realms of the constitution and whether they are legally possible to implement. Staff can struggle to meet a very broad set of expert


requirements for the assessment; for example, one official said 'cost-benefit analysis sounds long and terribly complicated'.1 The end product tends to be in line with the educational background of policy officials and their designated experts (whether external or internal).

3.2.2. Support networks (includes discussion of the Process and Learning dimensions)

Both policy officials' professional experiences and established social networks within and outside government are key factors for effective uptake of the assessment in the policy process. This includes rather less tangible factors such as: 'Policy officers' personalities . . . influence their approach.'2 The length of time spent as an official is hence important in establishing the range and depth of social networks. In cases when in-house expertise was used to carry out, or closely liaise with, assessment work, learning was on balance enhanced, and the assessment better supported the decision making.

3.2.3. Knowledge, time and money (includes discussion of the Process, Stakeholders and Evidence dimensions)

Resources are a constraint to a varying extent. Many policy officials complained about lack of time and money because of the urgency and short interval available for policy officials to give their input (see macro level below). Respondents particularly in the EU and UK report increased time pressure to deliver policy proposals, and that there is less time for analysis, reflection or strategic thinking. A typical opinion expressed was: 'I had no time for training . . . the [guidance] website has links to other websites but I don't really have time to look at it.'3

There are several consequences stemming from these constraints. Data are often available only through stakeholders, which can introduce bias to the assessment process if familiar support networks are relied upon for evidence. There is also a clear limit to the uptake of new assessment methods and tools that could have made for a more integrated analysis of results. But this does not mean that knowledge is not used or appreciated. There can be a high level of experience and expertise in the policy units in, for example, German federal ministries, enabling them to assimilate and critically evaluate external knowledge. In Sweden, Committees often function as knowledge providers, reviewing existing knowledge and turning this into policy-relevant information. In Germany, the demand for knowledge (both formal analysis and more ad hoc information and evaluation from external experts, companies, regions or other stakeholders) is clear in the often extensive non-mandatory informal assessment-type activities.


3.3. Meso-level constraints

3.3.1. Legal and organisational traditions (includes discussion of the Paradigm, Scope and Evidence dimensions)

Organisational traditions constitute major barriers to the effective use of policy assessments. In Germany and Sweden a legalistic tradition within ministry decision making encourages legal conformity, whereas the quasi-scientific inquiry and economic approach involved in impact assessment and evaluation are not highly valued functions, and typically reside with the more junior staff of the ministries. In Sweden, this is partly sidelined through the engagement of expert committees; the governmental staff plays a smaller role in the assessment side of policy preparations. Another generic issue relates to the organisational set-up of ministries and (in the EU) Directorates-General (DGs). All


jurisdictions have relatively far-reaching inter-departmental coordination procedures, although these are actually used in very different ways (e.g. far more extensively in Sweden than in the UK). There are still strong silo mentalities where each department or ministry tends to focus on its own interests and strategic objectives: "What's environment got to do with us? Isn't that [the environment ministry's] responsibility?"4

3.3.2. Quality controls (includes discussion of the Process dimension)


Specifications for assessment procedures are in general relatively weak. However, at the EU level, and to some extent in the UK, there appears to be more attention to, or concern for, assessment quality. The EU has commissioned several research projects and external consultancies to evaluate and bolster policy assessment practice. In 2006, an Impact Assessment Board consisting of Director-level officials was established to scrutinise impact assessments, provide advice and issue opinions on the quality of the IA work (CEC 2006). In the UK, Better Regulation Units in each ministry, Cabinet Office guidance and high-level quality control mechanisms (such as the Panel for Regulatory Accountability chaired by the Prime Minister) try to ensure that RIA procedures are followed correctly. However, these mechanisms do not generally analyse the quality of the substance of the assessments, especially in non-economic areas. In Sweden and Germany, it is notable that this step is missing altogether, with the exception of occasional research reports and ad hoc evaluations.

3.3.3. Different perspectives on assessment (includes discussion of the Process and Learning dimensions)

There is a strong tendency across all jurisdictions to view assessment procedures as largely irrelevant formalities. They are often viewed as something to be undertaken near the end of the policy process; an imposition rather than a helpful aid to decision making. As one of the interviewees put it: "RIA is currently an end-of-pipe activity. The process should start early . . . but this is not the reality. Instead, something is decided, the regulation is drafted and afterwards an assessment is done."5 In the EU, certain types of activity, such as Action Plans, or certain areas of competence, such as legal issues, are not seen as appropriate for a common-format assessment. This results in officials questioning the assessment procedure.
In Sweden, this also appears to be a function of the legalistic tradition, whereby the gathering and processing of systemic knowledge about impacts and consequences is seen as a junior undertaking with little connection to "real" decision making (Eckerberg et al. 2007). However, decision-relevant forms of assessment can be created because the assessment is an integral part of the committee procedure. This leads to a more professionalised (including learning among participants) and uncontained assessment, but with the disadvantage of being disconnected from those preparing (policy officials) and making (state secretaries and ministers) the decisions in the ministries.

3.3.4. Politics and analysis (includes discussion of the Process, Goals, Trade-Offs, Learning and Evidence dimensions)

Across all jurisdictions, respondents noted that decision making is a political process and that this is sometimes informed by, but not necessarily strongly related to, analytical activities. One remark heard many times was: "ultimately, decisions are made politically and these depend on rationalities that differ from technical rationality."6 This is perceived to be a constraint to assessment use. In the EU, IAs are introduced late in the overall


policy process, and hence have limited potential to contribute to the formation of (political) visions. Although IA aspires to be ex ante and hence early in the process, policies do not simply appear with no history behind them (e.g. see Dery 1998). In effect, they are triggered by, and are hence inextricably linked to, earlier policies, initiatives and actions by EU institutions and international commitments.

There is a strong link to officials' attitudes to formal tools and analysis. In the EU, the guidance on the use of assessment methods aims to encourage less detailed and time-consuming forms of assessment. Thus, the use of advanced tools (e.g. Integrated Assessment Models) is downplayed, while simple tools like Causal Models and Impact Matrices are described in detail in the IA Guidance (CEC 2005a). The 2005 guidelines emphasise identifying the most important impacts using simple tools, then analysing these selected ones in more detail. There is a significant capacity for detailed studies and consultation in some cases, but in many cases, the limits of assessments are determined by whether the analysis is deemed "proportionate", and it is not often clear how such a decision was reached. There is also an emphasis on quantification/monetisation. However, a major constraint appears since there is often no way of assessing causal effects of policy, especially in the social sphere.

In the UK, RIAs tend to focus on fairly narrow implementation options once the policy direction has already been formulated, and hence have limited impact on strategic policy direction. Often the content was not overly analytical, and the RIAs were geared towards delivering the preferred outcome of ministers rather than conducting thorough analysis. In Sweden, there is still no clear guidance on tools or even on the approach to impact assessment. Analytical activities are not seen to belong in policy making but are rather disconnected.
In Germany, there is a strong scepticism towards formal analysis methods in general and economic tools (cost-benefit analysis and economic modelling) in particular. Policy officials often consider their expert-based evaluations (drawing on professional experience, rules of thumb, advice from colleagues and external specialists, etc.) as superior to formal analysis using complex methods: "Of course we also look at specific models . . . and say what rubbish is this? This and this and this is wrong. It would have to be done completely differently and our instinct is that if it is done differently, the results should be different."7 These factors have contributed to a situation where considering formal assessment as, literally, a formality or as an irrelevant obligation is reinforced by organisational norms and routines. Indeed, key political actors do not appear to have an interest in more formalised and transparent assessment practices. Leaders often see formal assessment as restricting their discretion as ministers or parliamentarians. Meanwhile, policy officials in ministries (e.g. heads of unit or department) tend to see assessment as a possible hindrance to their efforts to push a proposal through the legislative process. For the ministry in charge of the policy, a lack of transparency means more freedom to internally acknowledge potential negative effects (or uncertainties about effectiveness) without the risk that this information is instrumentalised by actors opposing the policy: "Formal methods take away flexibility and make it harder to avoid the cliffs that emerge from the political process . . . they also make it more difficult to disguise and deceive and this is undesirable in difficult cases."8


3.4. Macro-level constraints

3.4.1. Priorities and paradigms (includes discussion of the Paradigm and Scope dimensions)

Our analysis reveals the dominance, particularly in the EU and UK cases, of an economic growth paradigm. This constrains in particular the coverage of issues in assessments,



which focus principally on administrative burden and economic costs rather than environmental impacts. Recent EU guidelines, for example (CEC 2005a), have sought to tie IA much more strongly to the Barroso Commission's main priorities, namely Better Regulation and Growth and Jobs (CEC 2005b), in spite of a continuing commitment to the Sustainable Development Strategy (CEC 2005a, p. 5). In Germany, the paradigm was less pronounced, but recent political efforts at deregulation, coupled with political priorities to address low economic growth, unemployment and crises of public finances, have still resulted in a lower level of attention to social and environmental impact analysis. In Sweden, the overarching policy paradigm is not a major constraint as it is in the EU and UK. Instead, each ministry may set the overarching parameters and priorities; to some extent these can deviate from more overarching central priorities.

3.4.2. EU policy and other high-level constraints (includes discussion of the Paradigm, Scope, Process and Trade-Offs dimensions)

A significant proportion of policy processes at national levels relate to the implementation of EU policies (and constraints from international agreements), resulting in a recurrent pattern of macro-level constraints across the national jurisdictions. The shift of agenda-setting power to the EU level is marked (e.g. Kriesi et al. 2006), and frequently, policy assessment processes become legal exercises because many strategic parameters are already set through EU directives. In Germany, even in the case of upstream informal assessment activities (as opposed to formal assessment of the effects of law), our case studies show that learning processes are severely constrained by the policy context, for example, through the existing legislation, the EU policy framework, international obligations and party political commitments.
In Sweden, constraints on the role of assessment are enforced both formally through the written instructions to the Committee and informally through the continuous contacts between the secretariat of the Committee and the relevant ministry: "there is also a continuous dialogue with the client, during the work, verifying, so that the most important is included."9 The ministry has a certain agenda that permeates the Committee's efforts at knowledge assimilation. The tightness of connection between the committee and the ministry therefore constitutes one of the most interesting and decisive variables when analysed in terms of the formal and informal instructions communicated to the Committee. In the UK, similar constraints appear from Ministerial decisions which limit the scope of the assessment. Paradoxically, in spite of the clear strategic steer and limitation of the assessment's scope, within specific assessments there is a lack of explicit conceptualisation of what the key issues are or should be. Within the boundaries above, the policy official is largely responsible for highlighting the particular issues s/he thinks most important, leading to very partial or even partisan analyses.

3.4.3. The patterns of consultation (includes discussion of the Stakeholders dimension)

In the EU, the same short list of organisations comes to represent the public interest at the European level. In the EU and UK, good public participation is difficult to achieve, both because of low public awareness of where to find live consultations, and because the "usual suspects" are much better resourced and more influential, with established channels of communication with senior policy makers. One official said: "our public internet consultation gained a very small number of responses: people don't seem to care about the issues! But of course, it's more [about] who knows where the consultation is."10 This acts as a barrier to considering other frames and broader sets of impacts and issues, and a



barrier to learning and to engaging with multiple stakeholders. However, in the UK, the requirement to publish stakeholder submissions should in theory enable scrutiny of the policy-making process. Sweden has a consultative approach that varies over the preparation phases. It is directed to receive input from organised stakeholder groups and rarely addresses the public as such. In the early phases of the assessment, expert and interest group consultations and hearings are the norm, while in later stages the process becomes more closed. However, after publication of the assessment, the process opens up again for a structured system of input from a wide variety of stakeholders, before the ministry finalises the bill. In Germany, the lack of transparency shelters the procedure from public scrutiny, with negative impact on the quality of analysis. Informal consultation with key stakeholders takes place outside the context of the procedure. A policy analyst observed: "The process is not satisfying, even if as an association we can't complain about a lack of impact . . . we write our position papers and meet the responsible officer in the ministry to discuss the issue . . . there is always somebody who sends it [the draft law] to us, whether or not they are officially classified as confidential. It is a half-open process, but only for key players. This makes it more difficult to make the process more fact-based."11

4. Discussion

Evidently there are differences in policy assessment-related institutional systems across the four jurisdictions, principally in their respective level of integration ambition. The UK's RIA system is principally aimed at reducing the regulatory burden on business, but also considers sustainable development impacts; the EU's aims to include both sustainable development and better regulation goals; Germany's rather narrow assessment system focuses on the effects of law; and Sweden's committees of inquiry remain semi-connected to the government.
Through defining multiple dimensions of integration, and a three-level institutional model, this paper has sought to reveal a fuller picture of what enables and constrains integration of assessment in practice. This final section synthesises the main conclusions of the analysis, relates these to the models of evidence in policy making introduced earlier, and identifies possible lessons for the design and practice of more integrated forms of assessment. Our analysis has shown that barriers to different dimensions of integration appear at micro, meso and macro levels, and in all four jurisdictions. But the analysis has also demonstrated that policy assessment systems do not appear to be particularly integrated in or across any of the dimensions. This broad conclusion hides a rather richer set of underlying conclusions. The concept of integration is not monolithic: one dimension may be more developed than others, and integration in practice in one dimension does not necessarily lead to integration in others. For example, in the EU's IA system, a wide range of actors often engage with policy assessment activities, but this does not mean that environmental, social and economic considerations will be integrated too. Indeed, in some cases, this may be less likely with a wider range of participants (e.g. Kidd and Fischer 2007), due, for example, to a wider spread of conflicting views and a resort to lowest common denominator policy outputs. Paradoxically, this spread of conflicting views was particularly pronounced in the EU environmental policy cases. In fact, across the jurisdictions, integration is particularly weak in the consideration of non-economic impacts in assessment, and the integration of assessment itself into policy making. This is


partly due to differing integration ambitions in the different jurisdictions. But even where the desire to integrate is stronger (e.g. EU and UK), integration in practice remains weak. Our empirical work has not directly tested the hypothesis that more integrated policy assessment leads to more coherent policy outputs. This would require a more longitudinal empirical study. However, we may tentatively conclude that since the integration of assessment into (or its influence on) the policy process is often rather limited and narrow, and the silo culture of policy making is still a significant constraint to integrated policy assessment, the role of assessment in joining up governance is likely to be small. Indeed, incoherence may be a fundamental and potentially ineradicable feature of policy making where there are so many competing priorities (Jordan and Halpin 2006). In spite of the differences in political history and institutional arrangements between jurisdictions, many of the constraints to the execution and use of integrated policy assessments are common to a majority, or all, of them. A common explanation tendered to account for weakness in integration practice relates to micro-scale institutional barriers such as lack of resources or training available to policy officials. Evidence of such micro-level constraints appeared in all jurisdictions (particularly in Germany and the EU). These include the effects of the disciplinary and professional backgrounds of the policy officials, as well as the extent of officials' personal networks, support systems and their seniority within the policy spheres. These factors have a fundamental impact on both the substance of the assessment and its potential effectiveness in influencing, or at least being taken up in, political deliberations. The importance of micro-level constraints is in accordance with the literature on the deficit model, but, as per the initial premise, this explanation gives only a partial account.
At the meso level, a significant constraint across all four jurisdictions is the perceived role of analysis as a support for, rather than a determinant of, policy. This also shapes the attitude towards using formal and advanced tools, as well as the timing and strategic importance of assessment. Other meso-level constraints that limit the use of formal tools are the perceived superiority of expert judgement, a widespread unfamiliarity with the tools themselves, and scepticism about their ability to handle value-based judgements. Linked to this, organisational traditions, such as a strong legalistic tradition, constitute barriers to integrated policy assessments. Meso-level constraints also appear in institutional interactions and coordination activities. In spite of relatively far-reaching inter-ministry coordination procedures in the four jurisdictions, there are also silo cultures where each ministry focuses on its own interests and strategic objectives, at most engaging with other sectors to avoid future major goal conflicts between political programmes. At the macro level, critical constraints include previous policy commitments encoded in EU directives and international commitments. These frame the assessment of new policy proposals. Our expectation was that the deficit explanations for lack of integration in policy assessment have underlying institutional drivers. Our study confirms that there are indeed institutional constraints to integrated assessment procedures operating at micro, meso and macro levels, as well as in relation to the eight dimensions of integration, in each jurisdiction. These rules, values and belief systems constrain attempts to integrate. There are also complex and overlapping relationships between these levels (Hall 1993, Fischer 1995).
For example, the operation of a policy assessment system at the meso level is often influenced by macro-level stakeholder input, but the system's operation also acts to shape the type, range and influence of stakeholders involved in the first place. Crucially, micro-level constraints such as availability of time and resources often have their roots in meso- and macro-level institutions. For example, German attempts to strengthen quality control



(or in other jurisdictions, to address other micro-level issues) have remained half-heartedly implemented. Lack of problem definition in assessments is a macro-level issue, but this accentuates constraints at meso and micro levels, such that organisational cultures and available expertise, rather than a substantive overall vision, determine the data used and the knowledge generated. We have seen one of the fundamental difficulties associated with trying to improve the use of assessment by addressing micro-level issues such as training without also addressing higher-level ones. Resources are not made available, for example, because the assessment is not seen as "proportionate". The deficit view of assessment is rather simplistic in its neglect of the factors that push policy systems to operate in a manner more akin to the idea of incrementalism famously described by Lindblom (e.g. Lindblom 1979). Our analysis also raises fundamental questions about the role of "rational" assessment systems in the altogether messy world of policy making. While the emerging institutionalisation of policy assessment has put a much needed spotlight on political decision making, allowed public scrutiny of policy proposals and introduced some bounded forms of rationality into the decision-making process, one may question whether calls for more integrated assessment ignore the basic fact that policy making tends to be accretive, incremental and ad hoc. The assumption that the use of analysis in policy making is an objective activity is also disputable, given claims that evidence is itself a social construct (Sanderson 2002) which can be used by powerful actors to pursue their interests (Flyvbjerg 1998, Owens 2005). Maybe the real challenge is not addressing the deficit at the micro level, but recognising and accepting assessment for what it is: an inherently political exercise.
Scrase and Sheate (2002) warn of the risks attached to presenting essentially political questions as reconcilable by technical-rational methods. Our evidence suggests that relying on coercion (i.e. issuing political decrees, publishing ever more prescriptive forms of guidance and creating more intensive quality control systems) is not the most promising way to make policy assessment more integrated. The literature on (integrated) Sustainability Assessment (e.g. Gibson et al. 2005, Pope 2006, Weaver and Rotmans 2006) is replete with calls for assessment to support more integrated (sustainable) policy making. These include methods for integrating environmental, social and economic considerations, integrating assessment continuously throughout the policy process, and clarifying rules for making trade-offs in policy decisions. We suggest that using these methods in practice will require a surmounting of the institutional barriers, which in turn will require a rethinking of the institutional settings, purposes and methods of assessment activity, and above all, long-term engagement in the process by analysts and policy makers. Research that seeks to understand these institutional barriers by building on the public policy literature which explores potential limits to integration and policy coherence (e.g. Dery 1998, Bogdanor 2005, Jordan and Halpin 2006) is a necessary first step towards developing such methods.


Acknowledgements
This paper was written with the financial support of the MATISSE (Methods and Tools for Integrated Sustainability Assessment) project, financed under the European Commission's Sixth Framework Programme. Duncan Russel was generously funded by the UK Economic & Social Research Council ESRC (PTA-026-27-1094). The authors gratefully acknowledge the contribution of colleagues within the MATISSE consortium, especially Paul Weaver. They would also like to thank Anneke von Raggamby and Ingmar von Homeyer of Ecologic for their work on five of the EU cases as part of the European Commission's Sustainability A-Test project.

Notes


1. Interview with UK Civil Servant A.
2. Interview with EU Strategic Officer B.
3. Interview with UK Civil Servant C.
4. Interview with UK Civil Servant G.
5. Interview with German Environment Ministry official.
6. Interview with German Environment Ministry official.
7. Interview with German Environment Ministry official.
8. Interview with German Agricultural Ministry official.
9. Interview with Swedish Committee official B1.
10. Interview with EU Policy Officer J.
11. Interview with farmers' association.



References
Allison, G. and Zelikow, P., 1999. Essence of decision: explaining the Cuban missile crisis. New York: Addison Wesley Longman.
BMI, 2000. Moderner Staat - moderne Verwaltung: Gemeinsame Geschäftsordnung der Bundesministerien. Berlin: German Federal Ministry of the Interior/Bundesministerium des Innern.
Bogdanor, V., ed., 2005. Joined-up government. Oxford: Oxford University Press.
Bulmer, M., ed., 1980. Social research and royal commissions. London: George Allen & Unwin.
Cabinet Office, 1999. Modernising government (Cmnd. 4310). London: HMSO.
Cabinet Office, 2003. Regulatory impact assessment guidance. London: HMSO.
Clemens, E.S. and Cook, J.M., 1999. Politics and institutionalism: explaining durability and change. Annual review of sociology, 25, 441-466.
Commission of the European Communities (CEC), 2001. European governance: a White Paper. Brussels, 25 July 2001. COM(2001) 428 final.
Commission of the European Communities (CEC), 2002a. Communication from the Commission on impact assessment. Brussels, 5 June 2002. COM(2002) 276 final.
Commission of the European Communities (CEC), 2002b. On the collection and use of expertise by the Commission: principles and guidelines. Improving the knowledge base for better policies. Brussels. COM(2002) 713 final.
Commission of the European Communities (CEC), 2005a. Impact assessment guidelines. Brussels, 15 June 2005. SEC(2005) 791.
Commission of the European Communities (CEC), 2005b. Better regulation for growth and jobs in the European Union. Brussels, 16 March 2005. COM(2005) 97 final.
Commission of the European Communities (CEC), 2006. The Impact Assessment Board (IAB) mandate. SEC(2006) 1457/3.
Dery, D., 1998. Policy by the way: when policy is incidental to making other policies. Journal of public policy, 18 (2), 163-176.
Eales, R., et al., 2005. Emerging approaches to integrated appraisal in the UK. Impact assessment and policy appraisal, 23 (2), 113-123.
Eckerberg, K., et al., 2007. Institutional analysis of energy and agriculture. In: M. Nilsson and K. Eckerberg, eds. Environmental policy integration in practice. London: Earthscan, 111-136.
European Environment and Sustainable Development Advisory Councils (EEAC), 2006. Impact assessment of European Commission policies: achievements and prospects. Statement of the EEAC Working Group on Governance, April 2006.
Fischer, F., 1995. Evaluating public policy. Wadsworth.
Flyvbjerg, B., 1998. Rationality and power: democracy in practice. Chicago: University of Chicago Press.
Gibson, R.B., et al., 2005. Sustainability assessment: criteria, processes and applications. London: Earthscan.
Hall, P.A., 1993. Policy paradigms, social learning and the state. Comparative politics, 25 (3), 275-296.
Hall, P.A. and Taylor, R.C.R., 1996. Political science and the three new institutionalisms. Political studies, XLIV, 936-957.


Hanf, K. and Underdal, A., 1998. Domesticating international commitments: linking national and international decision-making. In: A. Underdal, ed. The politics of international environmental management. Dordrecht: Kluwer Academic Publishers.
Hertin, J., et al., 2008. Rationalising the policy mess? Ex ante assessment and the utilisation of knowledge in the policy process. Environment and planning A (in press).
Hill, M., 2005. The public policy process. 4th edn. Harmondsworth: Longman.
Hood, C., 2005. The idea of joined-up government: a historical perspective. In: V. Bogdanor, ed. Joined-up government. Oxford: Oxford University Press.
John, P., 1998. Analysing public policy. London: Pinter.
Jordan, G. and Halpin, D., 2006. The political costs of policy coherence: constructing a rural policy for Scotland. Journal of public policy, 26 (1), 21-41.
Jordan, A. and Schout, A., 2006. The coordination of the European Union: exploring the capacities for networked governance. Oxford: Oxford University Press.
Kidd, S. and Fischer, T.B., 2007. Towards sustainability: is integrated appraisal a step in the right direction? Environment and planning C: government and policy, 25, 233-249.
Kiser, L. and Ostrom, E., 1982. The three worlds of action: a metatheoretical synthesis of institutional approaches. In: E. Ostrom, ed. Strategies of political inquiry. Beverly Hills: Sage, 179-222.
Kriesi, H., Adam, S., and Jochum, M., 2006. Comparative analysis of policy networks in Western Europe. Journal of European public policy, 13 (3), 341-361.
Lee, N., 2002. Integrated approaches to impact assessment: substance or make-believe? In: Environmental assessment yearbook 2002. Manchester: EIA Centre, University of Manchester.
Lee, N., 2006. Bridging the gap between theory and practice in integrated assessment. Environmental impact assessment review, 26, 57-78.
Lindblom, C.E., 1979. Still muddling, not yet through. Public administration review, 39 (6), 517-526.
March, J.G. and Olsen, J.P., 1989. Rediscovering institutions: the organizational basis of politics. New York: The Free Press/Macmillan.
National Audit Office (NAO), 2006. Regulatory impact assessments and sustainable development. London: NAO.
Nutley, S., Walter, I., and Bland, N., 2002. The institutional arrangements for connecting evidence and policy: the case of drug misuse. Public policy and administration, 17 (3), 76-94.
Organisation for Economic Co-operation and Development (OECD), 1996. Performance auditing and the modernisation of government. Paris: OECD.
Organisation for Economic Co-operation and Development (OECD), 2002. Governance for sustainable development: five OECD case studies. Paris: OECD.
Ostrom, E., 1990. Governing the commons: the evolution of institutions for collective action. New York: Cambridge University Press.
Owens, S., 2005. Making a difference? Some perspectives on environmental research and policy. Transactions of the Institute of British Geographers, NS 30, 287-292.
Peters, B.G., 1998. Comparative politics: theory and methods. London: Macmillan Press.
Pope, J., 2006. Editorial: what's so special about sustainability assessment? Journal of environmental assessment policy and management, 8 (3), v-x.
Radaelli, C.M. and De Francesco, F., 2007. Regulatory quality in Europe: concepts, measures and policy processes. Manchester: Manchester University Press.
Rayner, S., 2003. Democracy in the age of assessment: reflections on the roles of expertise and democracy in public-sector decision-making. Science and public policy, 30 (3), 163-170.
Renda, A., 2006. Impact assessment in the EU: the state of the art and the art of the state. Brussels: Centre for European Policy Studies.
Russel, D.J., 2005. Environmental policy appraisal in the UK central government: a political analysis. Thesis (PhD). University of East Anglia, Norwich, UK.
Russel, D. and Jordan, A., 2007. Gearing up governance for sustainable development: patterns of policy appraisal in United Kingdom central government. Journal of environmental planning and management, 50 (1), 1-21.
Sanderson, I., 2002. Making sense of 'what works': evidence-based policy making as instrumental rationality. Public policy and administration, 17 (3), 61-75.
Scrase, J.I. and Sheate, W.R., 2002. Integration and integrated approaches to assessment: what do they mean for the environment? Journal of environmental policy and planning, 4, 275-294.



Steinmo, S., Thelen, K., and Longstreth, F., eds., 1992. Structuring politics: historical institutionalism in comparative analysis. Cambridge: Cambridge University Press.
Weaver, P.M. and Rotmans, J., 2006. Integrated sustainability assessment: what is it, why do it, and how? International journal of innovation and sustainable development, 1 (4), 284-303.
Wilkinson, D., 2004. The magic bullet? Is ex ante impact assessment the key to advancing environmental policy integration? Environmental policy integration paper 4. European Environment Agency, December.
Wilkinson, D., et al., 2004. Sustainable development in the European Commission's integrated impact assessments for 2003. IEEP final report, April 2004. London: Institute for European Environmental Policy.

