Collaborative Policymaking Using Human-centered AI. Part One.



Are We Trying to Solve Problems by Guessing?

There’s no shortage of challenging or even existential problems facing modern human societies. As a fully global and highly technological species, we need to understand and manage myriad domains at multiple scales. Some of our most pressing concerns revolve around wealth distribution, social (in)equity, racism, conflict, economic growth, the biosphere, the atmosphere, poverty, crime, security, surveillance capitalism and health. This list is by no means exhaustive. Underneath these large banners lie tens of thousands of individual policies, programs, projects and initiatives aimed at bringing about desirable outcomes. Intergovernmental entities (e.g. U.N., OECD), development banks, governments (national, regional, local), non-governmental organizations, foundations, charities, and many private-sector entities focus the majority of their time and resources on finding effective and efficient ways to implement interventions that achieve desired results such as elevating quality of life, promoting fairness, and alleviating unnecessary suffering.

How do organizations actually go about influencing the problems that lie within their domains? The rational hope is that they use evidence to determine what works and what doesn’t. The basic idea here, which I think appeals to most people, is that it’s better to undertake something complex using the best information you can find, rather than to rely on what you think should or might work based on intuition. The scientific method, anchored in observation and evidence, is the most powerful ‘device’ we’ve ever crafted. It has dramatically transformed the world in an astonishingly short period of time. It would seem self-evident that the design, implementation and evaluation of billions of dollars’ worth of policies, programs and other interventions should benefit from the application of scientific thinking. I’m not suggesting that the scientific/rationalist approach should guide the debate about which outcomes are the most desirable; these are societal and cultural debates that rest on collective values, not science, and as such they should be part of normal democratic deliberation among citizens. I am pointing out, however, that once the compass has been set, that is, once specific goals have been selected, it makes sense to bring our best tools to bear to ensure that limited resources are allocated in ways that maximize the likelihood of success.

Is this happening in practice? Are most organizations designing their programs and policies using a rigorous approach based on evidence, and testing them scientifically to see if they work? It may come as a shock (or not) to learn that no, most organizations are not doing this. This may be surprising to the average person, who may have at least some faith that their tax dollars, donations, or investments are being spent on interventions that are built on at least a modicum of sound reasoning. Nevertheless, even a superficial familiarity with how governments and other large organizations function will reveal that very little effort is put into deliberate design, let alone rigorous, scientific design. I believe that many organizations are, more often than not, guessing about what works and how to get better results.

I’m not alone in making such a declaration. David Halpern is a UK scientist and founder of the Behavioural Insights Team (BIT), an organization that generates and applies behavioural insights to inform policy, and to influence public thinking and decision making. The unit was originally set up within the UK Cabinet Office to apply nudge theory within British government. Halpern was interviewed in 2018 by Apolitical, an online publication that’s been described as a social network for public servants. In an article entitled Evidence-based policymaking: is there room for science in politics? Halpern is quoted as saying the following: “The dirty secret of almost all government is that we don’t know whether we’re doing the right thing. Whether what we’re doing is helping the patient or not helping the patient — or even killing him. People are often running blind, overconfident, and dispensing massive sums of money.”

This is clearly not ideal. Why should this be the state of affairs? If officials, executives and managers who are responsible for leading, and for setting strategic direction in the pursuit of their organizations’ articulated outcomes, are genuinely interested in successfully achieving those outcomes, which there is no reason to doubt, then why would they neglect to use the best and most powerful tools ostensibly at their disposal?

Hopeful Signs

While I believe that more rigorous policymaking is a long way from penetrating deeply into organizational thinking, the ideas around using evidence to formulate policy have nevertheless steadily taken root over the last 20 years. Over that time, a lot of good work has been done in this area.

For example, the UK, which is relatively advanced in the synthesis of evidence-informed policy, currently has 9 independent What Works Centres whose purpose is to use evidence to improve the design and delivery of public services (GOV.UK, 2020).

The European Commission (the executive branch of the European Union) has in place a major program aimed at using evidence called Knowledge4Policy (K4P). Its website states: “The EU Commission’s Knowledge4Policy (K4P) platform supports evidence-based policymaking: bridging the world of policymakers — who ideally develop public policy based on sound scientific evidence — and the scientists who develop that evidence in the first place” (EU Commission, 2020).

A good example of a deliberate effort to support democratic, evidence-based policymaking (in this case with a special focus on using evidence derived from people’s knowledge and lived experiences) can be found within the Organization for Economic Cooperation and Development (OECD). The OECD’s Innovative Citizen Participation Network centers on supporting countries in the implementation of Provision 9 of the OECD Recommendation of the Council on Open Government (2017), which focuses on exploring innovative ways to effectively engage with stakeholders to source ideas, co-create solutions, and seize opportunities provided by digital government tools. It focuses on new research in the area of innovative citizen participation practices to analyse the new forms of deliberative, collaborative, and participatory decision making that are evolving across the globe (OECD, 2020).

Other examples include commitments made by the Canadian government to devote a fixed percentage of program funding to experimenting with new approaches to program and policy design and delivery. Complementing this is a Canadian federal program called Experimentation Works which encourages public servants to incorporate experimentation into their skills and practice. Evidence-based mechanisms and philosophies are integrated into policymaking frameworks in Australia, New Zealand, Estonia, Norway, Sweden, Finland, France, Germany, Austria, Switzerland and Belgium. In the United States, the Foundations for Evidence-Based Policymaking Act of 2018 was signed into law in January 2019. The law incorporates many of the recommendations of the U.S. Commission on Evidence-Based Policymaking (2017) to improve the use of evidence to inform policies and programs in the US government. The new law requires all US agencies to develop evidence-based policy and evaluation plans as part of regular business (US EPA, 2020). Other large organizations that pay serious attention to evidence, and increasingly to collaborative styles of governance, include the funds, programs and specialized agencies of the United Nations, multilateral development banks (e.g., The World Bank Group), and many large foundations such as the W. K. Kellogg Foundation Trust.

Additional strong evidence of the rapid growth of interest in finding ways to foster evidence-informed decision-making and collaborative policymaking can be found in a recent review carried out by Olejniczak et al. (2020). They inventoried “policy labs” worldwide with the intent of better defining their purposes and functions. They found no fewer than 146 policy labs across 6 continents, with the most in Europe (65), followed by North America (44). As you might expect, the researchers found substantial variation in terms of what these labs do. Nevertheless, the research found, broadly, that policy labs are shared spaces for collaboration, knowledge production and implementation, that they often have inter-sectoral bridging capacity, that they promote government effectiveness and cultural shifts, and that they support policy design processes. The fact that labs and think tanks like this are proliferating so robustly is, I believe, a strong indication that ideas around more rational policymaking are multiplying in depth and number.

These examples (and there are many others across a wide range of jurisdictions and types of organizations) clearly show that interest in using evidence and engaging in collaborative policymaking is on the rise. Despite this, I would argue that evidence-based practice and greater citizen participation in policymaking are still, by far, the exception rather than the rule. While a more rational approach is gaining ground, more rigorous practice is still largely unfamiliar, and under-recognized in terms of value, to the majority of relevant users, be they politicians, officials, executives across the organizational spectrum, policy proponents, people carrying out work on the ground, those involved in evaluating effectiveness, or concerned citizens and interest groups.

A Barrier?

I suspect one of the chief reasons that a rational, scientific approach to policymaking has been slow to penetrate more deeply into collective strategic thinking, despite apparently wide recognition of its potential, is associated with the perception that gathering, analyzing and testing evidence in a truly rigorous way is both hard and expensive. Moreover, there seems to be a widespread notion that scientific policymaking necessarily involves utilizing an experimental approach known as a randomized controlled trial (RCT).

An RCT is a scientific experiment whose design is specifically tailored to reduce bias when testing the effectiveness of an intervention. The idea involves administering an experimental program to two or more recipient groups. The experimental group receives the intervention (the treatment) and the control group (or groups) may receive a placebo or no intervention. Then the groups are monitored for responses and results to gauge the effects of the intervention. Conducting this type of experiment supports a more objective determination of whether the changes in outcomes that are measured/observed can be attributed to the intervention, providing experimenters (and funders/investors) with stronger evidence about an intervention’s effectiveness.
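The core logic of an RCT can be illustrated with a small simulation. The sketch below is purely illustrative: the outcome measure, the true effect size, the group sizes, and the function names are all invented for this example. It randomly assigns participants to treatment or control, then uses a permutation test to ask how often a difference as large as the observed one would arise by chance alone.

```python
import random
import statistics

random.seed(42)

def simulate_rct(n_per_group=200, true_effect=0.5):
    """Simulate a two-arm trial: participants are randomly assigned
    to treatment or control, and an outcome score is observed."""
    participants = list(range(2 * n_per_group))
    random.shuffle(participants)                      # the randomization step
    treatment_ids = set(participants[:n_per_group])

    treatment, control = [], []
    for pid in participants:
        baseline = random.gauss(0.0, 1.0)             # natural variation
        if pid in treatment_ids:
            treatment.append(baseline + true_effect)  # intervention shifts outcome
        else:
            control.append(baseline)
    return treatment, control

def permutation_p_value(treatment, control, n_permutations=5000):
    """Estimate how often a group difference this large arises by chance."""
    observed = statistics.mean(treatment) - statistics.mean(control)
    pooled = treatment + control
    n_t = len(treatment)
    count = 0
    for _ in range(n_permutations):
        random.shuffle(pooled)                        # break any real link to groups
        diff = statistics.mean(pooled[:n_t]) - statistics.mean(pooled[n_t:])
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_permutations

treatment, control = simulate_rct()
effect, p = permutation_p_value(treatment, control)
print(f"estimated effect: {effect:.2f}, p = {p:.4f}")
```

Because assignment is random, the estimated effect is attributable to the intervention rather than to pre-existing differences between the groups; a small p-value indicates the difference is unlikely to be a fluke of sampling.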

Unquestionably, RCTs are powerful instruments for probing whether something works. These designs are used frequently in medicine, but are also used in other domains, including in policy and program development. The problem is that setting up and carrying out RCTs in public policy and social science settings is actually fairly difficult (and expensive). Such experiments must be carefully designed and executed. This implies that the organization doing the experiment must possess the understanding and expertise required to set up the experiment properly, execute it, and interpret the results. These experiments also take time to conduct. Assuming the appetite, resources and expertise are available to engage in an RCT, an organization must be willing to actually implement a policy or program separately to two or more groups, and for a long enough period of time to test the intervention thoroughly. This means that the organization must be able to wait for results before making decisions on the final form of the intervention and about whether to roll it out more widely. Organizations can and do leverage RCTs. But such tools are not easy to use and take time, and thus understandably pose significant barriers to organizations that operate in fast-paced and highly pressurized funding and management environments.

In part two of this article, I will explain how it is possible to generate strong evidence about whether interventions work, and how they work, using concepts and technologies that are much faster and easier to employ than RCTs. RCTs may have been one of the few options for generating really strong evidence until now, but this has changed with the advent of increased computing power, modern data science and communication infrastructure. By taking advantage of these modern tools and technologies, PolicySpark is able to open doors that have, until now, been closed.


In addition to the perception that doing good science in the program/policy space necessarily involves doing RCTs (and the associated perception that this is hard), another important factor comes into play as organizations grapple with finding more innovative ways to solve complex problems: the evolution of ideas around styles of governance. It’s instructive to examine how governance and policymaking have changed over time, because doing so helps us understand where policymaking is headed, and it informs a discussion of the kinds of tools that may be appropriate to support a real expansion of innovation in policy design, synthesis and evaluation.

The philosophy and style of democratic public governance has shifted significantly over the last century. There have been three clearly distinguishable stages in this evolution.

The first wave of public administrative style dominated public policymaking from the late nineteenth century until roughly the 1980s. It was characterized by rules and guidelines, centralized institutional bureaucracy, hierarchy, the hegemony of the professional, and a focus on regulatory mechanisms, legislation and administrative procedures. This period is often referred to as the era of classical public administration (Katsamunska, 2012).

The second wave of public administrative style, which might be thought of as a reformation, commenced in the early 1980s, carried through the 90s, and is still present in some quarters. This second wave reform has been dubbed the New Public Management (NPM). This approach emphasized efficiency and the merits of running public sector organizations more like businesses, emulating private sector management models. Hallmarks of the NPM include decentralized service delivery, more autonomy for individual agencies, a focus on market dynamics, contracting out, value-for-money, financial control, a strong emphasis on target-setting and measuring efficiency, significant decision-making power within the hands of relatively few senior executives, audits, benchmarks and performance evaluations (see for example Dunleavy & Hood, 2009).

As I alluded to above, NPM is still prevalent in some contexts. In reality, what we find in practice is often a mixture of administrative styles, where demarcations among defining elements of different approaches exist along a continuum. But, on average, a new third wave of public governance reform has been emerging over the last 15 years. In a highly cited and influential article, Osborne (2006) called this approach the New Public Governance (NPG). At its core, NPG evolves away from the market-orientation of NPM, toward pluralist foundations, where multiple inter-dependent actors contribute to policy synthesis and the delivery of public services, and where multiple processes inform the policymaking system. NPG focuses on inter-organizational relationships based on trust and relational capital, as well as service effectiveness and outcomes (Osborne, 2006). The NPG embodies a new era of collaborative policymaking, and its philosophies, principles and methods are now slowly but surely penetrating government and non-government organizations alike.

Many consider this movement toward collaborative policymaking to be a logical, necessary response to the erosion of democratic processes and the increasing fragmentation of interests, and a step toward better governance in societies where stakeholder perspective and voice are rapidly gaining influence. NPM, which is now seen by many as outdated and flawed, may be viewed as a transitional phase between classical public administration and collaborative, participatory modes of governance that put stakeholders at the center of policymaking. Importantly, NPG supports participatory inclusion in public governance as a driver of democratization (Warren, 2009; Torfing & Triantafillou, 2013).


So far, I’ve described a world in which ideas around evidence-informed management and collaborative policymaking are advancing. These shifts are not surprising given the proliferation of diverse stakeholder interests and perspectives in a world that is becoming ever more complex. But while the signals of these thought transformations may be clear to anyone perusing the academic literature, real-world examples of policies, programs and interventions that have benefited from deep evidence, and/or the explicit and rigorous input of a full range of stakeholders, remain much harder to find.

No doubt there are a number of important factors impeding more widespread adoption of evidence as a pillar of governance, and still other factors slowing the embrace of full and representative participatory engagement. All such factors will come into play, and must be addressed, as we move toward approaches and systems that are incrementally more effective. My focus is on one factor in particular that I think is acting as a primary bottleneck to more robust uptake of the use of evidence and of greater stakeholder participation in policymaking systems. In the early stages of development, ideas often require practical substrates upon which they can take hold and flourish. Solutions that may hold great promise conceptually sometimes struggle to gain acceptance until methodologies and tools are developed that permit accessible implementation of the solutions in ways that are intuitive, easy to grasp and relatively convenient and inexpensive. I believe this is the case for rational, collaborative policymaking. The ideas are there, and there is uptake to a degree, but because the ideas themselves are relatively new, there exists a critical shortage of pivotal tools required for rapid, meaningful advancement.

Emerging Tools

PolicySpark’s purpose is to develop, apply and propagate these missing tools, and to seed communities of practice, so that governments, organizations and citizens find themselves better able to solve problems together. With deep background and experience in science, technology and management consulting, we are uniquely equipped to envision and engineer the solutions that have, until now, been out of reach.

In part two of this article, I outline, in detail, how we’ve developed the tools required to strengthen both the collaborative and the rational aspects of policymaking. I describe an approach that brings stakeholders together around technology that is intuitive to use and easy to understand. Our approach creates the environment necessary to ensure that stakeholders are listened to and are a meaningful part of decision-making processes. This builds the trust, relational capital and institutional credibility required to generate the buy-in that is essential for success. I further describe a toolset that supports the rapid collection, analysis, interpretation and testing of stakeholder evidence, along with other types of evidence, and uses “extended intelligence” (a type of artificial intelligence that keeps humans in charge) to transform complexity into clear results that identify which policy levers are most effective in producing desired outcomes. Moreover, I explain how these tools provide decision-makers with clear, visual, intuitive and actionable strategic information and predictive models that can be applied directly to support program design, strategic review, implementation, ongoing management, performance measurement, and evaluation/assessment.


Dunleavy, P., & Hood, C. (2009). From old public administration to new public management. Public Money & Management.

EU Commission. (2020). Knowledge for Policy.

GOV.UK. (2020). What Works Network.

Katsamunska, P. (2012). Classical and Modern Approaches to Public Administration. Economic Alternatives, 1, 74–81.

OECD. (2020). Innovative Citizen Participation.

Olejniczak, K., Borkowska-Waszak, S., Domaradzka-Widła, A., & Park, Y. (2020). Policy labs: The next frontier of policy design and evaluation? Policy & Politics, 48(1), 89–110.

Osborne, S. P. (2006). The New Public Governance? Public Management Review, 8(3), 377–387.

Torfing, J., & Triantafillou, P. (2013). What’s in a Name? Grasping New Public Governance as a Political-Administrative System. International Review of Public Administration, 18(2), 9–25.

US EPA. (2020, August 6). Foundations for Evidence-Based Policymaking Act of 2018 [Overviews and Factsheets]. US EPA.

Warren, M. (2009). Governance-Driven Democratization. Critical Policy Studies, 3, 3–13.

