EGPA CfP: Permanent Study Group XV: Public Administration, Technology and Innovation

The European Group for Public Administration (EGPA), in close collaboration with Politecnico di Milano, is organizing the 2017 EGPA Annual Conference, to be held from 30 August to 1 September in Bovisa (Milan). The event will be preceded by the PhD Symposium on 28 and 29 August.

This year, PSG XV on Public Administration, Technology and Innovation (PATI) invites theoretical and empirical papers on topics related to the co-evolutionary dynamics between technological and social innovations and public administration.

We are particularly interested in the following topics, though submissions on other related topics will also be considered:

  • The outcomes of public sector innovations: what have been the economic, procedural, trust/legitimacy-related outcomes of public sector innovations resulting from either adoption of new governance practices (co-creation and co-production, living labs, etc.), or new technologies (online platforms, big data tools, other technologies of ‘smart cities’, etc.)?
  • Digital transformations and technological innovations: how are public administrations using new methods and technologies to transform public service delivery? In other words, what kinds of approaches are public administrations using to abandon traditional designs that follow their own internal logics and to adopt human-centered approaches that place citizen needs at the center of co-design and co-implementation processes?
  • Emergence of predictive governance and on-demand public services: how will the adoption of big data, participatory forms of governance, etc. affect the modalities of public services, i.e. will we see the shift from universal to on-demand and predictive public services and what will be the key opportunities and challenges (political, economic, ethical, technological) of such transformations?


  • Please submit abstracts via the conference website by 10 April 2017
  • The decisions will be announced by 8 May 2017
  • Complete papers should be uploaded by 1 August 2017

For queries and further information, please contact Dr. Erkki Karo, erkki.karo[at]



Prof. Rainer Kattel, Ragnar Nurkse Department of Innovation and Governance, Tallinn University of Technology, Estonia, rainer.kattel[at]

Prof. Dr. Ines Mergel, Department of Politics and Administration, University of Konstanz, Germany, ines.mergel[at]

Dr. Erkki Karo, Ragnar Nurkse Department of Innovation and Governance, Tallinn University of Technology, Estonia, erkki.karo[at]


International Conference Call for Papers: Innovation in Public Services and Public Policy (PUBSIC)

15 – 17 November 2017

Lillehammer University College, Norway

Keynote speakers

Sandford Borins, University of Toronto
Stephen Osborne, University of Edinburgh

Lillehammer is easily accessible by a direct train from Oslo Airport – which has excellent air travel connections across the world.

Innovation is often articulated as a panacea for addressing social and economic problems in the modern world. However, the models of innovation for public services that are posed in this context have often drawn from private sector experience in an undifferentiated way that conflates the manufacturing of products with the delivery of services, and have not taken into account the distinctive characteristics of public rather than private services.

In recent years, however, public management theory on innovation has begun to evolve, with an important body of knowledge on public service delivery emerging – for example, there was a special issue of Public Management Review devoted to public service innovation in 2014, whilst a major research programme of the European Commission on social innovation has recently been concluded (LIPSE). Important international conferences sponsored by IRSPM were also held in Shanghai and in Budapest in 2015.

Call for papers

To continue this dialogue and to build upon this evolving body of knowledge, we invite you to participate in the Public and Social Innovation Conference (PUBSIC 2017), to be held at Lillehammer University College in Norway over 15-17 November 2017. Lillehammer University College is leading the development of public and social innovation research in Norway with the support of the Norwegian Research Council and with excellent links into Norwegian public service delivery.

The International Advisory Board invites abstract proposals across the following themes:

  • Public and social innovation and ICT/digital technology (including the use of Big Data)
  • Collaboration and open innovation in public and social innovation
  • Co-production,  the co-creation of value, and  public and social innovation
  • Co-design and the role of citizens/service users in public and social innovation
  • The third and non-profit sector and social and public innovation
  • Social enterprise, social entrepreneurship, and social and public innovation
  • Managing and evaluating  public and social innovation
  • Political dimensions of public and social innovation
  • Innovation in public policy and in public policy processes
  • The roles of public employees and/or citizens in public and social innovation
  • Public – private partnerships and public and social innovation

Abstract proposals, to a maximum of 500 words, should be submitted by 15th May 2017. For more information on the proposal submission process, see the conference webpage. Panel proposals are also welcome, within one of the suggested themes. A panel consists of 3-4 connected papers; a panel proposal should be a maximum of 1,000 words and include an overview of the panel topic (500 words) and a summary of the papers within the panel (500 words). All abstracts and panels will be reviewed by the International Advisory Board and decisions notified to lead authors by 9th June. Papers are welcome from both experienced researchers and new/doctoral researchers, and may be of either an empirical or a theoretical nature.

 All conference papers will be considered for fast track review to Public Management Review (PMR) and also possibly for a special issue of PMR, if there are sufficient papers of the requisite quality.

International Advisory Board

Rolf Rønning [Co-Chair] (Lillehammer University College), Stephen P Osborne [Co-Chair] (University of Edinburgh), Gyorgy Drótos (Corvinus University, Budapest), Ricardo Gomez (University of Brasilia), Jean Hartley (Open University), Yijia Jing (Fudan University, Shanghai), Albert Meijer (University of Utrecht), Ines Mergel (University of Konstanz), Greta Nasi (Bocconi University, Milan), Madeline Powell (University of Sheffield), Eva Sørensen (Roskilde University), and Richard Walker (City University of Hong Kong).

For further information on PUBSIC contact Rolf Ronning –

Predictive Analytics in the Public Sector

My colleague Rainer Kattel (Tallinn University of Technology) and I are in the process of conducting interviews on digital transformation in the Estonian government. By coincidence we came across an interesting practice: the use of Big Data to review customs and financial data streams with the goal of reducing corruption. I wrote this up as a short contribution for the German Behörden Spiegel – a newspaper for public managers.

Here is the text (adapted from the German version – scroll down for the original text):

Big Data are internet-generated data that result from online interactions of humans with websites or from passive data collection by computer networks and physical sensors. The resulting data sets are usually described as "big" because of their size, the speed at which they are generated, and the possibilities they open up for predictive analytics and real-time insights into the behavioral preferences of citizens.

Traditionally, public sector organizations operate mostly with administratively designed and collected data that result from direct interactions with citizens, other government records, and transactional data sets such as open data. These data usually go through an extensive cleaning and analysis process before being made available, with significant time delays (in the case of census data, even years). Oftentimes, this 'old' data is used for predictive analytics to project the potential needs of citizens. Big Data, however, are automatically generated and unstructured, and matching them with administrative data for use by public managers requires significant effort.

Using the example of the Estonian customs and tax services, Big Data analytics can help fight corruption in near real time. Based on standardized cash flows, the Estonian tax and customs analysts have created risk profiles for different types of organizations, and every company is matched with one of these profiles. The profiles are continuously compared to actual cash flows, and daily updates and adjustments are made in case of minor deviations. In addition to the risk profiles, so-called Key Performance Indicators are used in combination with additional data sets, such as banking transactions, invoices, business registers, and land register entries. Data from online auction sites are also used to find out whether sellers are paying their sales taxes.

In case of anomalies between the expected tax income and a company's risk profile, warnings are sent to the analytics team based on a predefined algorithm. After a first review, the team decides what information to forward to the specialists, who conduct their own ad hoc investigations. Combining the analytical assessment with the specialists' experience and judgment, a more detailed risk assessment is derived. As a result, either the risk profile is adjusted, or auditors launch an on-site tax examination the same day.
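The loop described above, comparing each company's observed cash flows to its assigned risk profile, adjusting the profile on minor deviations, and warning the analytics team on major ones, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the Estonian authorities' actual system; the profile values, the tolerance threshold, and the function names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RiskProfile:
    # Expected monthly cash flow and tolerated relative deviation
    # for one type of organization (values are hypothetical).
    expected_cash_flow: float
    tolerance: float  # e.g. 0.25 = flag deviations beyond 25%

def check_company(profile: RiskProfile, observed_cash_flow: float) -> dict:
    """Compare an observed cash flow against the company's risk profile.

    Minor deviations update the profile (a crude moving average);
    major deviations raise a warning for the analytics team.
    """
    deviation = abs(observed_cash_flow - profile.expected_cash_flow) / profile.expected_cash_flow
    if deviation <= profile.tolerance:
        # Minor deviation: adjust the profile toward the new observation.
        profile.expected_cash_flow = 0.9 * profile.expected_cash_flow + 0.1 * observed_cash_flow
        return {"warning": False, "deviation": deviation}
    # Major deviation: leave the profile unchanged and alert the team.
    return {"warning": True, "deviation": deviation}

# Hypothetical example: a retail-type profile expecting 100k/month.
retail = RiskProfile(expected_cash_flow=100_000, tolerance=0.25)
print(check_company(retail, 110_000))  # small deviation: profile adjusted
print(check_company(retail, 40_000))   # large deviation: warning raised
```

In the workflow described above, a raised warning would be routed to the analytics team for review rather than printed, while minor deviations simply update the profile, mirroring the daily adjustments the authorities make.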

This type of real-time analysis and timely interpretation of large-scale data sets allows the Estonian tax and customs authorities to assess the current tax situation and identify potential corruption cases as they arise.

In the future, predictive analytics tools can be used to identify patterns about the health of individual companies, and to understand the potential economic and social impact of impending bankruptcies. Big data analytics can thus help government make more effective and efficient decisions, be better prepared, and act preventively.


Here is the full text, translated from German, along with a link to the article.

Fighting corruption in real time

Big Data are internet-generated data that result from people's online interactions with websites and from physical sensors. The resulting data sets, which are generally defined by their size, the speed of their creation, and the resulting possibilities for real-time analysis, give public administrations insights into the needs and actual actions of citizens. They are a combination of social media data such as shared videos and photos, likes/shares, online banking, online purchases, and mobile phone data.

Traditionally, public administration works with administratively designed and laboriously collected data sets that arise primarily from direct interactions with citizens. Administrative data can be assigned to a case and to individual persons or households. Examples include census data or previously processed cases, which, combined with the professional understanding of civil servants, are used for so-called predictive analytics to forecast future trends. Big Data sets, by contrast, are generated automatically, are unstructured, and require considerable effort to make usable for public administration.

In combination, Big Data and administrative data can help make the core tasks of public administration more efficient and effective. This is illustrated by the Estonian tax authorities, who use Big Data analytics to quickly identify tax evasion so that on-site investigations can be initiated, ideally on the same day.

Based on standardized financial flows, the customs and tax officials first created so-called risk profiles for different types of companies. These are tested against real financial data and continuously adjusted to actual business conduct, daily if necessary. In addition to the risk profiles, so-called Key Performance Indicators are used in combination with further data sets such as bank transfers, invoices, business registers, and land register entries. Data from online car auction sites are also included to find out whether sellers are paying tax on their income.

As soon as deviations from the taxable financial flows arise that do not match a company's profile, the predefined algorithms send warnings to the analytics team, which forwards the data together with its own assessment to the specialist department. Combining the specialists' judgment with the risk analysis yields a clearer risk assessment, which the tax and customs authorities use to initiate further steps. Either the company's risk profile is adjusted to the new situation so that no further warnings are triggered, or auditors launch an on-site inspection the same day.

This kind of real-time analysis and interpretation of large data streams allows the Estonian tax and customs authorities to determine information about the country's current tax situation. In the future, the established tools can also be used to predict, from patterns visible in the financial flows, whether a company is heading into difficulties. Predictive analytics can then also help to recognize burdens on the state and emerging social problems early and, where possible, to intervene preventively, or at least to be prepared.


Professor Dr. Ines Mergel is Professor of Public Administration at the University of Konstanz, where she researches and teaches on the digital transformation of public administration. Contact:

LSE Impact of Social Sciences blog: What does Big Data mean to public affairs research? Understanding the methodological and analytical challenges

The following text was originally prepared for LSE’s Impact of Social Sciences Blog and reposted here.


The term ‘Big Data’ is often misunderstood or poorly defined, especially in the public sector. Ines Mergel, R. Karl Rethemeyer, and Kimberley R. Isett provide a definition that adequately encompasses the scale, collection processes, and sources of Big Data. However, while recognising its immense potential it is also important to consider the limitations when using Big Data as a policymaking tool. Using this data for purposes not previously envisioned can be problematic, researchers may encounter ethical issues, and certain demographics are often not captured or represented.

In the public sector, the term ‘Big Data’ is often misused, misunderstood, and poorly defined. Public sector practitioners and researchers frequently use the term to refer to large data sets that were administratively collected by a government agency. Though these data sets are usually quite large and can be used for predictive analytics, administrative data does not include the oceans of information that is created by private citizens through their interactions with each other online (such as social media or business transaction data) or through sensors in buildings, cars, and streets. Moreover, when public sector researchers and practitioners do consider broader definitions of Big Data they often overlook key political, ethical, and methodological complexities that may bias the insights gleaned from ‘going Big’. In our recent paper we seek to provide a clearer definition that is current and conversant with how other fields define Big Data, before turning to fundamental issues that public sector practitioners and researchers must keep in mind when using Big Data.

Defining Big Data for the public sector

Public affairs research and practice has long profited from dialogue with allied disciplines like management and political science and has more recently incorporated insights from computational and information science. Drawing on all of these fields we define Big Data as:

“High volume data that frequently combines highly structured administrative data actively collected by public sector organizations with continuously and automatically collected structured and unstructured real-time data that are often passively created by public and private entities through their internet interactions.”

This definition encompasses the scale of newly emerging data sets (many observations with many variables) while also addressing data collection processes (continuous and automatic), the form of the data collected (structured and unstructured), and the sources of such data (public and private). The definition also suggests the ‘granularity’ of the data (more variables describing more discrete characteristics of persons, places, events, interactions, and so forth), and the lag between collection and readiness for analysis (ever shorter).

Methodological and analytical challenges

Defined thus, Big Data promises access to vast amounts of real-time information from public and private sources that should allow insights into behavioral preferences, policy options, and methods for public service improvement. In the private sector, marketing preferences can be aligned with customer insights gleaned from Big Data. In the public sector, however, government agencies are by design less responsive and agile in their real-time interactions, instead using time for deliberation in order to serve broader public goods. The responsiveness Big Data promises is a virtue in the private sector but could be a vice in the public.

Moreover, we raise several important concerns with respect to relying on Big Data as a decision and policymaking tool. While in the abstract Big Data is comprehensive and complete, in practice today’s version of Big Data has several features that should give public sector practitioners and scholars pause. First, most of what we think of as Big Data is really ‘digital exhaust’ – that is, data collected for purposes other than public sector operations or research. Data sets that might be publicly available from social networking sites such as Facebook or Twitter were designed for purely technical reasons. The degree to which this data lines up conceptually and operationally with public sector questions is purely coincidental. Use of digital exhaust for purposes not previously envisioned can go awry. A good example is Google’s attempt to predict the flu based on search terms.

Second, we believe there are ethical issues that may arise when researchers use data that was created as a byproduct of citizens’ interactions with each other or with a government social media account. Citizens are not able to understand or control how their data is used and have not given consent for storage and re-use of their data. We believe that research institutions need to examine their institutional review board processes to help researchers and their subjects understand important privacy issues that may arise. Too often it is possible to infer individual-level insights about private citizens from a combination of data points and thus predict their behaviors or choices.

Lastly, Big Data can only represent those that spend some part of their life online. Yet we know that certain segments of society opt in to life online (by using social media or network-connected devices), opt out (either knowingly or passively), or lack the resources to participate at all. The demography of the internet matters. For instance, researchers tend to use Twitter data because its API allows data collection for research purposes, but many forget that Twitter users are not representative of the overall population. Instead, as a recent Pew Social Media 2016 update shows, only 24% of all online adults use Twitter. Internet participation generally is biased in terms of age, educational attainment, and income – all of which correlate with gender, race, and ethnicity. We believe therefore that predictive insights are potentially biased toward certain parts of the population, making generalisations highly problematic at this time.

In summary, we see the immense potential of Big Data use in the public sector, but we also believe that it is context-specific and must be meaningfully combined with administratively collected data and purpose-built ‘small data’ to have value in improving public programmes. Increasingly, public managers must know how to collect, manage, and analyse Big Data, but they must also be fully conversant with the limitations and potential for misuse.

This blog post is based on the authors’ article, ‘Big Data in Public Affairs’, published in Public Administration Review (DOI: 10.1111/puar.12625).

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics.

About the authors

Ines Mergel is full professor of public administration at the University of Konstanz's Department of Politics and Public Administration. Mergel focuses her research and teaching activities on topics such as digital transformation and adoption of new technologies in the public sector. Her ORCID iD is 0000-0003-0285-4758 and she may be contacted at

Karl Rethemeyer is Interim Dean of the Rockefeller College of Public Affairs & Policy, University at Albany, State University of New York. Rethemeyer’s primary research interest is in social networks and their impact on political and policy processes. His ORCID iD is 0000-0002-5673-8026 and he may be contacted at

Kimberley R. Isett is Associate Professor of Public Policy at the Georgia Institute of Technology. Her research is centred on the organisation and financing of government services, particularly in health. Her ORCID iD is 0000-0002-7584-0181 and she may be contacted at

CfP Special Issue Agile Government and Adaptive Governance in GIQ

Special Issue on Agile Government and Adaptive Governance in the Public Sector

Governments around the world have to respond faster to citizen needs, such as the expectation of 24/7 availability and personalized access to government services generated by the so-called 'Facebook generation'. Seamless user-centric experiences on social networking sites such as Weibo or Twitter, as well as on online marketplaces such as Amazon, increase the demand for similar experiences with government services. In addition, industry trends such as Big Data, predictive analytics methods, and Smart City approaches drive the need to create internal capacity and skill sets to evaluate, respond to, and implement new technologies and internal processes.

The previous New Public Management era has left many government organizations with a reduced skill set and limited capacity to upgrade their IT infrastructure. As a result, their capability to innovate has deteriorated due to increasing incentives to outsource IT development and services in particular. The rollout disaster in the U.S. was a clear indication that the role of information management experts in government is oftentimes limited to contract management tasks, such as planning and oversight. One response from government organizations is to create internal innovation labs, organize hackathons, hire Chief Innovation Officers, or try to recruit industry expertise into government.

We are observing the first organizational, structural, managerial, procedural, and technological changes that address the changing internal and external environments of government organizations. For example, the UK and US governments have adopted new organizational structures in the form of digital services teams that can respond faster to the ad hoc needs of their internal government clients. They have adopted an agile government approach, designing software in the more information- and user-centric way that is standard in the IT industry. Once software is developed, it is shared widely across all levels of government rather than siloed in one department. In addition, governments need to adapt to changes in their internal and external environments and create systems that allow them to scan trends, identify developments, predict their potential impact on the organization, and quickly learn and implement responses (Gong & Janssen, 2012).

This special issue therefore invites papers that address open research questions posed in two recent Viewpoint pieces in Government Information Quarterly: by Janssen & van der Voort (2016) on adaptive governance and by Mergel (in press) on agile government. Adaptive governance should ensure that an organization is able to deal with change while protecting it from becoming unstable. The main characteristics of adaptive governance are decentralized bottom-up decision-making, efforts to mobilize internal and external capabilities, wider participation to spot and internalize developments, and continuous adjustment to deal with uncertainty (Janssen & van der Voort, 2016). An agile government introduces user-centric software development approaches implemented together with agency-based project managers to shorten the implementation cycle, improve the outcomes of IT projects, and make sure that user needs are considered (Mergel, in press).

For this special issue, we welcome conceptual, empirical, qualitative, quantitative or mixed methods research papers. Topics may include, but are not limited to, the following:

  • Conceptualization of agile government and adaptive governance, implication, benefits and theory building;
  • Specific or distinguishable agile software development approaches for governmental organization and/or digital public service;
  • Agile software development project management (e.g. Scrum method) in governmental contexts;
  • The impact of applying agile government or adaptive governance on the culture, organizational structure, business processes and individual behaviors;
  • The impact of agile government and adaptive governance on policy-making processes, including information acquisition, negotiation, policy formulation, evaluation and examination;
  • Information sharing and organizational learning in agile government and adaptive governance environments;
  • Adaptation at different levels, traceability and accountability in agile government and adaptive governance projects;
  • Principles and approaches to enable/increase adaptability;
  • Coordination/mediation mechanisms in adaptive governance;
  • Pros and cons of adaptability, barriers and drivers, challenges and opportunities, balance between adaptability, stability, and accountability;
  • In-depth and comparative case studies of agile government and adaptive governance in public sector; and
  • Whether, and how, agile development approaches lead to user-centric digital government services, processes, and applications.

Special Issue Guest Editors:

  • Ines Mergel, University of Konstanz, contact:
  • Yiwei Gong, School of Information Management at Wuhan University, contact:
  • John Bertot, iSchool at University of Maryland, contact:

Special Issue Format

Each submission is subject to a rigorous double-blind peer review process with at least two independent reviewers. Authors can contact the guest editors for additional information.

Deadline for manuscript submission: January 1, 2017 (extended to February 15, 2017)


References

Gong, Y., & Janssen, M. (2012). From policy implementation to business process management: Principles for creating flexibility and agility. Government Information Quarterly, 29(Supplement 1), 61-71.

Janssen, M., & van der Voort, H. (2016). Adaptive governance: Towards a stable, accountable and responsive government. Government Information Quarterly, 33(1), 1-5.

Mergel, I. (2016). Agile innovation management in government: A research agenda. Government Information Quarterly, 33(3), 516-523.

Using social media metrics and big data analytics for actionable insights

Generate actionable insights from social media and big data

Oftentimes, social media use is seen as fundamentally different from other forms of formal organizational communication because of its speed, its dynamic and egalitarian nature, and its informality. It is important to align the use of social media with the organizational mission. Citizens can passively absorb the information, and government can abandon other forms of publication and save taxpayer dollars. Public affairs offices can design campaigns to gain attention for certain issues, highlight deadlines in all phases of the policy life cycle (Lasswell 1951), or increase participation in offline behavior and online votes (increasing turnout). The result might be increased acceptance of public policy, increased inclusion, and a reduction of inequality of access (Thomas 1993; Bingham, Nabatchi, O'Leary 2005). Government organizations also have the opportunity to defuse misinformation and rumors, lower the costs of negative campaigning by quickly injecting correct, formal information, and bring in innovative knowledge about and from stakeholders.

Suggestions for practitioners aiming to measure the impact of their social media activities:

  1. Understand what you are trying to accomplish (increase attention, target certain constituencies) and what it looks like if you succeed with your communication (DiStaso, McCorkindale, and Wright 2011). How do your social media activities support the organizational mission, and to what extent do they help you, for example, to become more innovative?
  2. Define a social media strategy and the insights that will optimize your approaches to achieving these goals (Mergel 2012).
  3. Develop measures that focus on behavioral outcomes and not just reach. Have your posts and online interactions helped citizens change their behavior? Did they go out and participate in an initiative for which you need citizen input? Did they change their behavior by applying for a program?
  4. Display the information on dashboards that are accessible and understandable for decision makers (like the CDC’s social media dashboard) to see immediately how citizens are perceiving the information that is sent out by your organization (DiStaso, McCorkindale, and Wright 2011).
  5. Use the insights to optimize your tactics and identify actionable opportunities for program adjustments (Murdough 2009).

The following flowchart summarizes the steps outlined above:
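As a complementary illustration of steps 3 to 5 above, behavior-focused metrics can be distinguished from raw reach with a short sketch. All field names and figures below are invented for illustration; this is not a reference to any real dashboard or API.

```python
# Hypothetical engagement records for two campaign posts.
# 'reach' alone says little; the outcome field tracks whether citizens
# actually acted (e.g. applied for a program, joined an initiative).
posts = [
    {"post_id": "p1", "reach": 12_000, "clicks": 800, "program_applications": 40},
    {"post_id": "p2", "reach": 30_000, "clicks": 500, "program_applications": 5},
]

def outcome_metrics(post: dict) -> dict:
    """Derive behavior-focused rates rather than raw reach (step 3)."""
    return {
        "post_id": post["post_id"],
        "click_rate": post["clicks"] / post["reach"],
        # Conversion: share of clicks that led to a concrete action.
        "conversion_rate": post["program_applications"] / post["clicks"],
    }

# A simple dashboard-style summary for decision makers (step 4):
for row in map(outcome_metrics, posts):
    print(f"{row['post_id']}: click rate {row['click_rate']:.1%}, "
          f"conversion {row['conversion_rate']:.1%}")
```

Despite p2's much larger reach, p1 converts clicks into concrete citizen actions at a far higher rate, which is the kind of actionable insight for program adjustment that step 5 calls for.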


New article published: Agile Innovation Management in Government: A Research Agenda

I wrote a paper based on my interviews with CTOs and digital service innovators in the U.S. federal government. The goal of the paper is to bring together the elements that lead to innovations in digital service delivery. I contrast traditional software development processes with elements of an agile innovation management approach. The result is a research framework and research questions for future explorations:

Governments are facing an information technology upgrade and legacy problem: outdated systems and acquisition processes are resulting in high-risk technology projects that are either over budget or behind schedule. Recent catastrophic technology failures, such as the failed launch of the politically contested online marketplace in the U.S., were attributed to an over-reliance on external technology contractors and failures to manage large-scale technology contracts in government. As a response, agile software development and modular acquisition approaches, and new independent organizational units equipped with fast-reacting teams, in combination with a series of policy changes, are being developed to address the need to innovate digital service delivery in government. This article uses a process tracing approach, as well as initial qualitative interviews with a subset of executives and agency-level digital services members, to provide an overview of the existing policies and implementation approaches toward an agile innovation management approach. The article then provides a research framework, including research questions, that offers guidance for future research on the managerial implementation considerations necessary to scale up the initial efforts and move toward a collaborative and agile innovation management approach in government.
Reference: Mergel, I. (2016): Agile Innovation Management in Government: A Research Agenda, in: Government Information Quarterly, 33(3), pp. 516-523.

New paper: #BigData in Public Affairs published in PAR

Karl Rethemeyer, Kim Isett, and I just published a new paper in Public Administration Review with the title “Big Data in Public Affairs”.

Our goal for this article is to define what big data means for our discipline and to raise interesting research questions that have not yet been explored. Here is the abstract of our article. Please email me if you can’t access the full paper:

This article offers an overview of the conceptual, substantive, and practical issues surrounding “big data” to provide one perspective on how the field of public affairs can successfully cope with the big data revolution. Big data in public affairs refers to a combination of administrative data collected through traditional means and large-scale data sets created by sensors, computer networks, or individuals as they use the Internet. In public affairs, new opportunities for real-time insights into behavioral patterns are emerging but are bound by safeguards limiting government reach through the restriction of the collection and analysis of these data. To address both the opportunities and challenges of this emerging phenomenon, the authors first review the evolving canon of big data articles across related fields. Second, they derive a working definition of big data in public affairs. Third, they review the methodological and analytic challenges of using big data in public affairs scholarship and practice. The article concludes with implications for public affairs.


Mergel, I., Rethemeyer, R. K., Isett, K. (forthcoming): Big Data in Public Affairs, in: Public Administration Review, DOI: 10.1111/puar.12625.

Award: Research stipend from the IBM Center for The Business of Government


The IBM Center for The Business of Government has announced a new round of winners of its research stipends. I won an award to write about my research on digital service transformation in the U.S. federal government.

Here is the announcement text:

The Center for The Business of Government continues to support research by recognized thought leaders on key public management issues facing government executives today. We are pleased to announce our latest round of awards for new reports on key public sector challenges, which respond to priorities identified in the Center’s research agenda. Our content is intended to stimulate and accelerate the production of practical research that benefits public sector leaders and managers.

My report will focus on the following topic: “Implementing Digital Services Teams Across the U.S. Federal Government”

In 2014, the White House created the U.S. Digital Service team and the General Services Administration’s 18F group. Both groups are using agile software development processes to design and implement high-profile software projects. The results of this report will include lessons learned during the scaling-up of digital service teams across the departments of the U.S. federal government. These will focus on managerial design aspects, organizational challenges, motivations of digital “SWAT” teams and their department-level counterparts, as well as first outcomes in the form of digital service transformations in each department. This research report aims to support the presidential transition team’s efforts by outlining the current efforts to scale up digital service teams, their lessons learned, and the observable outcomes of digital service teams across the U.S. federal government.