*Please cite this paper as: McBride, K., Aavik, G., Kalvet, T., Krimmer, R. (2018). “Co-creating an Open Government
Data Driven Public Service: The Case of Chicago’s Food Inspection Forecasting Model”. 2018 51st Hawaii
International Conference on System Sciences (HICSS). IEEE, Forthcoming.
Co-creating an Open Government Data Driven Public Service:
The Case of Chicago’s Food Inspection Forecasting Model
Keegan McBride, Gerli Aavik, Tarmo Kalvet, Robert Krimmer
Tallinn University of Technology, Tallinn, Estonia
{Keegan.McBride | Gerli.Aavik | Tarmo.Kalvet |}
Large amounts of Open Government Data (OGD)
have become available and co-created public services
have started to emerge, but there is only limited
empirical material available on co-created OGD-
driven public services. To address this shortcoming
and explore the concept of co-created OGD-driven
public services the authors conducted an exploratory
case study. The case study explored Chicago’s use of
OGD in the co-creation of a predictive analytics
model that forecasts critical safety violations at food
serving establishments. The results of this exploratory
work allowed for new insights to be gained on co-
created OGD-driven public services and led to the
identification of six factors that seem to play a key role
in allowing for an OGD-driven public service to be co-created. The results of the initial work also provide
valuable new information that can be used to aid in the
development and improvement of the authors’
conceptual model for understanding co-created OGD-driven public services.
1. Introduction
In current e-Government literature, there are two
topics that have been receiving increased interest and
focus: open government data (OGD) and co-creation
[1]. Increasing evidence is appearing on OGD’s
benefits and potential [2] as well as the barriers
preventing its usage [3], [4]. The second topic, co-creation, emerges from the concept of coproduction, brought into the spotlight by Elinor Ostrom in 1972 [5].
[5]. A previous paper has linked these two topics and
discussed the idea of a “co-created OGD-driven public
service” [6]. This concept emerges from a new
understanding of what a public service is, “public
services are any services which are offered to the
general public with the purpose of developing public
value, regardless of the role that the public sector plays
in the process” [6], [7].
The co-created OGD-driven public service has two
main components. Firstly, when talking about the co-
creation of new public services, co-creation may be
understood as the involvement of outside, non-typical,
stakeholders in the initiation, design, implementation,
and evaluation of the public service [6]. There is a difference between coproduction and co-creation; this was highlighted in a recent work which stated that “all public services are coproduced, but not all public services are co-created” [8]. In co-created OGD-driven public services, the process in which co-creation takes place must be understood; for this purpose, the framework put forth by [6] is used to provide an initial understanding.
Another interesting concept that allows a bridge to
be built between the concepts of OGD and co-creation
is the notion of Government as a Platform (GaaP).
GaaP as a means for understanding the relationship
between OGD and co-creation was brought forth by
Linders in [9]. The core idea behind GaaP is that there
is a large amount of governmentally held and
generated data, dissemination of said data is becoming
less difficult, and that this data is able to aid and drive
the creation of new and innovative activities [9]. In the
GaaP model, the government is providing OGD and
this data may be used or exploited by any actor or
stakeholder to create public value. This use and
exploitation of the data may be understood as co-
creation as the government is providing the data and,
if the resulting applications produce public value, a new public service has been driven by OGD and was co-created.
There has been a clear increase in interest in the
topics of co-creation and OGD, and some authors have
worked on further conceptualizing the relationship
between the two ideas as well as provided some
understanding of how co-created OGD-driven public
services may come into being [10]. However,
currently, there is limited empirical work that looks at,
and examines, co-created OGD-driven public services
in the real world. There are two reasons for this: the first is that it is a relatively new concept, and the second is a general lack of real-world examples of co-created OGD-driven public services. This research gap was explored further by conducting an analysis of data-analytics and OGD programs, as an empirical example has the potential to advance the current understanding of co-created OGD-driven public services. Through this analysis, one example stood out. The service involved multiple stakeholders
(city governmental agencies, private sector
companies, NGOs, and citizens), was developed using
open source code and is still freely available, heavily
utilized OGD, and convincingly produced public
value. Additionally, previous work has been done on
OGD in the selected city that demonstrated the
effectiveness of the OGD portal there [11]. This
combination of factors seemed to allow the service to
be classified as a co-created OGD-driven public service, and it was thus selected for further analysis.
This paper aims to explore Chicago’s use of OGD
for a new predictive analytics model that allows Chicago’s Department of Public Health (CDoPH) to forecast critical safety violations at food serving establishments. Through this exploration, new
insight has been gained that can later be used to further
develop the understanding of co-created OGD-driven
public services. The importance and relevance of this
case was summed up in a succinct manner by Tom
Schenk, the Chief Data Officer of Chicago, in a report
he authored on the service: “collaboration was a key
component of this project… and each variable used
in the model was available on Chicago’s open data
portal” [12]. Later in the report it was stated that “the
portal was an effective tool to allow collaborative
research”, and that “this project was able to leverage
Chicago’s key data assets: its large volume of data, the
transparency and size of its open data portal, and its
ability and willingness to conduct research to improve
city services, introduce savings, and increase
engagement with Chicago-area businesses” [12]. The
stakeholders involved in this collaborative effort were
Chicago’s Department of Innovation and Technology
(CDoIT), members of Allstate Insurance’s Data
Science Team, CDoPH, the Civic Consulting Alliance
(CCA), and, finally, citizens, who also played a role in structuring the new public service.
In order to understand better the process of co-
created OGD-driven public services, an exploratory
case study was conducted. This paper presents the
case, reflects on the process, and discusses how the
findings from the case grow and aid the current
understanding of co-created OGD-driven public
services. The paper is structured as follows. Section 2 provides a brief overview of the methodology used to conduct the case study; this is followed by a presentation of the case in Section 3. Section 4 discusses the findings that have emerged from the case and puts forth initial propositions that reflect back on current theory and our understanding of co-created OGD-driven public services. Finally, Section 5 concludes the paper and puts forth avenues for future research.
2. Methodology and Conceptual Model
In the previous section, the case was briefly
introduced. It was stated that the model utilizes
multiple sources of OGD, and that collaboration
between many different stakeholders was key for this
model to be completed and implemented. It has also
been said that the OGD portal is what allowed these
different stakeholders to come together and exploit
OGD to co-create this new OGD-driven public
service. For these reasons, the Chicago food predictive
analytics model was selected as the case for this paper.
This holistic exploratory case study [13] aims to
explore the process that was undergone to move the
co-created OGD-driven public service from ideation
through development and into its current stage.
Though this may be defined as a critical and unique case, it is still only a single case and thus offers a lower level of generalizability. However, it should still allow
an initial study to be conducted that provides insight
into the inner workings of a co-created OGD-driven
public service.
For the initial understanding of OGD-driven
public service co-creation, the framework presented
by [6] will be used. The aim of this paper is to explore a co-created OGD-driven public service and gain new insight, but the model is presented as it provides a starting point for looking at co-created OGD-driven public services. Observing the process of the case at hand allows new insights to be gained regarding what factors influence the co-creation of OGD-driven public services, potentially provides new insights into the conceptual understanding of co-created OGD-driven public services, and sheds light on the different roles stakeholders played in this process. This new insight
may then be used in future development and
improvement of the model. In order to gain initial
insight into the case, newspaper articles, source code,
and a report on the model’s GitHub page were
consulted. With an initial foundational understanding
in place, semi-structured interviews were conducted to
delve into the case and understand better the dynamics
at play.
Six semi-structured interviews were conducted
with stakeholders representing different parties: one person each from CDoIT, the CCA, Allstate, and Montgomery County, and two members from the CDoPH. These interviews were conducted during April and May 2017 over the phone or through Skype and lasted between 15 and 40 minutes each; all interviews were recorded and then
transcribed. The first interview conducted was with
Tom Schenk, the main person behind the case, and
then, using snowballing, other interviewees were
selected. The interviewee from Montgomery County
Department of Innovation was selected due to the
county’s relationship with the project (Montgomery
County implemented Chicago’s code with the help of
a private sector partner), though they were not directly
involved in the initial model development.
The interview questions aimed to provide a better
understanding of the interviewee’s role in the project,
how they got involved, how the process unfolded,
what went well and what did not go well, and then at
the end participants were asked to add in any
comments that were not discussed during the
interview. The responses from the interviewees are presented and discussed in Section 4; commonly mentioned themes and facts will be further used to draft initial propositions on what seems to influence the success of a co-created OGD-driven public service, as well as what factors seem to be needed to allow OGD-driven public service co-creation to take place.
2.1 Conceptual Model
A recent paper [6] has proposed that public
services can be created through an innovation process
based on the ideas of co-production and agile
development. The model, shown in Figure 1, argues
that in an environment where OGD and tools for data
analytics, exploitation, and co-production are made
widely available, any actor can take the lead and
initiate or co-create data-driven services that create
public value. To do that, it is important to focus on the service user, be agile, develop quickly, listen to the
service user, and be able to adapt quickly to changing
needs. This service innovation process can be
summarized through four points:
1. The government and citizens should be
partners at all stages from ideation to creation
to implementation of the new data-driven
public service.
2. There should be an initial release of the public
service at an early stage, or an ‘MVP’ of the
public service, which allows the cycle to be
started as quickly as possible.
3. The public service should be able to respond to
user feedback from the initial launch.
4. User input should be sought and utilized at all
stages of the public service creation.
The model argues that the traditional government-
driven top-down waterfall-like method of public
service production no longer meets the needs and
expectations of the citizens and new collaborative and
data-driven approaches are needed. The model follows a four-phase cycle of open government data driven co-initiation, co-design, co-implementation, and co-evaluation.
Figure 1: Conceptual Model [6]
3. The Case
This section will present the case of the Chicago
predictive food analytics model. It starts in section 3.1
by presenting the relevant contextual information
surrounding the case. Following this, in section 3.2,
the process of the development and implementation of the case will be discussed in two stages: the initial iteration (3.2) and the second iteration (3.3). These sections will also cover the role of stakeholders, the processes of development, and the role of OGD and co-creation. The final aspect of the case, presented in 3.4, is the impact of the new co-created OGD-driven public service and potential directions for the future.
3.1. Context
The context surrounding the case must be
presented so that the case may be better understood.
When looking at the relevant contextual factors for
this case, there seem to be four main variables: access
to a functioning OGD portal, previous experience in
the realm of predictive analytics, a grant received from
Bloomberg Philanthropies Mayors’ Challenge, and a
law requiring the inspection of establishments that
serve food. These factors form the core contextual
foundation for the case and their importance is presented in the following paragraphs.
Chicago’s OGD portal was initially developed in
2010, but in 2012 its importance was reinforced by an
order issued by Mayor Rahm Emanuel. This order
stated that Chicago must establish and maintain an
OGD portal, and that every city agency must “make
available online… at a level of granularity acceptable
to DOIT (Department of Innovation and Technology),
all appropriate datasets and associated metadata under
such agency’s control” [14]. When discussing the
motivation for this executive order, the Mayor
explained that OGD could be used to “create
applications that will improve service delivery and lead
to greater quality of service for residents and more
public engagement in City government”. The OGD portal has since grown rapidly; it currently provides access to over 550 datasets and to applications built by the city and private developers, offers tutorials on how the available data may be exploited or analyzed, provides tools that allow for easy visualization of data, and has been visited over 38 million times [15]. This portal is run and maintained by the CDoIT.
In the introduction it was discussed how the
concept of GaaP allows us to understand the
relationship between OGD and co-creation.
Interestingly, this was also pointed out by Brett
Goldstein, former Chief Data Officer of Chicago,
where he stated that the idea of GaaP is a core part of
the success of the Chicago OGD portal as they are able
to “be the platform… and support the innovative ideas
cultivated by various communities” [16].
Bloomberg Philanthropies organized a
competition that would “inspire American cities to
generate innovative ideas that solve major challenges
and improve city life and that ultimately can be
shared with other cities to improve the wellbeing of
the nation” [17]. The City of Chicago entered this
competition and was awarded a grant for one million
USD to develop a new “SmartData” platform that
would allow government agencies easier access to predictive analytics tools; one condition of this grant
was that all software developed would be open source
[18]. Specifically, Chicago was selected to “create an
open-source platform to harness the power of data to
understand underlying trends and better direct limited
resources” [17]. This grant provided the CDoIT
funding to begin to undertake more ambitious OGD-
driven predictive analytics models.
One of the initial models that emerged from the
CDoIT was a model that could be used to predict when
and where outbreaks of rodents would occur so that
these outbreaks could be prematurely stopped [19].
This model was developed in cooperation with
Carnegie Mellon University’s Event and Pattern
Detection Laboratory and then was put into production
by Chicago’s Department of Streets and Sanitation
[19]. The model was well known throughout the City
government agencies, and was cited by some of the
interviewees as being one reason they were willing to
participate in and allow Chicago’s predictive food
analytics model to be developed.
The final contextual factor to present is the legal
requirements for inspecting establishments that serve
food. The CDoPH’s Food Protection Division is
required to perform inspections of establishments that serve food; this authority comes from the City of
Chicago’s Food Service Sanitation Municipal Code
and the Rules and Regulations promulgated by the
Chicago Board of Health [20]. At the time of writing
this article there were around 16,000 food
establishments in the City of Chicago (there were over 15,000 when the predictive food analytics model was initially developed) [20]. These establishments have different requirements, but, generally, food establishments within the city must be inspected twice a year to make sure that they are in compliance with the aforementioned regulations on food safety. When
inspections are carried out, one of the most important
findings is whether a critical violation is taking place.
Critical violations are those that have a high chance of
starting or spreading foodborne illnesses; the presence
of a critical violation leads to a failure, the violation
must then be fixed and the establishment re-inspected
and reapproved by the CDoPH [12]. The results of
these food inspections are also freely available on the
Chicago OGD portal.
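These open inspection records are the raw material for the model described below. As a minimal sketch of how such records might be filtered for critical violations, the following uses only the standard library; the column names and sample rows are illustrative assumptions, not the portal's actual schema:

```python
import csv
import io

# Illustrative sample in the spirit of the open food-inspection records;
# the column names and values here are assumptions, not the portal's schema.
SAMPLE = """inspection_id,facility,results,violation_level
101,Diner A,Fail,critical
102,Cafe B,Pass,none
103,Grill C,Fail,critical
104,Bakery D,Pass w/ Conditions,serious
"""

def count_critical(rows):
    """Count inspections flagged with a critical violation."""
    return sum(1 for r in rows if r["violation_level"] == "critical")

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
print(count_critical(rows))  # 2
```

In practice the same pattern would be applied to the full dataset downloaded from the portal rather than an inline string.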
3.2. Initial Development
The City of Chicago and the CDoIT wanted to
continue to expand their use of OGD and predictive analytics, thereby increasing the efficiency of some agencies’ day-to-day operations and providing increased public value. In order to do this, an initial list of
potential use cases where OGD-driven predictive
analytics capabilities could be used was drafted in
2014. Though the City of Chicago was interested in
OGD and predictive analytics, the CDoIT still lacked
a full data science team, thus outside help was needed.
In order to find this outside help, the CDoIT reached
out to a local organization, the CCA. The CCA is an
organization that aims to improve the quality of life in
Chicago by bringing together stakeholders from
public, private, and non-profit sectors to work on new
and innovative solutions for problems facing the city;
the CCA roughly provides “fifteen million USD in pro
bono services every year” [21]. The CCA had relations
with the data science team at Allstate Insurance and
approached them with the list of potential use cases
from the City of Chicago. The members of the Allstate
Data Science Team had experience with the Chicago
OGD portal and knew that there were large amounts of
data on food inspections within the city. The members
also had a direct interest in the topic of food safety as
they lived in the City of Chicago and thought it would
be interesting to try to improve the food safety of the
food serving establishments within the City. Thus,
they got back to the CCA and said that they would be willing and interested to work on developing a predictive analytics model for the CDoPH’s food inspections.
The policy that allowed for members of Allstate’s
Data Science Team to participate in this pro bono
project is quite interesting; the company’s “bluelight”
policy allowed employees to spend up to 10% of their
working time on pro-bono data science projects [22].
The logic behind this policy is that working on non-
typical or new projects will boost their employees’ skillsets and expose them to new tools and technologies, ultimately benefiting Allstate, the employees, and the partner(s) receiving their services.
During 2014, when the initial development began, the City of Chicago had over 15,000 food establishments and 36 food inspectors to inspect them. These establishments generally needed to be inspected twice a year, though some had to be inspected less often and some more. This roughly translates to about one inspector for every 470 food establishments; due to the high workload and lack of optimization, many critical food violations were going unnoticed or were being detected too late to stop or prevent outbreaks from starting or spreading [12]. Though there was a logic to
how food inspectors were assigned, it was believed by
the CDoIT that this process could be improved and
made more effective through the adoption of an OGD-
driven predictive analytics model. Though originally
hesitant, the head of the CDoPH was willing to test a
newly developed model as she had heard about the
success of a previous model; this was the model
mentioned previously in section 3.1.
To begin, the CCA organized meetings between the relevant parties (Allstate, CDoIT, and CDoPH) and acted as a project manager. At these meetings, the
business requirements of the CDoPH were discussed
and presented to the developers and data scientists.
Allstate’s team ended up using multiple variables from
Chicago’s OGD portal and constructed a General
Linear Model that would allow the highest risk food
establishments to be inspected first. In essence, the
model works by predicting what food establishments
are the most likely to have a critical food violation, and
then assigns these establishments to be inspected first;
previously these assignments had been made following a business- and risk-based approach, but the process still seemed somewhat random and inefficient.
However, due to a misunderstanding of one variable,
the first iteration of the model ended up being incorrect
and needed to be adjusted.
This failure ended up being a major learning point
for all involved stakeholders and emphasized the
importance of communication early on as well as the
importance of continuously communicating
throughout development.
3.3. Second Iteration
Though the first implementation was not
successful, it was improved upon and the
misunderstanding was addressed by the CDoIT and
CDoPH. This second attempt at the predictive
analytics model is open source and the code is freely
available on GitHub.
The model was tested over a two-month period
(September and October 2014), during this time
assignments were given out following normal
operations, but the model was running simultaneously
to see how it would compare to normal operations.
After the testing had been completed and validated, the
model was made operational by February of 2015.
When looking at the model, many different
predictive features were tested, but currently the
following nine predictors are utilized by the model:
1. “Establishments that had previous critical or
serious violations.
2. Three-day average high temperature.
3. Nearby garbage and sanitation complaints.
4. The type of facility being inspected.
5. Nearby burglaries.
6. Whether the establishment has a tobacco
license or has an incidental alcohol
consumption license.
7. Length of time since last inspection.
8. The length of time the establishment has been operating.
9. And the assigned Inspector.” [12].
The new code also utilized different predictive
classification models, such as random forest, to try to
get better results.
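The published report lists the predictors but not a reusable scoring routine. As a rough illustration of how a fitted model of this kind turns feature values into an inspection ordering, the sketch below applies entirely hypothetical coefficients through a logistic link and sorts establishments by predicted risk; the coefficient values, feature names, and establishments are invented for illustration and do not reproduce the real model:

```python
import math

# Hypothetical coefficients for a few of the predictors named above;
# the real model's fitted weights are not published in this form.
COEFS = {
    "prior_critical": 1.2,          # previous critical/serious violations
    "days_since_inspection": 0.004, # length of time since last inspection
    "sanitation_complaints": 0.3,   # nearby garbage/sanitation complaints
}
INTERCEPT = -3.0

def risk(features):
    """Logistic-link risk score: a probability-like value in (0, 1)."""
    z = INTERCEPT + sum(COEFS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

establishments = [
    ("Diner A", {"prior_critical": 1, "days_since_inspection": 400,
                 "sanitation_complaints": 2}),
    ("Cafe B",  {"prior_critical": 0, "days_since_inspection": 120,
                 "sanitation_complaints": 0}),
    ("Grill C", {"prior_critical": 2, "days_since_inspection": 300,
                 "sanitation_complaints": 5}),
]

# Establishments with the highest predicted risk are inspected first.
ranked = sorted(establishments, key=lambda e: risk(e[1]), reverse=True)
print([name for name, _ in ranked])  # ['Grill C', 'Diner A', 'Cafe B']
```

The output ordering, not the raw scores, is what the inspection scheduler consumes: the highest-risk establishments move to the front of the queue.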
The predictive food analytics model uses the
aforementioned predictors to classify which food
establishments are the most likely to have a critical
food violation. The individual in charge of assigning
food inspectors to establishments accesses the
predictions through a Shiny Application (Shiny is a
package in R that allows for easy development of web
pages and user interfaces). Food inspectors are then
assigned to establishments that have been predicted or
put forth by the model. The CDoIT GitHub page for the predictive food analytics model provides Figure 2 to better demonstrate how the model works. In
essence, food establishments with the highest risk of
critical violations are inspected first.
Figure 2: Optimized food inspection process [12]
3.4. Impact and Future
In 2014, the model was trained and evaluated over a
two-month period. The results from the model were
compared to the results of the actual food inspections
occurring at the same time. This comparison allowed
a clear advantage to be seen if the data-driven model
had been used instead of the traditional approach. The
model had allowed for critical food violations, on
average, to be found 7.5 days earlier [12]. Thus, this
would allow for potential food borne illness outbreaks
to be prevented, or have their severity limited, as the
violations responsible were being caught and
addressed earlier. However, the improvement of the
process did not stop here. As the second attempt was
open source, citizens and outside stakeholders have
also been able to get involved. The best example of this is as follows: one individual made a pull request on Feb 3, 2017 demonstrating that an XGBoost model was finding critical violations, on average, 7.79 days earlier; this represented an improvement on the model currently in use. Four days later, the Chief Data Officer of Chicago commented on it, initiated a code review, and stated, “If the results hold, we will incorporate your contributions to the model that drives food inspections in the city. Thank you and we will be in touch soon” [24]. This provides a clear
demonstration of how outside stakeholders are able to
play a role in the co-creation of OGD-driven public
services. The model is still in use by the CDoPH today
and it is still actively maintained by the CDoIT, and
stakeholders are still able to suggest improvements to
the model through GitHub.
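The headline metric used in these comparisons, mean days earlier, can be reproduced in outline: for each critical violation, compare the date the data-driven ordering would have found it against the date the business-as-usual schedule did. The dates below are invented for illustration; they are not the evaluation data:

```python
from datetime import date

# Invented example dates: when the model-driven schedule would have
# inspected each violating establishment vs. when the violation was
# actually found under business-as-usual scheduling.
pairs = [
    (date(2014, 9, 3), date(2014, 9, 10)),
    (date(2014, 9, 10), date(2014, 9, 18)),
    (date(2014, 10, 1), date(2014, 10, 8)),
    (date(2014, 10, 5), date(2014, 10, 13)),
]

def mean_days_earlier(pairs):
    """Average lead time of model-driven detection over the status quo."""
    return sum((actual - model).days for model, actual in pairs) / len(pairs)

print(mean_days_earlier(pairs))  # 7.5
```

A larger mean indicates a bigger head start on containing potential outbreaks; this is the sense in which the XGBoost pull request's 7.79 days improved on the 7.5 days of the production model.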
One result of the code being open is that it has been
possible for other stakeholders to take, adapt, and change the code for their own uses. The best-known example of this is that of Montgomery County, Maryland. Montgomery County hired Open Data Nation, a private sector data analytics company, to take Chicago’s code and adapt it to Montgomery County’s needs. However, this trial has been stopped
due to political reasons. This is an interesting fact, and
the reasons why the model is able to work in Chicago,
but not other areas, will be discussed in the next
section of this paper.
This case is truly interesting, as it represents one of the only examples the authors found that seems to be a co-created OGD-driven public service. It is
also a service that is able to continue to provide value
moving into the future. As more data is generated, the
model is likely to become more accurate in predicting
critical violations. It will be interesting to follow up on
this case in the future to see how the co-creation of the
service progressed as well as how the accuracy
improves over time.
4. Discussion
This section will discuss what seems to be relevant, and what does not, with regard to the current understanding
of co-created OGD-driven public services. The
discussion aims to reflect back on the conceptual
model by comparing it to what emerged in the case.
Furthermore, propositions for co-created OGD-driven
public services will be put forth.
While conducting the interviews for this case,
stakeholders highlighted a multitude of factors that
allowed for the co-created OGD-driven public service
to be implemented; it was also stated by many
interviewees that the process seemed to be a “perfect
storm”. This “perfect storm” consisted of having
external funding, motivated stakeholders, innovative
leaders, proper communication channels, an existing
OGD portal, and developing the model in an agile way
that accepted the fact that mistakes would be made
throughout development.
External Funding
The City of Chicago had received a grant from
Bloomberg Philanthropies to develop a “SmartData”
portal. It was confirmed by the Chief Data Officer,
Tom Schenk, that external funding had allowed the
CDoIT to pursue actively more projects, such as the
project that this case focuses on. Though external
funding does appear to be an active driver for the co-
creation of OGD-driven public services, it should be
explored further to see what effect it has when a
government agency is not the main driving force
behind the services.
Motivated Stakeholders
The model was co-created by numerous different
stakeholders representing different sectors; Table 1
presents all stakeholders and their role in the project.
Stakeholder | Role in Project
CDoIT | Co-creator of model; maintains open government data portal
CDoPH | Service user
Allstate Insurance Data Science Group | Co-creator of model
CCA | Initial project manager; organized meetings between Allstate and CDoIT
Citizens | Model improvements and pull requests on GitHub
Table 1: Chicago predictive food analytics model co-creators and their roles. Source: Authors.
The model was developed in close cooperation
between the CDoIT, Allstate, and CDoPH; this
interaction was brokered by the CCA. Interestingly,
one interviewee stated that the role of a mediating
stakeholder, the CCA, seemed to be quite important.
In the interviewee’s experience, public and private
sector organizations sometimes clash due to
organizational differences, but the CCA was able to
work as a mediator and help build a bridge and develop
the relations between private and public sector. The
final group of stakeholders is that of the citizens. As
there has been citizen input that improved the
efficiency of the current model, it does appear that
there is interest and motivation to play a role in the co-
creation of OGD-driven public services.
Innovative Leaders
While conducting interviews, two names were
always stated as playing a critical role in the success
of the project; Tom Schenk (Chief Data Officer of
Chicago) and Gerrin Butler (Director of Food
Protection for the City of Chicago). Tom was said to
be the main driving force behind the model and had it
in mind for the code to be open source since the idea
was conceived. Gerrin was the actor who agreed to go
ahead with Tom’s plan for data-driven food analytics.
Gerrin did not initially understand how a data analytics model would work or how it would improve current operations, but was willing to try and played an active
role throughout. Without the work and willingness of
leaders to push for and try new things, this case would
not have been possible.
Proper Communication Channels
There were two iterations of development for the predictive model. The first one failed due to a miscommunication between the CDoPH and Allstate about how the food inspection process worked. This was noted, and in the second iteration there was a strong emphasis on appropriate communication between parties so that all could understand one another. One interesting aspect of this communication was how technical and non-technical requirements and terminologies were understood and translated by the different parties involved. On the CDoPH side, a list or annex of technical terms was developed so that technical conversations could be followed. On the development side, the requirements were asked for multiple times, and a member of the CDoPH team who had experience in data analytics was able to effectively translate the current process into one more understandable for the data analysts working on the model.
Existing OGD Portal
Chicago has had an OGD portal since 2010, but it was greatly improved and made a legal requirement in 2012. Multiple interviewees stated that the OGD portal allowed them to come up with the idea for the new co-created OGD-driven public service, and that there were no noticeable issues with data quality. It is also important to note that the majority of the open data sets used in the development of the model were freely accessible to all on the OGD portal. Thus, the portal both inspired the new service and enabled its creation through the exploitation of high-quality, easily exploitable OGD sources.
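To make the discussion of the portal concrete, the sketch below shows how a dataset on a Socrata-based portal such as Chicago’s can be queried programmatically. This is purely illustrative: the dataset identifier (`4ijn-s7e5`) and the field name `results` are assumptions based on the public food inspections dataset on data.cityofchicago.org and should be verified before use.

```python
from urllib.parse import urlencode

def soda_query_url(domain, dataset_id, where=None, limit=1000):
    """Build a Socrata SODA API query URL for a dataset on an open data portal."""
    params = {"$limit": limit}
    if where:
        params["$where"] = where  # SoQL filter expression
    return f"https://{domain}/resource/{dataset_id}.json?{urlencode(params)}"

# Food inspections dataset (identifier assumed: 4ijn-s7e5)
url = soda_query_url(
    "data.cityofchicago.org",
    "4ijn-s7e5",
    where="results = 'Fail'",  # field name assumed
    limit=50,
)
print(url)
# Fetching the rows would then be, e.g.:
#   import urllib.request, json
#   rows = json.load(urllib.request.urlopen(url))
```

Queries of this kind are how any stakeholder, public or private, can begin exploiting the same datasets that underpinned the model.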
Agile Development
Though the service did not follow a traditional agile development methodology, some aspects of agile development were present: the service was developed and tested continuously, improvements were made and tested throughout development, and when mistakes were made, they were learned from and used to improve the service quickly.
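At its core, the service this section has been discussing is a risk-scoring model: it combines OGD-derived features for each establishment into a predicted risk of a critical violation and reorders the inspection queue so the riskiest establishments are visited first. The sketch below is purely illustrative, with synthetic data and invented feature names and weights; the city’s actual model is an open-source R implementation whose weights are learned from historical inspection outcomes.

```python
import math

# Hypothetical coefficients, for illustration only; the real model
# learns its weights from historical inspection data.
WEIGHTS = {
    "prior_critical_violations": 1.2,
    "days_since_last_inspection": 0.004,
    "nearby_sanitation_complaints": 0.3,
}
BIAS = -3.0

def violation_risk(features):
    """Logistic risk score in (0, 1) from an establishment's feature dict."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def prioritize(establishments):
    """Reorder the inspection queue by descending predicted risk."""
    return sorted(establishments,
                  key=lambda e: violation_risk(e["features"]),
                  reverse=True)

queue = prioritize([
    {"name": "Cafe A", "features": {"prior_critical_violations": 0,
                                    "days_since_last_inspection": 90}},
    {"name": "Diner B", "features": {"prior_critical_violations": 3,
                                     "days_since_last_inspection": 400,
                                     "nearby_sanitation_complaints": 5}},
])
print([e["name"] for e in queue])  # riskiest establishment first
```

The public value of the Chicago model lies in exactly this reordering: inspector capacity stays the same, but critical violations are found earlier.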
The Conceptual Model
Based on the aforementioned factors, there does appear to be room for improvement in the model proposed in section 2.1. The first thing to address is that the model for co-created OGD-driven public services appears to vary depending on the sector of the stakeholder(s) initiating the service. In the case at hand, external funding was one of the major drivers, whereas this may not necessarily hold if a citizen or a company takes the lead in developing the service; this should be explored in further research. The role and importance of communication and networks is not currently highlighted in the framework, but this case suggests that communication and understanding between the different stakeholders has a large effect on how well the co-creation of an OGD-driven public service goes. However, the case also seems to validate some aspects of the model. The case does appear to follow the co-initiation, co-design, co-implementation, and co-evaluation cycle. The model also proposed that OGD might act as a catalyst to drive the co-creation of OGD-driven public services; this too seems to be supported by the case, as the Allstate team specifically chose the subject because OGD was available, easily accessible, exploitable, and of high quality. Based on these reflections, the following propositions are put forward:
1. In an environment where open government
data and tools for data analytics, exploitation
and co-creation are made widely available, any
actor can take the lead and initiate or co-create
data-driven services that create public value.
2. When OGD is released and maintained, it
allows the Government to act as a platform.
This platform allows OGD sets to be exploited
and leads to increased levels and occurrences
of co-created OGD-driven public services.
3. A “perfect storm” consisting of sufficient
resources, innovative leaders, motivated
stakeholders, and access to OGD allows for
effective execution of co-created OGD-driven
public services.
4. Co-created OGD-driven public services appear
to have the potential to drive increased levels
of efficiency in traditionally slow or outdated
government processes.
5. Government as a Platform appears to be a
bridge that allows for the concepts of co-
creation and OGD to be merged together. If the
government makes data available, and this data
is used to create a new public service, then at a
minimal level there will always be co-creation
between the government and the one exploiting
the data for the OGD-driven public services.
From the case, six different factors were highlighted that seem to play an important role in the co-creation of OGD-driven public services. After presenting these factors, they were used to reflect on our current understanding of co-created OGD-driven public services and the model provided in section 2.1. From this reflection, five propositions were put forward that deal with how the co-creation of OGD-driven public services occurs, how the idea of GaaP leads to the co-creation of OGD-driven public services, and what benefits a co-created OGD-driven public service may have.
5. Conclusion
The aim of this paper was to present the case of
Chicago’s predictive food analytics model so that new
insights into the concept of co-created OGD-driven
public services could potentially emerge. In section 3.1, the context surrounding the case was presented, specifically the role that external funding, a functioning OGD portal, and previous experience with OGD-based predictive analytics played. These three factors seem to have been instrumental in laying the foundation for the co-creation of OGD-driven public services in the city of Chicago. Chicago appears to have a ‘platform’ based on its OGD portal, and this government platform allows for the exploitation and co-creation of new OGD-driven public services.
On the private sector side, an interesting policy
was discovered. Allstate Insurance’s “bluelight”
policy allowed their employees to participate in pro
bono data science work, thus providing the
opportunity for their staff to engage in co-creation
with the City of Chicago. The role of Allstate also
seems to demonstrate that there is interest from those
with experience in data science to participate in pro
bono work and in the co-creation of new OGD-driven
public services.
Section 4 provided a discussion on the findings
from the case. Firstly, six factors were outlined as
playing a key role in allowing the co-creation of an
OGD-driven public service to take place: external
funding, motivated stakeholders, innovative leaders,
proper communication channels, an existing OGD
portal, and agile development practices. These factors
were then used to reflect back on one proposed
conceptual model for how the process of co-created
OGD-driven public services is understood. These
reflections allow for potential improvements to the conceptual model, but they also provide some preliminary validation of it. It
does appear that the idea of co-created OGD-driven
public services has merit and does exist in the real
world. The way in which the service was developed in
Chicago also seems to match the four stages that were
proposed in the conceptual model. The final part of the
discussion was the presentation of some initial
propositions on co-created OGD-driven public
services. These propositions may be briefly summarized as follows: the availability of OGD and tools for data analytics has the potential to enable the co-creation of OGD-driven public services; governments releasing OGD act as a platform, and from this platform the co-creation of new and innovative OGD-driven public services may take place; and the idea of GaaP allows the topics of co-creation and OGD to be merged together.
Though the case presented in this paper represents an empirical example of a co-created OGD-driven public service, it represents only one possible combination of stakeholder roles, as a governmental agency still played a major role. As the notion of
a co-created OGD-driven public service implies that
the government need not play an active role in the
development, any examples of co-created OGD-
driven public services where a non-traditional
stakeholder is playing a leading role could provide
valuable insight into the formulation of the
understanding of co-created OGD-driven public
services. Secondly, this case study also only looks at one type of co-created OGD-driven public service (a data analytics model); other types of services may exist (such as web or mobile applications built on OGD), and research should be further conducted on the different types of co-created OGD-driven public services.
The exploratory case study that was conducted for
this paper provides an initial empirical case on a co-
created OGD-driven public service and aims to
advance and encourage research into the topic of co-
creation of OGD-driven public services. The case
demonstrates that there is a link between co-creation
and OGD, and that this link may enable or drive a
change in the current understanding of public services.
Furthermore, the case also demonstrates that there is a
relationship between GaaP and OGD and that this
relationship is likely to encourage or enable co-
creation. This paper provides an initial stepping-stone
on the topic of co-created OGD-driven public services
and, as such, proposes that future research into the
topic is needed. Potential avenues of future research
include solidifying the definition of a co-created
OGD-driven public service, empirical work focusing
on different types of co-created OGD-driven public
services, studies that aim to understand the role that
different stakeholders as the leading service developer
have on the co-creation process of OGD-driven public
services, and also how the idea of GaaP influences our
understanding of co-created OGD-driven public
services and the bridge between OGD and co-creation.
Acknowledgements. This work was supported by the
European Commission (OpenGovIntelligence H2020 grant
693849) and the Estonian Research Council (PUT773).
... One way that OGD may provide public value is by exploiting it and creating new and innovative services on top of it (Foulonneau et al. 2014a;Toots et al. 2017a;Khayyat and Bannister 2017;Foulonneau et al. 2014b). Due to widespread availability of OGD and data analytics tools/languages, such as R or Python, any stakeholder is able to begin to analyze OGD and/or build services that rely on or utilize OGD (Mcbride et al. 2018;Foulonneau et al. 2014a). This has drastic implications for the public service delivery process as, now, a stakeholder can find their own answers or create value on their own, rather than having to rely on a government agency to provide the answer or build a service that may or may not solve the stakeholder's initial problem, for an example of this, see (McBride et al. 2019 Ries, 2011) Though the use of OGD in the creation of new public services is an interesting area of study, in order for this phenomenon to occur at a broader level, a framework for understanding and analyzing the process is needed. ...
... The co-creation of an OGD-driven public service should be thought of as taking place within a system (see Mcbride et al. 2018;Dawes et al. 2016). The system is made up of the different agents (such as public sector organizations, citizens, etc.) that take part in the process of co-creation and of different environmental factors that support or create impediments to the functioning of the system. ...
... While the national portal has a large amount of data, many datasets go unused and it could be argued that the level of public value it aimed to create has not yet manifested. Meanwhile, in Chicago, there is an active civic hacking scene and new public value creating innovative applications are being created on a seemingly constant basis (see Mcbride et al. 2018;Kassen 2013). One of the primary reasons for this is familiarity with the data (Schrock and Shaffer 2017) and the relevance of the data to those who are exploiting it (Mcbride et al. 2018;Kassen 2013). ...
This chapter aims to demonstrate and understand how open government data can generate public value by allowing any actor to co-create an open government data-driven public service. The chapter takes a holistic approach to understanding open government data-driven co-creation and follows a content-context-process approach for the framework development. The framework proposes a public service co-creation cycle based around the ideas of agile and lean development that should lead to increased usage of open government data. The co-creation cycle is made up of four parts: co-initiation, co-design, co-implementation, and co-evaluation. To test the propositions put forth by the framework, a multi-case study was conducted on five different pilot projects that aimed to use open government data in the co-creation of new public services. The pilots were conducted at different levels of government and across different public domains. The results of the study seem to support the propositions outlined by the framework, though it also emerged that the pilots that engaged in co-implementation had higher levels of user engagement and satisfaction with the service; this warrants future empirical research.
... Estudos apontam que há poucos esforços relatados na alavancagem do uso de dados sociais para a opinião inteligente do governo, na efetiva utilização de dados e informações advindos dos cidadãos, nas interações dinâmicas entre as partes interessadas e na influência e no desenvolvimento de políticas públicas (Bernardes, Andrade, Novais, & Lopes, 2017;McBride, Aavik, Kalvet, & Krimmer, 2018;Przeybilovicz, Cunha, Macaya, & Alburquerque, 2018). ...
... Capital Humano -desenvolver a capacidade analítica para que os servidores possam avançar para a tomada de decisão baseada em dados; contratar ou desenvolver cientistas de dados para atuar no governo; realizar pesquisas no campo da gestão do conhecimento, uma vez que a administração pública possui os dados, mas não faz uso suficientemente eficiente deles (Bojovic, Klipa, Secerov, & Senk, 2017;Malomo & Sena, 2017;Smith, 2008;Valle-Cruz & Sandoval-Almazan, 2018); Engajamento Social -instituir processos de cocriação entre governo e sociedade; implantar políticas de dados abertos e mecanismos de interação com o setor empresarial e outros atores sociais. Há também uma lacuna relacionada com esforços para a alavancagem de dados sociais, a geração de opinião inteligente no governo e o desenvolvimento da interação dinâmica entre as partes interessadas em novas políticas públicas (Algebri, Husin, Abdulhussin, & Yaakob, 2017;Bernardes et al., 2017;Calof, 2017;Hidayat & Kurniawan;2017;Kumar & Sharma, 2017;Li & Liao, 2018;McBride et al., 2018;Przeybilovicz et al., 2018). ...
Full-text available
Resumo Estudos recentes apontam que as barreiras para a transição e estruturação de um governo inteligente parecem menos tecnológicas e mais institucionais. Nesse intuito, este artigo fornece uma contribuição original ainda não abordada na literatura, com o objetivo de analisar as dimensões de inteligência na gestão pública sob a lente da teoria institucional e, por meio do debate teórico, desenvolver um modelo de institucionalização de inteligência na gestão pública. Para fins de validação das quatro categorias definidas segundo a análise teórica (estrutura organizacional, estrutura tecnológica, capital humano e engajamento social), com as respectivas dimensões de inteligência (uso de dados e informações externas; cultura organizacional para inteligência; uso efetivo de tecnologias [Big Data; Business Intelligence]; decisão com base em evidências; colaboração interdepartamental e interorganizacional; organização e unificação de base de dados; agilidade em governo; eficiência e efetividade da gestão; engajamento social; inovação, cocriação, inteligência coletiva), optou-se pela utilização da técnica de card sorting. Os resultados apontam para a importância da incorporação dos elementos da perspectiva institucional para a legitimação de inteligência no governo. Ainda, com base na análise da etapa de card sorting, os resultados demonstram concordância na classificação dos itens por construto proposto, apresentando-se como uma oportunidade futura do modelo a ser testado quantitativamente.
... Studies identify few efforts reported in leveraging the use of social data to subsidize an intelligent opinion of the government, in the effective use of data and information from citizens, in the dynamic interactions between stakeholders, and in the influence and development of public policies (Bernardes, Andrade, Novais, & Lopes, 2017;McBride, Aavik, Kalvet, & Krimmer, 2018;Przeybilovicz, Cunha, Macaya, & Alburquerque, 2018). ...
... Organizational Structure -redesigning the structure and considering the technical implications of transitioning to a smarter government, in which information is centralized through organizational and management mechanisms (Halaweh, 2018;Salvador & Ramió, 2020;Vieira & Alvaro, 2018;WeiWei & WeiDong, 2015); Technological Structure -analyzing the practices and real effects of data and information technology and how electronic platforms collaborate to develop and legitimate the activity of intelligence in governments Santos, 2018); Human Capital -developing analytical capacity so that employees can move toward data-based decision-making; hire or develop data scientists to serve in government; carry out research in the field of knowledge management since the government has the data but fails to use it efficiently (Bojovic, Klipa, Secerov, & Senk, 2017;Malomo & Sena, 2017;Smith, 2008;Valle-Cruz & Sandoval-Almazan, 2018); Social Engagement -refers to establishing co-creation processes gathering government authorities and civil society; implementing open data policies and mechanisms for interaction with the business sector and other social actors. There are insufficient efforts to leverage social data, generate smart opinion in government, and develop dynamic interaction among stakeholders in new public policies (Algebri, Husin, Abdulhussin, & Yaakob, 2017;Bernardes et al.., 2017;Calof, 2017;Hidayat & Kurniawan;2017;Kumar & Sharma, 2017;Li & Liao, 2018;McBride et al., 2018;Przeybilovicz et al., 2018). ...
Full-text available
Recent studies point out that the barriers to transition and structuring a smart government seem less technological and more institutional. Against this backdrop, this article provides an original contribution to the literature by analyzing the dimensions of intelligence in public management under the lens of institutional theory. Also, from the theoretical debate, the research develops a model of institutionalization of intelligence in public management. The card sorting technique was used to validate the four categories defined from the theoretical analysis (organizational structure, technological structure, human capital, and social engagement). These categories were defined considering the respective dimensions of intelligence: use of data and external information; organizational culture for intelligence; effective use of technologies (Big Data; Business Intelligence); evidence-based decision-making; inter-departmental and inter-organizational collaboration; database organization and unification; government agility; management efficiency and effectiveness; social engagement; innovation, co-creation, intelligence collective. The results point to the importance of incorporating elements from the institutional perspective to legitimize intelligence in government. Also, from the analysis of the card sorting stage, the results demonstrate agreement in classifying items by proposed construct, presenting itself as a future opportunity for the model to be quantitatively tested.
We need new governance solutions to help us improve public policies and services, solve complex societal problems, strengthen social communities and reinvigorate democracy. By changing how government engages with citizens and stakeholders, co-creation provides an attractive and feasible approach to governance that goes beyond the triptych of public bureaucracy, private markets and self-organized communities. Inspired by the successful use of co-creation for product and service design, this book outlines a broad vision of co-creation as a strategy of public governance. Through the construction of platforms and arenas to facilitate co-creation, this strategy can empower local communities, enhance broad-based participation, mobilize societal resources and spur public innovation while building ownership for bold solutions to pressing problems and challenges. The book details how to use co-creation to achieve goals. This exciting and innovative study combines theoretical argument with illustrative empirical examples, visionary thinking and practical recommendations.
Conference Paper
Full-text available
A atividade de inteligência na gestão pública tem ganho cada vez mais importância tanto no meio acadêmico como organizacional. Estudos recentes sobre a temática apontam que as barreiras para transição e estruturação de um governo inteligente parecem menos tecnológicas e mais institucionais. Neste intuito, este ensaio teórico fornece uma contribuição original ainda não abordada na literatura, ao analisar as dimensões de inteligência na gestão pública sob a lente da Teoria Institucional. A partir do debate teórico, buscou-se o desenvolvimento de um modelo de institucionalização da atividade de inteligência na gestão pública. Os resultados apontam para a importância da incorporação dos elementos da perspectiva institucional para a legitimação da atividade de inteligência em governo e fornece um modelo a ser testado e validado pelo meio acadêmico.
Full-text available
Contexto: nos últimos anos, estudos buscaram analisar de que forma os processos de inteligência e de gestão do conhecimento são compreendidos e aplicados no contexto da gestão pública, ambiente em que esses processos aparecem como um ponto a ser explorado para potencializar a qualidade decisória. Objetivo: analisar como os gestores públicos aplicam inteligência e gestão do conhecimento visando a uma maior qualidade decisória. Método: a partir de protocolo de pesquisa definido e validado, foram realizadas entrevistas com dezessete gestores públicos do sul do Brasil. Para a análise, foi aplicada a técnica de análise comparativa qualitativa utilizando conjuntos fuzzy para identificar caminhos efetivos para tomada de decisão em Governo. Resultados: os resultados indicam a importância da efetiva gestão de dados, informações e conhecimentos para qualidade decisória de gestores públicos, demonstrando que a pouca qualidade decisória está relacionada à ausência ou à reduzida utilização de gestão do conhecimento e inteligência na gestão pública. Conclusão: além de analisar condições e propor caminhos para levar a uma maior qualidade na tomada de decisão dos gestores públicos, foi possível contribuir para a temática de gestão do conhecimento e inteligência na gestão pública, bem como beneficiar o governo com caminhos a serem consolidados e melhor explorados.
Purpose Recent technological advances have enabled consumers and citizens to contribute to organizational processes through co-production and co-creation in ways that challenge traditional co-production. However, the practices and capabilities for value co-creation are less understood, particularly in an increasingly networked social government ecosystem. The purpose of this research is to examine the enablement of new digital co-production practices in social media platforms (SMPs) and theorize SMP-enabled digital co-production vis-à-vis traditional co-production for public sector. Design/methodology/approach Primarily using principles of interpretivist approaches, a qualitative content analysis of communication practices (i.e. genres) observed within Australian government Facebook pages was carried out to examine the salient digital forms of co-production practices. Findings SMPs enable new practices in digital co-production for public sector (information dissemination, Q&A, feedback and co-creation), ranging from lower to higher intensity in terms of resource integration, scale of contributions, engagement and extent of relationship vis-à-vis traditional co-production. Research limitations/implications This research is bounded by its geographical emphasis on Australian Federal government. Hence, the results may not be readily transferable to other contexts. Practical implications Our framework offers an array of choices for digital co-production strategies to suit agency's focus and goals for engagement in the Facebook Pages. As agencies progress to reach higher intensity co-production, public engagement and impact increases. Originality/value The paper contributes to co-production in social government ecosystem by increasing the theoretical and practical understanding of new form of SMP-enabled digital co-production defined as “small-scale, repetitive, user-driven co-production that is flexible, durable, ad-hoc, and sporadic, where many hands make light work”. 
The proposed “co-production to co-creation” framework provides valuable guideline for enhancing public service provision via SMPs.
Full-text available
Context: in recent years, studies have sought to analyze how intelligence and knowledge management processes are understood and applied in the context of public management, environments in which processes appear as a point to be explored to enhance decision-making quality. Objective: to analyze how public managers apply intelligence and knowledge management aiming at a higher decision quality. Method: based on a defined and validated research protocol, interviews were conducted with seventeen public managers in southern Brazil. For the analysis, the qualitative comparative analysis technique using fuzzy sets was applied. Results: the results suggest the importance of effective data, information, and knowledge management for the decision-making quality of public managers, demonstrating that the absence of decision-making quality is directly related to the absence or little use of knowledge management and intelligence elements in the public management. Conclusion: in addition to analyzing conditions and proposing ways to lead to greater quality in decision making by public managers, it was possible to contribute to the theme of knowledge management and intelligence in public management, as well as to benefit the government with paths to be consolidated and better explored.
Cities are becoming smart environments with the use of information and communication technologies (ICT). Data from these technologies are stored by various devices spread throughout the city and are available in open data portals, which can be used to improve essential services such as public transport and fed into platforms for visualization and analyses. Human and urban mobility analyses demonstrate that understanding movement patterns can assist governments in city’s decision-making process, as well as improve life quality of citizens. Aiming to enable mobility analysis in different cities, this work presents MODAL platform. This platform replicates mobility analyses and algorithms on databases of different cities using data obtained from open data portals. We assess the platform with a case study performing analyses of the transportation displacement within three different cities using complex network metrics. The results demonstrated the public transportation system efficiency showing regions of Chicago, Dubai and Taichung well served and regions which are key points to the transportation city interconnecting various areas. Moreover, we could evaluate how improved the transportation system would be by adding new lines or new transport system. The analyses demonstrated the platform potential to be used as support decision system for governments, showing the possibility of applying open data to improve city services and facilitate the conduction of analyses on various cities.
Co-production and co-creation occur when citizens participate actively in delivering and designing the services they receive. They have come increasingly onto the agenda of policymakers as interest in citizen participation has soared more generally. Expectations are high: co-production is regarded as a possible solution to the public sector's decreased legitimacy and dwindling resources, because it taps more of society's capacities, and it is seen as part of a broader drive to reinvigorate voluntary participation and strengthen social cohesion in an increasingly fragmented and individualized society. "Co-Production and Co-Creation: Engaging Citizens in Public Services" offers a systematic and comprehensive theoretical and empirical examination of the concepts of co-production and co-creation and their application in practice. It presents the latest state of knowledge on the topic and will be of interest to advanced students, academics, reflective practitioners, and policymakers in the fields of public administration, business administration, economics, political science, public management, service management, sociology, and voluntary sector studies.
Governments are creating and maintaining increasing amounts of data, and, recently, releasing data as open government data. As the amount of data available increases, so too should the exploitation of this data. However, this potential currently seems to be unexploited. Since exploiting open government data has the potential to create new public value, the absence of this exploitation is something that should be explored. It is therefore timely to investigate how the potential of existing datasets could be unleashed to provide services that create public value. For this purpose, we conducted a literature study and an empirical survey of the relevant drivers, barriers and gaps. Based on the results, we propose a framework that addresses some of the key challenges and puts forward an agile co-production process to support effective data-driven service creation. The proposed framework incorporates elements from agile development, lean startups, co-creation, and open government data literature and aims to increase our understanding on how open government data may be able to drive public service co-creation.
Open data is being increasingly looked at as a major driver of public service innovation. Open access to datasets and advanced analytical tools are believed to generate valuable new knowledge that can be turned into data-driven services. At the same time, open data is also believed to spur open governance and enable the engagement of various stakeholders in the co-creation of services. Despite this appealing vision of open data-driven co-creation of public services, we are far from understanding how it can be realized in practice. We turned to 63 experts and practitioners in a survey covering six European countries and found a multitude of barriers that need to be overcome first. Luckily we also found some drivers. This paper provides some first insights into these drivers and barriers and proposes policy recommendations to foster a data-driven transformation of public service creation.
New technologies are changing our current understanding of public services. One example is the emerging concept, and exploitation, of Open Government Data (OGD). By releasing OGD, governments can act as a platform: anyone may use, exploit, and analyze government datasets to co-create, together with multiple stakeholders, new and innovative services that provide public value and empower communities. The aim of this thesis is to explore this new phenomenon and gain a better understanding of the process by which stakeholders use OGD to co-create new public services. An exploratory case study is conducted on an ongoing pilot project in Estonia that is co-creating a new OGD-based public service. The case suggests that for OGD-driven public service co-creation to occur effectively, a new understanding of the role of stakeholders is needed, and that when governments release OGD and act as a platform they inherently become involved in the co-creation of new public services, even if this is not the goal. The research also derives and presents a general architecture for a co-created OGD-driven public service web application.
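OGD-driven services of the kind described here typically pull their data straight from a city's open data portal. Chicago's portal, for instance, exposes datasets through the Socrata SODA API. As a hedged sketch (the dataset identifier and filter below are assumptions for illustration and should be verified against data.cityofchicago.org), a query URL can be assembled like this:

```python
from urllib.parse import urlencode
# from urllib.request import urlopen  # uncomment to actually fetch

# Assumed values for illustration: Chicago's Socrata portal host and a
# dataset identifier (verify against data.cityofchicago.org before use).
PORTAL = "https://data.cityofchicago.org/resource"
DATASET_ID = "4ijn-s7e5"  # assumption: food inspections dataset

def build_soda_url(dataset_id, **params):
    """Build a SODA API query URL; SODA parameters are $-prefixed."""
    query = urlencode({f"${k}": v for k, v in params.items()})
    return f"{PORTAL}/{dataset_id}.json?{query}"

url = build_soda_url(DATASET_ID, limit=5, where="results='Fail'")
print(url)
# A real client would then fetch and parse: json.load(urlopen(url))
```

Because the portal serves plain JSON over HTTP, the same endpoint can back both ad-hoc analyses and a co-created web application's data layer.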
This article presents a case study of the open data project in the Chicago area. The main purpose of the research is to explore the empowering potential of the open data phenomenon at the local level as a platform for promoting civic engagement projects, and to provide a framework for future research and hypothesis testing. Today the main challenge in realizing any e-government project is its traditional top-down administrative implementation, which proceeds with practically no input from civil society. The author argues that the open data concept, realized at the local level, may provide a real platform for promoting proactive civic engagement. By harnessing the collective wisdom of local communities, their knowledge and their visions of local challenges, governments could react to and meet citizens' needs more productively and cost-efficiently. Open data-driven projects focused on visualizing environmental issues, mapping utility management, evaluating political lobbying and social benefits, closing the digital divide, and similar goals are only some examples of this perspective. Such projects are perhaps harbingers of a new political reality in which interactions among citizens at the local level play a more important role than communication between civil society and government, owing to the empowering potential of the open data concept.
In this article, based on data collected through interviews and a workshop, the benefits and adoption barriers for open data have been derived. The results suggest that a conceptually simplistic view is often adopted with regard to open data, which automatically correlates the publicizing of data with use and benefits. Also, five “myths” concerning open data are presented, which place the expectations within a realistic perspective. Further, the recommendation is provided that such projects should take a user's view.
There is an increasing demand for opening data provided by public and private organisations. Various organisations have already started to publish their data, and there are potentially many benefits to gain. However, realising the intended positive effects and creating value from using open data on a large scale is easier said than done. Opening and using data encounters numerous impediments of both a social and a technical nature. Yet no overview of impediments is available from the perspective of the open data user. Socio-technical impediments to the use of open data were identified based on a literature overview, four workshops and six interviews. An analysis of these 118 impediments shows that open data policies pay scant attention to the user perspective, whereas users are the ones generating value from open data. The impediments that the open data process currently encounters were analysed and categorized into ten categories: 1) availability and access, 2) findability, 3) usability, 4) understandability, 5) quality, 6) linking and combining data, 7) comparability and compatibility, 8) metadata, 9) interaction with the data provider, and 10) opening and uploading. The impediments found in the literature differ from those found in the empirical research; the overview derived from both is therefore more comprehensive than what was previously available. This comprehensive overview of impediments can be used as a basis for improving the open data process and can be extended in further research, solving some impediments even as new ones arise over time.
One purpose of this essay is to attempt to isolate the theoretical structure implicit in the traditional metropolitan reform movement so that empirical research can be organized to examine the warrantability of the propositions contained therein; a second is to pose an alternative theoretical structure derived from the work of political economists.