Artificial Intelligence: A Tale of Social Responsibility
Cecilia Darnault,¹ Titouan Parcollet,² Mohamed Morchid²
¹Avignon (France)
²LIA, Avignon University (France)
cecilia.darnault@hotmail.fr
Abstract
In contrast to legislation, which struggles to develop, regulate and supervise the use of Artificial Intelligence (AI), civil society, gradually realizing the fundamental issues and perspectives raised by this new technology, is slowly starting to take responsibility and to mobilize. Social responsibility expresses itself through the emergence of new voluntary standards that could integrate the concept of social good into the use of AI. More precisely, this paper proposes to develop three axes of tools for social responsibility in AI: stakeholder awareness, the integration of ethical and technical standards to induce good behaviors, and incitement to a responsible AI.
Introduction
Novel technologies, and especially Artificial Intelligence (AI) paradigms, attract massive interest from researchers across different domains, and raise questions and concerns. Indeed, some worry "about the transformations and the possible destruction that could put in danger our world" (Ganascia 2017; Krishnan 2016), while "others, convinced of the inevitability of the upheavals to come, seek to influence the movement to make this future livable" (Ganascia 2017). In fact, AI "might be the most important transition of the next century, either ushering in an unprecedented era of wealth and progress or heralding disaster" (Wiblin 2017). The scientific community does not remain indifferent, and warns the users of AI-based systems against the risks related to AI models and the manner in which these models have been trained. For about ten years, experts have warned that "if research continues to advance without enough work going into the research problem of controlling such machines, catastrophic accidents are much more likely to occur" (Wiblin 2017). These concerns are shared by civil society (UNESCO 2018; Floridi et al. 2018), which has a certain mistrust of AI, but they are also difficult to develop and formalize due to the technical complexity of the technology.
Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
In spite of a few recent specialized applicable standards, such as those for the protection of personal data (Heisenberg 2005; Albrecht 2016) or in robotics (Schlenoff et al. 2012; Johnson and Noorman 2014), a general and global legal system controlling AI-related risks does not exist. Fortunately, nature abhors a vacuum, and solutions have been developed. At the edge of lawlessness and mandatory law, a whole range of more or less restrictive applicable standards is emerging to fill possible legal loopholes, often out of necessity, as a prior regulatory basis or to overcome political and economical issues at a national or international scale (Guzman and Meyer 2010; Pellet 2018). These private standards are not meant to replace the necessary traditional legal norms, but reflect a need for regulation. Due to the development of private initiatives, and with non-binding legal tools also known as soft law (Dupuy 1990; Cazala 2011) that are actively used in other legal domains, such as environmental law, civil society is mobilizing to guarantee the development of AI for social good. This mobilization reflects a form of empowerment of society in the prevention of AI-related risks. The term responsibility defines the principle of facing the consequences of our actions. However, responsibility is expressed in different ways, such as a coercive normative constraint, an economic mechanism, a moral imperative or a governance mechanism (Dahlsrud 2008; Costa et al. 2001). Consequently, a social accountability emerges and leads to the deployment of voluntary standards coming from civil society, to guarantee a suitable principle of social good in addition or as a precursor to the traditional normative framework.
This paper explores and defines how voluntary standards enable the use of AI for social good, introducing different tools that have been successfully employed jointly with legal regulations in other legal areas such as environmental law. The major forms of voluntary standards applicable to AI and ensuring its positive use according to the social good are therefore gradually introduced, starting with stakeholder awareness, followed by the voluntary application of private standards, and concluding with incitement to the development of AI for social good.
Awareness of social good for AI
The actors' empowerment and the integration of social good for AI start with the democratization of information related both to the risks of AI for social good and to the concepts of social good for AI.
Risk awareness of AI for social good. The fast development of AI leads to concrete risks to the social good in different aspects of our societies. Examples abound: in the economic field, with the strong impact of AI on robotics and the employment market (Manyika 2017); in the social area, with the unfortunate transfer of important human biases to the systems (e.g. racism, sexism) (Bolukbasi et al. 2016; Zhao et al. 2017; 2018), as sketched below; within the security domain, with concerns involving the relations of AI to the concept of democracy (Ferrara et al. 2016) or arms monitoring; and in the legal field, with multiple questions on the protection of personal data and privacy (Albrecht 2016; Tankard 2016) and on the problem of accountability of AI systems (Taddeo and Floridi 2018; Tual and Fagot 2018). Researchers have drawn up a non-exhaustive list that includes "labor displacement, inequality, an overall oligopolistic structure, totalitarianism, shifts and volatility in national power, strategic instability, and an AI race that sacrifice safety and other values" (Dafoe 2017). Raising the awareness of AI risks is of crucial interest to ensure its implementation with respect to the standards of social good.
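To make the bias-transfer risk concrete, the following minimal sketch, written in the spirit of the word-embedding analysis of Bolukbasi et al. (2016), shows how a gendered association can be measured in learned representations. The four-dimensional vectors are toy values invented purely for illustration; a real analysis would load embeddings from a trained model such as word2vec.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional embeddings (real ones have hundreds of dimensions).
emb = {
    "he":       np.array([ 1.0, 0.1, 0.0, 0.2]),
    "she":      np.array([-1.0, 0.1, 0.0, 0.2]),
    "engineer": np.array([ 0.7, 0.8, 0.1, 0.0]),
    "nurse":    np.array([-0.6, 0.7, 0.2, 0.0]),
}

# A crude "gender direction": the difference between a gendered word pair.
gender_direction = emb["he"] - emb["she"]

# Occupation words that project strongly onto this direction carry a gender
# association that the training corpus imprinted on the model.
for word in ("engineer", "nurse"):
    print(f"{word}: projection on gender direction = {cosine(emb[word], gender_direction):+.2f}")

On these toy values the probe prints a positive score for "engineer" and a negative one for "nurse", i.e. exactly the kind of stereotyped association that debiasing methods aim to remove.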
Without a normative framework, and in the same manner as the recent mobilization of numerous employees against a partnership involving their AI products and the army (Harwell 2018), some researchers become aware of these risks, take responsibility, and fight to establish safeguards for a responsible development of AI (Montréal 2018; Harwell 2018). A growing awareness is expressed in the economic sphere by the recent creation of ethical committees by private companies, such as the Advanced Technology External Advisory Council (Levin 2019) or the foundation of an ethical institute (Shead 2019) aiming at conducting independent research and increasing awareness by providing advice on the subject of AI for social good. Moreover, at the country level, some governments realize the lack of standards and the increasing risk of AI, and start to build national study committees. As an example, France and Canada have recently announced the joint creation of the International Panel on Artificial Intelligence (IPAI 2018). These mechanisms are expected to promote an approach to AI that respects ethics and the principles of sustainable development. Unfortunately, some of them already suffer from internal biases on the subject (Levin 2019), highlighting the complexity of the task. Concurrently with these private initiatives, numerous open events promoting risk awareness on AI for social good emerge, including well-known international conferences and workshops broadening the debate of stakeholders on this topic (e.g. the NeurIPS AI for Social Good Workshop, the Conference on AI, Ethics, and Society, and the International Conference on Artificial Intelligence and Law). All of these tools are essential to raise awareness of the risks of artificial intelligence. Despite reflecting a beginning of interest, additional efforts must be made to improve their impact.
Raising AI actors' awareness of social good. The well-known social science and management theory researchers I. Nonaka and H. Takeuchi have proposed an interesting approach, particularly suited to these issues, called the rugby metaphor. This idea highlights the richness of information, which can be cultural, moral, emotional or technical, as well as the importance of the processes by which the actors of a specific situation select and transfer information (Bellon 2002). This work demonstrates the importance of information and of its circulation among the various stakeholders but, most importantly, it shows the need to mobilize relevant and meaningful information at the right moment. Based on these hypotheses, the use of AI for social good necessarily involves both formal and informal education and training to allow a transfer of the relevant information and to raise the awareness of the different actors. In the same manner as computer science students are often introduced to economics and management sciences, AI actors could and should be initiated into the fundamentals of social good (e.g. basic rights, social sciences, risk management) so as to integrate them, whether consciously or unconsciously, into their research or production processes. Conversely, a dissemination of non-scientific or basic knowledge of AI methods to civil society is crucial and must be encouraged to simplify the integration of AI-related concepts and their impact on our societies. As an example, it is feasible to apply existing tools, including advertising campaigns or public seminars, to build a minimal but broad knowledge of AI. Then, as a result of Corporate Social Responsibility (CSR), some organizations have set up an entire legal culture that could be transferred to AI to develop the collective intelligence of their teams regarding AI. A very simple but powerful tool is to create and broadcast informative e-learning videos. Such videos represent one training tool among others that raises awareness by focusing on the actions and reflexes to adopt, especially for the identification of a potential risk.

Awareness allows artificial intelligence actors to become conscious of the impact of their actions and decisions on the development of risks related to new technologies. Once aware, they can act to limit these risks, facilitating the advancement of an artificial intelligence for social good. Although this paper proposes to study soft law techniques related to AI for social good, an effective and complete awareness of all actors in this field requires that our leaders and representatives also take an interest. In fact, soft law techniques must be complementary to the adoption of public policies for the supervision of AI. It is therefore necessary to bring the attention of politicians to the many risks of AI and the crucial interest of their management.
Private standardization of social good for AI
Once the different AI actors are well aware of the concept of social good, various instruments must be proposed to formalize its integration and to induce behaviors respectful of the idea of social good in the use of AI, such as the development of voluntary standards stemming from private initiatives.
Ethical standards for social good in AI. As stated by the Nobel laureate in Economics Joseph Stiglitz, "we are a global community, and like all communities, we must follow rules to live together. They must be clearly seen as fair and just. They must pay due attention to the poor as well as the powerful, and demonstrate a deep sense of honesty and social justice" (Ferry-Maccario et al. 2006), regardless of the considered domain. In this respect, AI constitutes a major scientific and technological breakthrough that brings important social benefits, but also raises critical ethical and social risks (Montréal 2018). These are ethical challenges that the community, in the absence of a legal framework in this area, has taken up. Indeed, among the tools of voluntary standardization to enhance the concept of social good within the AI domain, some ethical standards have recently emerged while others are still in development.
As an example, the Montréal Declaration for a Responsible Development of Artificial Intelligence (Montréal 2018) advocates a positive development and use of AI based on specific social good principles. Following this declaration, other institutions such as the Council of Europe are interested in extending these principles to a general ethical framework for a responsible and ethical use of AI with respect to Human Rights, the rule of law, and democracy. More precisely, the Council of Europe investigates the impact of AI in specific domains, including the medical field (Bioethics Committee) (Yuste et al. 2017), with a strategic action plan on the interactions of technologies and Human Rights in the biomedical context, or gender equality (Gender Equality Committee), with a funded project to prevent and reduce the risk of sexism induced by AI algorithms (Zhao et al. 2018; 2017; Bolukbasi et al. 2016). In fact, numerous areas with an emphasis on AI are investigated, including education, discrimination and cyber-security. Although the various ongoing projects on ethical standards constitute significant progress toward the supervision of AI for social good, the coordination and the practical application of these standards remain an open and crucial problem. Indeed, all these institutions could benefit from coordinating their ethical standards. More precisely, a global coordination would enable an international ethical framework for AI, and would make its application more effective. Furthermore, due to the abstract concepts driven by the idea of an ethical AI, an effective implementation of these texts requires complementing them with more concrete actions. As an example, we propose to conceive specific ethical "codes of conduct" for professionals, to be deployed in private companies or in public institutions. Ethics is of particular interest to the artificial intelligence community. Nevertheless, the research in this field mainly comes from Western societies. Thus, AI for international social good must envision common values, and its principles must come from around the world.
Technical standards for social good in AI. In the last decades, the concept of social responsibility, which relies on voluntary initiative, has been highly investigated in the field of corporate social responsibility for environmental protection and Human Rights (Hay, Stavins, and Vietor 2005; Idowu et al. 2013; Darnault 2018). Social responsibility answers a global need for references (e.g. standards, laws) enabling an institution or a company (e.g. one producing AI-related products) to fully integrate its economical, social and legal environments, alongside its different stakeholders, within its management. Standardization is a crucial element of soft law: it provides common standards that illustrate and harmonize the practices of social responsibility, and ensures a certain efficacy by proposing a collective solution to technical or organizational issues (Castka et al. 2004; Helfrich 2008). Private standardization based on soft law techniques offers the advantage of tailored voluntary commitments, but does not replace legal regulations.

Various national, regional or international organizations are at the origin of these voluntary standards, designed specifically for the audiences they target in order to build a common frame of reference for AI. In fact, in the same manner as the W3C has standardized the compatibility of certain web-based technologies, a few organizations are starting to embrace the concept of AI and its supervision for social good. As an example, the International Organization for Standardization (ISO) is currently working toward new strong standards for AI, in line with the United Nations (UN) Sustainable Development Goals (SDG). While recently published standards mostly consider information technologies (IT) and reference data architectures, numerous ongoing or upcoming projects focus on the use of AI, such as the standard ISO/IEC AWI 38507, which relates to the governance of IT and the governance impacts of the use of AI technologies (ISO 2019). In the same context, the French standardization association AFNOR actively works on important aspects of the use of AI, including ethical and social concerns, reliability, and the integration of the risks induced by systems relying on machine learning based solutions (AFNOR 2019). These voluntary standards are based on a consensus among all actors (economic actors and consumers, professionals and users) to clarify and harmonize practices and to define the quality level, security, compatibility and impact of products, services and practices. While they contribute to the establishment of minimum quality thresholds, the achievement of compatibility between organizations and the reduction of organizational variability, these standards are usually purely functional and must complement legal regulations that protect the common good.
Incitement to social good for AI
Building on the previous introduction of social good standards for the use of AI and of the behaviors they promote, it is now feasible and important to actively promote their development toward the community and to induce a generalization of social good in AI.
Contractual commitment to social good for AI. While most contractual relations are limited to the principal intentions of the parties and solely consider their main duties (Nurit-Pontier and Rousseau 2012), it is worth emphasizing that a contract not only establishes a commercial relationship but also makes it possible to anticipate foreseeable risks (Deharo 2011; Posner 2002). More precisely, it represents a legal instrument and a powerful communication tool that can be used to select contractual partners engaged in responsible processes, or to incite them to develop this aspect. It is therefore of crucial interest to integrate informational components into the contracting process, at the formation of the contract, to motivate and lead the partners to agree to the concept of social good for the use of AI, as has already been demonstrated for climate change (Hautereau-Boutonnet and Porchy-Simon 2019). For example, voluntary environmental standards have gradually shaped international trade rules by infiltrating contractual practice (Howard, Nash, and Ehrenfeld 2000; Delmas and Terlaak 2001; Johansson and Lidestav 2011). Thus, the use of the contractual tool, in particular the environmental clauses mentioned in ethical charters or stipulated in international contracts, formalizes the commitment of certain companies to the fight against environmental risks. This method used in environmental matters can be transposed, combined with the deployment of voluntary standards, to fight against the risks of AI. Then, in both public and private institutions, it is feasible to internally integrate the observance of ethical standards into employment contracts. In fact, among the sources of applicable labor law, it is necessary to distinguish external sources (e.g. international rules, laws, collective agreements) from internal sources (e.g. company collective agreements, regulations, usages). Thus, the employer can develop rules and standards of ethical behavior that are imposed on workers (James 2000). Externally, one can consider obtaining the adherence of institutional, commercial or financial partners to the defined ethical standards, to ensure the integration of technical standards enabling a responsible development of AI within the partner organization. In fact, in the same manner as employers can induce the ethical behavior of their employees by contract, an organization can influence the behavior of its partners through their contractual commercial relations. The freedom that frames contractual tools makes it possible to consider adding contractual clauses regulating the use of a research object or a product, so as to restrict the utilization and the resale of an AI technology in compliance with social good principles.
Ensure the use of AI for social good. Although the integration of standardization constitutes an important step toward a responsible use of AI for social good, standards are non-binding and there is no guarantee of their correct application within private or institutional organizations. As a consequence, it is essential to propose tools to ensure the conformity of the acts of an organization that specifically showcases its compliance with a responsible AI program, or at least to inform the public and the stakeholders about which organizations commit and which do not. In fact, the principle of certification is to give citizens (i.e. customers, users, administrators) an assurance of the quality of a service or a product. The certification brings evidence of, testifies to, and establishes the respect of a frame of reference. To this end, new control centers must be developed.

In particular, it is important to dissociate standardization, which is the process of conceiving and producing reference materials (e.g. standards), from certification, which denotes the conformity assessment that an entity obtains from a third-party body with respect to specific standards (Grenard 1996; Krzan et al. 2006); a minimal sketch of this distinction follows below. Certification bodies are part of the voluntary standard-setting instruments, since organizations choose to comply without any legal obligation. As an example, environmental protection often involves demonstrating, on the packaging of a product, a commitment to the fight against climate change, through a label, by highlighting the efforts made to limit CO2 emissions, or by showcasing involvement in a sustainable development program aiming at promoting recourse to local producers (Hautereau-Boutonnet and Porchy-Simon 2019). It is possible to follow the same approach with AI products, toward the use of AI for social good (Rolnick et al. 2019). However, it is necessary to recall that most certifications only provide a compliance guarantee with certain standards, including private standards. Thus, one must pay attention to the standards whose compliance a certification actually ensures. In addition, the major certification centers are also voluntary standardization centers. Therefore, they ensure compliance with the rules and standards they have themselves established, in accordance with the regulations in force, but do not guarantee behaviors established and controlled by the public authorities. Since the source of certified standards and the interests they defend are distinct, differentiation is more than necessary. While certifications must be used with caution, they represent an interesting tool to encourage those involved in AI to guide its development toward social good.
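To make the distinction between standard setting and conformity assessment concrete, the following is a purely illustrative sketch in which the body that writes a standard and the third-party body that certifies compliance with it are modeled as separate entities. All names and fields are hypothetical and not drawn from any existing certification scheme.

from dataclasses import dataclass

@dataclass
class Standard:
    # A reference document produced by a standardization body.
    identifier: str     # e.g. a standard number (hypothetical here)
    issuing_body: str   # who conceived and produced the standard
    binding: bool       # voluntary standards are non-binding

@dataclass
class Certificate:
    # A conformity assessment issued with respect to a specific standard.
    holder: str             # the organization whose compliance is assessed
    standard: Standard      # the frame of reference compliance is claimed against
    certifying_body: str    # the third-party assessor

    def is_third_party(self) -> bool:
        # A certificate only testifies to compliance if the assessor is
        # distinct from the holder (no self-certification) and, ideally,
        # from the body that wrote the standard being certified.
        return self.certifying_body not in (self.holder, self.standard.issuing_body)

ai_standard = Standard("AI-SG-001", "HypotheticalStandardsOrg", binding=False)
cert = Certificate("SomeAICompany", ai_standard, "IndependentAuditLab")
print(cert.is_third_party())   # True: assessed by an independent body
print(cert.standard.binding)   # False: compliance remains voluntary

The point of the sketch is the check in is_third_party: as noted above, major certification centers are often also standardization centers, so the source of a certified standard and the interests it defends deserve scrutiny.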
Finally, it is well known that voluntary standards and soft law techniques do not replace legal regulations. Nevertheless, they enable civil society to opt for an important and effective virtuous circle driving the development of AI toward social good, as illustrated in Figure 1.
Conclusion
In this paper, it has been shown that the lack of a legal system, despite the crucial need for frameworks and standards regarding AI, leads to the rise of a private standardization. This voluntary standardization enables, through the numerous and various tools introduced in this work (summarized in Figure 1), the gradual building of foundations for the future legal edifice. Raising the awareness of civil society and of the actors of the field to AI-related risks allows the establishment of voluntary ethical and technical standards. These standards lead to a change in the behavior of the actors that adhere to them, but also impact the behavior of their employees and partners. The result is a virtuous circle illustrating the role that everyone can play in the development and use of artificial intelligence for social good.
[Figure 1 depicts the virtuous circle in three stages: Awareness (Research, Education, Public Debate, Ethical Committees, Social Good Culture), Standardization (Ethical Standards, Technical Standards) and Incitement (Marketing, Partnerships, Certifications, Trade Relationships, Employment Contracts).]
Figure 1: Illustration of the virtuous model based on soft law techniques for the development of an AI for social good.
These soft law techniques can boost international normative activity by extending its scope, particularly in the context of emerging legal areas such as AI. Indeed, these tools have various advantages, including the flexibility and adaptability that facilitate their development and application. Based on soft law techniques, the different AI actors can directly act on their own behaviors, but also on their collaborators' actions, to encourage an AI for the social good. This strong interest in soft law is easily explained by its lightened nature on the procedural level and its capacity for extension. Nevertheless, the gap between the flexibility of this process and the power of action it provides represents a risk for the legal protection of civil society. It is therefore crucial to keep in mind that voluntary standards, despite their multiple benefits, must remain a temporary alternative or an accompaniment to the adoption of legally binding standards. Indeed, if they guarantee a form of social good in artificial intelligence, they do not guarantee the respect of the common good, which is protected by government standards. Further research is thus needed on the AI governance issue: the problem of devising global norms, policies, and institutions to better ensure the beneficial development and use of AI advances (Dafoe 2017).
References
AFNOR. 2019. AFNOR AI standards. https://norminfo.afnor.org/search?typeResult=normes-etude&commissionID=127690. Accessed: 2019-08-20.
Albrecht, J. P. 2016. How the GDPR will change the world. Eur. Data Prot. L. Rev. 2:287.
Bellon, B. 2002. Quelques fondements de l'intelligence économique. Revue d'économie industrielle 98(1):55–74.
Bolukbasi, T.; Chang, K.-W.; Zou, J. Y.; Saligrama, V.; and Kalai, A. T. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems, 4349–4357.
Castka, P.; Bamber, C. J.; Bamber, D. J.; and Sharp, J. M. 2004. Integrating corporate social responsibility (CSR) into ISO management systems: in search of a feasible CSR management system framework. The TQM Magazine 16(3):216–224.
Cazala, J. 2011. Le soft law international entre inspiration et aspiration. Revue interdisciplinaire d'études juridiques 66(1):41–84.
Costa, O.; Jabko, N.; Lequesne, C.; and Magnette, P. 2001. La diffusion des mécanismes de contrôle dans l'Union européenne: vers une nouvelle forme de démocratie? Revue française de science politique 51(6):859–866.
Dafoe, A. 2017. AI governance: A research agenda. Future of Humanity Institute, University of Oxford.
Dahlsrud, A. 2008. How corporate social responsibility is defined: an analysis of 37 definitions. Corporate Social Responsibility and Environmental Management 15(1):1–13.
Darnault, C. 2018. Les PME face au contentieux économique: essai de guide pratique. Ph.D. Dissertation, Aix-Marseille.
Deharo, G. 2011. Ingénierie contractuelle et performance de l'entreprise: perspective économique et dynamique de droit des contrats.
Delmas, M. A., and Terlaak, A. K. 2001. A framework for analyzing environmental voluntary agreements. California Management Review 43(3):44–63.
Dupuy, P.-M. 1990. Soft law and the international law of the environment. Mich. J. Int'l L. 12:420.
Ferrara, E.; Varol, O.; Davis, C.; Menczer, F.; and Flammini, A. 2016. The rise of social bots. Communications of the ACM 59(7):96–104.
Ferry-Maccario, N.; Kleinheisterkamp, J.; Lenglart, F.; and Stolowy, N. 2006. Gestion juridique de l'entreprise. Pearson Education France.
Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. 2018. AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and Machines 28(4):689–707.
Ganascia, J.-G. 2017. Le Mythe de la Singularité. Faut-il craindre l'intelligence artificielle? Le Seuil.
Grenard, A. 1996. Normalisation, certification: quelques éléments de définition. Revue d'économie industrielle 75(1):45–60.
Guzman, A. T., and Meyer, T. L. 2010. International soft law. Journal of Legal Analysis 2(1):171–225.
Harwell, D. 2018. Google to drop Pentagon AI contract. https://www.washingtonpost.com/news/the-switch/wp/2018/06/01/google-to-drop-pentagon-ai-contract-after-employees-called-it-the-business-of-war/. Accessed: 2019-08-20.
Hautereau-Boutonnet, M., and Porchy-Simon, S. 2019. Le changement climatique, quel rôle pour le droit privé? Dalloz.
Hay, B. L.; Stavins, R. N.; and Vietor, R. H. 2005. Environmental protection and the social responsibility of firms: perspectives from law, economics, and business. Resources for the Future.
Heisenberg, D. 2005. Negotiating privacy: The European Union, the United States, and personal data protection. Lynne Rienner Publishers, Boulder, CO.
Helfrich, V. 2008. La régulation des pratiques de RSE par les normes: le cas de la norme ISO 26000 sur la responsabilité sociale. 5e Congrès de l'ADERSE "Transversalité de la RSE: L'entreprise à l'aune de ses responsabilités vis-à-vis de l'homme, de l'environnement et du profit".
Howard, J.; Nash, J.; and Ehrenfeld, J. 2000. Standard or smokescreen? Implementation of a voluntary environmental code. California Management Review 42(2):63–82.
Idowu, S. O.; Capaldi, N.; Zu, L.; and Gupta, A. D. 2013. Encyclopedia of Corporate Social Responsibility, volume 21. Springer New York.
IPAI. 2018. Mandate for the International Panel on Artificial Intelligence. https://pm.gc.ca/en/news/backgrounders/2018/12/06/mandate-international-panel-artificial-intelligence. Accessed: 2019-08-20.
ISO. 2019. ISO AI standards. https://www.iso.org/committee/6794475/x/catalogue/p/0/u/1/w/0/d/0. Accessed: 2019-08-20.
James, H. S. 2000. Reinforcing ethical decision making through organizational structure. Journal of Business Ethics 28(1):43–58.
Johansson, J., and Lidestav, G. 2011. Can voluntary standards regulate forestry? Assessing the environmental impacts of forest certification in Sweden. Forest Policy and Economics 13(3):191–198.
Johnson, D. G., and Noorman, M. E. 2014. Responsibility practices in robotic warfare. Military Review 94(3):12.
Krishnan, A. 2016. Killer robots: legality and ethicality of autonomous weapons. Routledge.
Krzan, A.; Hemjinda, S.; Miertus, S.; Corti, A.; and Chiellini, E. 2006. Standardization and certification in the area of environmentally degradable plastics. Polymer Degradation and Stability 91(12):2819–2833.
Levin, S. 2019. Google AI ethics council. https://www.theguardian.com/technology/2019/apr/04/google-ai-ethics-council-backlash. Accessed: 2019-08-20.
Manyika, J. 2017. A future that works: AI, automation, employment and productivity. McKinsey Global Institute Research, Tech. Rep.
Montréal. 2018. Montreal Declaration for a Responsible Development of Artificial Intelligence. https://www.montrealdeclaration-responsibleai.com.
Nurit-Pontier, L., and Rousseau, S. 2012. Risques d'entreprise: quelle stratégie juridique? L.G.D.J.
Pellet, A. 2018. Les raisons du développement du soft law en droit international: choix ou nécessité?
Posner, E. A. 2002. Economic analysis of contract law after three decades: Success or failure. Yale LJ 112:829.
Rolnick, D.; Donti, P. L.; Kaack, L. H.; Kochanski, K.; Lacoste, A.; Sankaran, K.; Ross, A. S.; Milojevic-Dupont, N.; Jaques, N.; Waldman-Brown, A.; et al. 2019. Tackling climate change with machine learning. arXiv preprint arXiv:1906.05433.
Schlenoff, C.; Prestes, E.; Madhavan, R.; Goncalves, P.; Li, H.; Balakirsky, S.; Kramer, T.; and Miguelanez, E. 2012. An IEEE standard ontology for robotics and automation. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1337–1342. IEEE.
Shead, S. 2019. Facebook AI ethics institute. https://www.forbes.com/sites/samshead/2019/01/20/facebook-backs-university-ai-ethics-institute-with-7-5-million/#5e087d8e1508. Accessed: 2019-08-20.
Taddeo, M., and Floridi, L. 2018. How AI can be a force for good. Science 361(6404):751–752.
Tankard, C. 2016. What the GDPR means for businesses. Network Security 2016(6):5–8.
Tual, M., and Fagot, V. 2018. Intelligence artificielle: ce qu'il faut retenir du rapport de Cédric Villani. Le Monde.
UNESCO. 2018. Intelligence artificielle: promesses et menaces. Courrier de l'UNESCO (3):70.
Wiblin, R. 2017. Positively shaping the development of artificial intelligence. https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/. Accessed: 2017-03-01.
Yuste, R.; Goering, S.; Bi, G.; Carmena, J. M.; Carter, A.; Fins, J. J.; Friesen, P.; Gallant, J.; Huggins, J. E.; Illes, J.; et al. 2017. Four ethical priorities for neurotechnologies and AI. Nature News 551(7679):159.
Zhao, J.; Wang, T.; Yatskar, M.; Ordonez, V.; and Chang, K.-W. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of EMNLP.
Zhao, J.; Wang, T.; Yatskar, M.; Ordonez, V.; and Chang, K.-W. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of NAACL.