Four Ethical Issues of the Information Age
by Richard O. Mason
Today in western societies more people are employed collecting, handling and distributing
information than in any other occupation. Millions of computers inhabit the earth and many
millions of miles of optical fiber, wire and air waves link people, their computers and the vast
array of information handling devices together. Our society is truly an information society,
our time an information age. The question before us now is whether the kind of society being
created is the one we want. It is a question that should especially concern those of us in the MIS community, for we are in the forefront of creating this new society.
There are many unique challenges we face in this age of information. They stem from the
nature of information itself. Information is the means through which the mind expands and
increases its capacity to achieve its goals, often as the result of an input from another mind.
Thus, information forms the intellectual capital from which human beings craft their lives and
secure dignity.
However, the building of intellectual capital is vulnerable in many ways. For example,
people's intellectual capital is impaired whenever they lose their personal information without
being compensated for it, when they are precluded access to information which is of value to
them, when they have revealed information they hold intimate, or when they find out that the
information upon which their living depends is in error. The social contract among people in
the information age must deal with these threats to human dignity. The ethical issues involved
are many and varied; however, it is helpful to focus on just four. These may be summarized by means of an acronym: PAPA.
Privacy: What information about one's self or one's associations must a person reveal to others, under what conditions and with what safeguards? What things can people keep to themselves and not be forced to reveal to others?
Accuracy: Who is responsible for the authenticity, fidelity and accuracy of information? Similarly, who is to be held accountable for errors in information and how is the injured party to be made whole?
Property: Who owns information? What are the just and fair prices for its exchange? Who owns the channels, especially the airways, through which information is transmitted? How should access to this scarce resource be allocated?
Accessibility: What information does a person or an organization have a right or a privilege to obtain, under what conditions and with what safeguards?
Privacy
What information should one be required to divulge about one's self to others? Under what
conditions? What information should one be able to keep strictly to one's self? These are
among the questions that a concern for privacy raises. Today more than ever cautious citizens
must be asking these questions.
Two forces threaten our privacy. One is the growth of information technology, with its
enhanced capacity for surveillance, communication, computation, storage, and retrieval. A
second, and more insidious, threat is the increased value of information in decision-making.
Information is increasingly valuable to policy makers; they covet it even if acquiring it
invades another's privacy.
A case in point is the situation that occurred a few years ago in Florida. The Florida
legislature believed that the state's building codes might be too stringent and that, as a result,
the taxpayers were burdened by paying for buildings which were underutilized. Several
studies were commissioned. In one study at the Tallahassee Community College, monitors
were stationed at least one day a week in every bathroom.
Every 15 seconds, the monitor observed the usage of the toilets, mirrors, sinks and other
facilities and recorded them on a form. This data was subsequently entered into a database for
further analyses. Of course the students, faculty and staff complained bitterly, feeling that this
was an invasion of their privacy and a violation of their rights. State officials responded, however, that the study would provide valuable information for policy making. In effect the State argued that the value of the information to the administrators was greater than any
possible indignities suffered by the students and others. Soon the ACLU joined the fray. At
their insistence the study was stopped, but only after the state got the information it wanted.
Most invasions of privacy are not this dramatic or this visible. Rather, they creep up on us
slowly as, for example, when a group of diverse files relating to a person and his or her activities is integrated into a single large database. Collections of information reveal intimate
details about a person and can thereby deprive the person of the opportunity to form certain
professional and personal relationships. This is the ultimate cost of an invasion of privacy. So
why do we integrate databases in the first place? It is because the bringing together of
disparate data makes the development of new information relationships possible. These new
relationships may be formed, however, without the affected parties' permission. You or I may
have contributed information about ourselves freely to each of the separate databases but that
by itself does not amount to giving consent to someone to merge the data, especially if that
merger might reveal something about us.
Consider the story that was circulating during the early 1970s. It's probably been embellished
in the retellings but it goes something like this. It seems that a couple of programmers at the
city of Chicago's computer center began matching tape files from many of the city's different
data processing applications on name and I.D. They discovered, for example, that several highly paid city employees had unpaid parking fines. Bolstered by this revelation they pressed on.
Soon they uncovered the names of several employees who were still listed on the register but
who had not paid a variety of fees, a few of whom appeared in the files of the alcoholic and
drug abuse program. When this finding was leaked to the public, the city employees, of course, were furious. They demanded to know who had authorized the investigation.
The answer was that no one knew. Later, city officials established rules for the computer
center to prevent this form of invasion of privacy from happening again. In light of recent
proposals to develop a central federal databank consisting of files from most U.S. government
agencies, this story takes on new meaning. It shows what can happen when a group of eager
computer operators or unscrupulous administrators start playing around with data.
The threat to privacy here is one that many of us don't fully appreciate. I call it the threat of
exposure by minute description. It stems from the collection of attributes about ourselves and
use of the logical connector "and". For example, I may authorize one institution to collect
information "A" about me, and another institution to collect information "B" about me; but I
might not want anyone to possess "A and B" about me at the same time. When "C" is added to
the list of conjunctions, the possessor of the new information will know even more about me.
And then "D" is added and so forth. Each additional weaving together of my attributes reveals
more and more about me. In the process, the fabric that is created poses a threat to my
privacy.
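To see the mechanism concretely, consider a minimal sketch (in Python, with entirely hypothetical records, identifiers, and field names) of how two separately collected files, each innocuous on its own, yield the conjunction "A and B" once they are matched on a shared identifier, much as the Chicago programmers matched tape files on name and I.D.:

```python
# Hypothetical records; names, identifiers and fields are illustrative only.

# File "A": payroll records, collected for one legitimate purpose.
payroll = {
    "id-1027": {"name": "J. Doe", "salary": 58_000},
    "id-1043": {"name": "R. Roe", "salary": 61_500},
}

# File "B": enrollment in a counseling program, collected for another purpose.
counseling = {
    "id-1043": {"program": "substance-abuse counseling"},
}

# The logical "and": match the two files on their common identifier.
merged = {
    person_id: {**payroll[person_id], **counseling[person_id]}
    for person_id in payroll.keys() & counseling.keys()
}

print(merged)
# {'id-1043': {'name': 'R. Roe', 'salary': 61500,
#              'program': 'substance-abuse counseling'}}
```

Neither file alone discloses that a well-paid employee is enrolled in counseling; only the merger does, and the affected person consented to neither the merger nor the inference it permits.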
The threads which emanate from this foreboding fabric usually converge in personnel files
and in dossiers, as Aleksandr Solzhenitsyn describes in The Cancer Ward: "... Every person
fills out quite a few forms in his life, and each form contains an uncounted number of
questions. The answer of just one person to one question in one form is already a thread
linking that person forever with the local center of the dossier department. Each person thus
radiates hundreds of such threads, which all together, run into the millions. If these threads
were visible, the heavens would be webbed with them, and if they had substance and
resilience, the buses, street-cars and the people themselves would no longer be able to move...
They are neither visible, nor material, but they were constantly felt by man... Constant
awareness of these invisible threads naturally bred respect for the people in charge of that
most intricate dossier department. It bolstered their authority." [1, p.221]
The threads leading to Americans are many. The United States Congress' Privacy Protection
Commission, chaired by David F. Linowes, estimated that there are over 8,000 different
record systems in the files of the federal government that contain individually identifiable data
on citizens. Each citizen, on average, has 17 files in federal agencies and administrations.
Using these files, for example, Social Security data has been matched with Selective Service
data to reveal draft resisters. IRS data has been matched with other administrative records to
tease out possible tax evaders. Federal employment records have been matched with
delinquent student loan records to identify some 46,860 federal and military employees and retirees whose paychecks might be garnished. In Massachusetts welfare officials sent tapes bearing welfare recipients' Social Security numbers to some 117 banks to find out whether the
recipients had bank accounts in excess of the allowable amount. During the first pass some
1600 potential violators were discovered.
Computer matching and the integration of data files into a central databank have enormous
ethical implications. On the one hand, the new information can be used to uncover criminals
and to identify service requirements for the needy. On the other hand, it provides powerful
political knowledge for those few who have access to it and control over it. It is ripe for
privacy invasion and other abuses. For this reason many politicians have spoken out against
centralized governmental databanks. As early as 1966 Representative Frank Horton of New
York described the threat as follows: "The argument is made that a central data bank would
use only the type of information that now exists and since no new principle is involved,
existing types of safe-guards will be adequate. This is fallacious. Good computer men know
that one of the most practical of our present safeguards of privacy is the fragmented nature of
present information. It is scattered in little bits and pieces across the geography and years of
our life. Retrieval is impractical and often impossible. A central data bank removes
completely this safeguard. I have every confidence that ways will be found for all of us to
benefit from the great advances of the computer men, but those benefits must never be
purchased at the price of our freedom to live as individuals with private lives..." [2, p.6].
There is another threat inherent in merging data files. Some of the data may be in error. More
than 60,000 state and local agencies, for example, provide information to the National Crime
Information Center and it is accessed by law officers nearly 400,000 times a day. Yet studies
show that over 4% of the stolen vehicle entries, 6% of the warrant entries, and perhaps as
much as one half of the local law enforcement criminal history records are in error. At risk is
the safety of the law enforcement officers who access it, the effectiveness of the police in
controlling crime, and the freedom of the citizens whose names appear in the files. This leads to
a concern for accuracy.
Accuracy
Misinformation has a way of fouling up people's lives, especially when the party with the
inaccurate information has an advantage in power and authority. Consider the plight of one
Louis Marches. Marches, an immigrant, was a hard working man who, with his wife Eileen,
finally saved enough money to purchase a home in Los Angeles during the 1950s. They took
out a long term loan from Crocker National Bank. Every month Louis Marches would walk to
his neighborhood bank, loan coupon book in hand, to make his payment of $196.53. He
always checked with care to insure that the teller had stamped "paid" in his book on the
proper line just opposite the month for which the payment was due. And he continued to do
this long after the bank had converted to its automated loan processing system.
One September a few years ago Marches was notified by the bank that he had failed to make
his current house payment. Marches grabbed his coupon book, marched to the bank and, in
broken English that showed traces of his country heritage, tried to explain to the teller that this
dunning notice was wrong. He had made his payment he claimed. The stamp on his coupon
book proved that he had paid. The teller punched Marches' loan number on the keyboard and
reviewed the resulting screen. Unfortunately she couldn't confirm Marches' claim, nor
subsequently could the head teller, nor the branch manager. When faced with a computer
generated screen that clearly showed that his account was delinquent, this hierarchy of
bankers simply ignored the entries recorded in his coupon book and also his attendant raving.
Confused, Marches left the bank in disgust.
In October, however, Marches dutifully went to the bank to make his next payment. He was
told that he could not make his October payment because he was one month in arrears. He
again showed the teller his stamped coupon book. She refused to accept it and he stormed out
of the bank. In November he returned on schedule as he had done for over 20 years and tried
to make his payment again, only to be told that he was now two months in arrears. And so it
went until inevitably the bank foreclosed. Eileen learned of the foreclosure from an
overzealous bank debt collector while she was in bed recovering from a heart attack. She
collapsed upon hearing the news and suffered a near fatal stroke which paralyzed her right
side. Sometime during this melee Marches, who until this time had done his own legal work,
was introduced to an attorney who agreed to defend him. They sued the bank. Ultimately,
after months of anguish, the Marches received a settlement for $268,000. All that the bank
officials who testified could say was, "Computers make mistakes. Banks make mistakes, too."
A special burden is placed on the accuracy of information when people rely on it for matters
of life and death, as we increasingly do. This came to light in a recent $3.2 million lawsuit
charging the National Weather Service with failing to predict accurately a storm that raged on
the southeast slope of Georges Bank in 1980. As Peter Brown steered his ship - the Sea Fever
- from Hyannis Harbor toward his lobster traps near Nova Scotia, he monitored weather
conditions using a long range, single sideband radio capable of receiving weather forecasts at
least 100 miles out to sea. The forecasts assured him that his destination area near Georges
Bank, although it might get showers, was safe from the hurricane-like storm that the weather
bureau had predicted would go far to the east of his course. So he kept to his course. Soon,
however, his ship was engulfed in howling winds of 80 knots and waves cresting at 60 feet. In
the turbulence Gary Brown, a crew member, was washed overboard.
The source of the fatal error was failure of a large scale information system which collects
data from high atmosphere balloons, satellites, ships, and a series of buoys. This data is then
transmitted to a National Oceanographic and Atmospheric Administration computer which
analyzes it and produces forecasts. The forecasts, in turn, are broadcast widely.
The forecast Peter Brown relied on when he decided to proceed into the North Atlantic was in
error because just one buoy - station 44003 Georges Bank - was out of service. As a result the
wind speed and direction data it normally provided were lost to the computer model. This caused
the forecast trajectory of the storm to be canted by several miles, deceiving skipper Peter
Brown and consequently sending Gary Brown to his death.
Among the questions this raises for us in the information age are these: "How many Louis
Marches and Gary Browns are there out there?" "How many are we creating every day?" The
Marches received a large financial settlement; but can they ever be repaid for the irreparable
harm done to them and to their dignity? Honour Brown, Gary's widow, received a judgment
in her case; but has she been repaid for the loss of Gary? The point is this: We run the risk of
creating Gary Browns and Louis Marches every time we design information systems and
place information in databases which might be used to make decisions. So it is our
responsibility to be vigilant in the pursuit of accuracy in information. Today we are producing
so much information about so many people and their activities that our exposure to problems
of inaccuracy is enormous. And this growth in information also raises another issue: Who
owns it?
Property
One of the most complex issues we face as a society is the question of intellectual property
rights. There are substantial economic and ethical concerns surrounding these rights; concerns
revolving around the special attributes of information itself and the means by which it is
transmitted. Any individual item of information can be extremely costly to produce in the first
instance. Yet, once it is produced, that information has the illusive quality of being easy to
reproduce and to share with others. Moreover, this replication can take place without
destroying the original. This makes information hard to safeguard since, unlike tangible
property, it becomes communicable and hard to keep to one's self. It is even difficult to
secure appropriate reimbursements when somebody else uses your information.
We currently have several imperfect institutions that try to protect intellectual property rights.
Copyrights, patents, encryption, oaths of confidentiality, and such old-fashioned values as trustworthiness and loyalty are the most commonly used protectors of our intellectual
property. Problem issues, however, still abound in this area. Let us focus on just one aspect:
artificial intelligence and its expanding subfield, expert systems.
To fully appreciate our moral plight regarding expert systems it is necessary to run back the
clock a bit, about two hundred years, to the beginnings of another society: the steam energy-
industrial society. From this vantage point we may anticipate some of the problems of the
information society.
As the industrial age unfolded in England and Western Europe a significant change took place
in the relationship between people and their work. The steam engine replaced man power by
reducing the level of personal physical energy required to do a job. The factory system, as
Adam Smith described in his essay on the pin factory, effectively replaced the laborer's
contribution of his energy and of his skills. This was done by means of new machines and
new organization forms. The process was carried even further in the French community of
Lyon. There, Joseph Marie Jacquard created a weaving loom in which a system of
rectangular, punched holes captured the weaver's skill for directing the loom's mechanical
fingers and for controlling the warp and weft of the threads. These Jacquard looms created a
new kind of capital which was produced by disembodying energy and skill from the
craftsmen and then reembodying it into the machines. In effect, an exchange of property took
place. Weaving skills were transferred from the craftsman to the owner of the machines. With
this technological innovation Lyon eventually regained its position as one of the leading silk
producers in the world. The weavers themselves, however, suffered unemployment and
degradation because their craft was no longer economically viable. A weaver's value as a person and a craftsman was taken away by the new machines.
There is undoubtedly a harbinger of things to come in these 18th century events. As they
unfolded, civilization witnessed one of the greatest outpourings of moral philosophy it has ever
seen: Adam Smith's Theory of Moral Sentiments and his Wealth of Nations; the American
revolution and its classic documents on liberty and freedom; the French revolution and its
concern for fraternity and equality; John Stuart Mill and Jeremy Bentham and their ethical
call for the greatest good for the greatest number, and Immanuel Kant and his categorical
imperative which leads to an ethical utopia called the "kingdom of ends." All of this ethical
initiative took place within the historically short span of time of about 50 years. Common to
these ideas was a spirit which sought a new meaning in human life and which demanded that a just allocation be made of social resources.
Today that moral spirit may be welling up within us again. Only this time it has a different
provocateur. Nowhere is the potential threat to human dignity so severe as it is in the age of
information technology, especially in the field of artificial intelligence. Practitioners of
artificial intelligence proceed by extracting knowledge from experts, workers and the
knowledgeable, and then implanting it into computer software where it becomes capital in the
economic sense. This process of "disemminding" knowledge from an individual, and
subsequently "emminding" it into machines transfers control of the property to those who own
the hardware and software. Is this exchange of property warranted? Consider some of the
most successful commercial artificial intelligence systems of the day. Who owns, for
example, the chemical knowledge contained in DENDRAL, the medical knowledge contained in MYCIN, or the geological knowledge contained in PROSPECTOR? How is the contributor
of his knowledge to be compensated? These are among the issues we must resolve as more
intelligent information systems are created.
Concern over intellectual property rights relates to the content of information. There are some equally pressing property rights issues surrounding the conduits through which information passes. Bandwidth, the measure of capacity to carry information, is a scarce and ultimately fixed commodity. It is a "commons". A commons is like an empty vessel into which drops of
water can be placed freely and easily until it fills and overflows. Then its capacity is gone. As
a resource it is finite.
In an age in which people benefit by the communication of information, there is a tendency
for us to treat bandwidth and transmission capacity as a commons in the same way as did the
herdsmen in Garrett Hardin's poignant essay, "The Tragedy of the Commons," (subtitled:
"The population problem has no technical solution; it requires a fundamental extension in
morality."). Each herdsman received direct benefits from adding an animal to a pasture shared
in common. As long as there was plenty of grazing capacity the losses due to the animal's
consumption were spread among them and felt only indirectly and proportionally much less.
So each herdsman was motivated to increase his flock. In the end, however, the commons was
destroyed and everybody lost.
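The incentive structure Hardin describes can be stated as a short worked example (illustrative numbers only, not drawn from the essay): the herdsman who adds an animal keeps the whole benefit, while the cost of overgrazing is divided among all the herdsmen, so the addition looks profitable to him even when it is a net loss for the group.

```python
# Illustrative numbers only: the arithmetic of Hardin's commons.

NUM_HERDSMEN = 10
BENEFIT_PER_ANIMAL = 1.0   # private gain to the herdsman who adds one animal
OVERGRAZING_COST = 3.0     # total damage the extra animal does to the shared pasture

# What the individual herdsman feels from adding one animal:
private_gain = BENEFIT_PER_ANIMAL - OVERGRAZING_COST / NUM_HERDSMEN   # +0.7

# What the group as a whole gains (or loses) from the same animal:
collective_gain = BENEFIT_PER_ANIMAL - OVERGRAZING_COST               # -2.0

print(f"gain felt by the individual herdsman: {private_gain:+.1f}")
print(f"gain felt by the group as a whole:    {collective_gain:+.1f}")
# Because every herdsman sees a positive private return, each keeps adding
# animals, and the commons is eventually destroyed.
```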
Today our airways are becoming clogged with a plethora of data, voice, video, and message
transmission. Organizations and individuals are expanding their use of communications
because it is profitable for them to do so. But if the social checks on the expanded use of
bandwidth are inadequate, and a certain degree of temperance isn't followed, we may find that
jamming and noise will destroy the flow of clear information through the air. How will the
limited resource of bandwidth be allocated? Who will have access? This leads us to the fourth
issue.
Access
Our main avenue to information is through literacy. Literacy, since about 1500 B.C. when the
Syrians first conceived a consonant alphabet, has been a requirement for full participation in
the fabric of society. Each innovation in information handling, from the invention of paper to
the modern computer, has placed new demands on achieving literacy. In an information
society a citizen must possess at least three things to be literate:
- One must have the intellectual skills to deal with information. These are skills such as
reading, writing, reasoning, and calculating. This is a task for education.
- One must have access to the information technologies which store, convey and process
information. This includes libraries, radios, televisions, telephones, and increasingly, personal
computers or terminals linked via networks to mainframes. This is a problem in social
economics.
- Finally, one must have access to the information itself. This requirement returns to the issue
of property and is also a problem in social economics.
These requirements for literacy are a function of both the knowledge level and the economic
level of the individual. Unfortunately, for many people in the world today both of these levels
are currently deteriorating.
There are powerful factors working both for and against contemporary literacy in our organizations and in our society. For example, the cost of computation, as measured in, say, dollars per MIPS (millions of instructions per second), has gone down exponentially since the
introduction of computers. This trend has made technology more accessible and economically
attainable to more people. However, corporations and other public and private organizations
have benefited the most from these economies. As a result, cost economies in computation are
primarily available to middle and upper income people. At the same time that computer usage flourishes among some, we are creating a large group of information-poor people who have
no direct access to the more efficient computational technology and who have little training in
its use.
Reflect for a moment on the social effects of electronically stored databases. Prior to their
invention, vast quantities of data about publications, news events, economic and social
statistics, and scientific findings were available in printed, microfilm, or microfiche form at a relatively low cost. For most of us access to this data was substantially free. We
merely went to our public or school library. The library, in turn, paid a few hundred dollars
for the service and made it available to whomever asked for it. Today, however, much of this
information is being converted to computerized databases and the cost to access these
databases can run in the thousands of dollars.
Frequently, access to databases is gained only by means of acquiring a terminal or personal
computer. For example, if you want access to the New York Times Index through the Mead
Corporation service you must first have access to a terminal and communication line and then
pay additional hook-up and access fees in order to obtain the data. This means that the people
who wish to use this service possess several things. First, they know that the database exists
and how to use it. Second, they have acquired the requisite technology to access it. And third,
they are able to pay the fees for the data. Thus the educational and economic ante is really
quite high for playing the modern information game. Many people cannot or choose not to
pay it and hence are excluded from participating fully in our society. In effect, they become
information "drop outs" and in the long run will become the source of many social problems.
PAPA
Privacy, accuracy, property and accessibility: these are the four major issues of information ethics for the information age. Max Planck's 1900 conception that energy was released in small
discrete packets called "quanta" not only gave rise to atomic theory but also permitted the
development of information technology as well. Semiconductors, transistors, integrated
circuits, photoelectric cells, vacuum tubes, and ferrite cores are among the technological yield
of this scientific theory. In a curious way, quantum theory underlies the four issues as well.
Planck's theory, and all that followed it, have led us to a point where the stakes surrounding
society's policy agenda are incredibly high. At stake with the use of nuclear energy is the very
survival of mankind itself. If we are unwise we will either blow ourselves up or contaminate
our world forever with nuclear waste. At stake with the increased use of information
technology is the quality of our lives should we, or our children, survive. If we are unwise
many people will suffer information bankruptcy or desolation.
Our moral imperative is clear. We must insure that information technology, and the
information it handles, are used to enhance the dignity of mankind. To achieve these goals we
must formulate a new social contract, one that insures everyone the right to fulfill his or her
own human potential.
In the new social contract, information systems should not unduly invade a person's privacy to
avoid the indignities that the students in Tallahassee suffered.
Information systems must be accurate to avoid the indignities the Marches and the Browns
suffered.
Information systems should protect the viability of the fixed conduit resource through which information is transmitted to avoid noise and jamming pollution and the indignities of "The Tragedy of the Commons".
Information systems should protect the sanctity of intellectual property to avoid the
indignities of unwitting "disemmindment" of knowledge from individuals.
And information systems should be accessible to avoid the indignities of information illiteracy and deprivation.
This is a tall order; but it is one that we in the MIS community should address. We must
assume some responsibility for the social contract that emerges from the systems that we
design and implement. In summary, we must insure that the flow of those little packets of energy and information called quanta, which Max Planck bequeathed to us some 85 years ago, is used to create the kind of world in which we wish to live.
References:
1. Solzhenitsyn, Aleksandr I., The Cancer Ward, Dial Press, New York, New York,
1968.
2. U.S. House of Representatives, The Computer and Invasion of Privacy, U.S.
Government Printing Office, Washington, D.C., 1966.
Written by:
Richard O. Mason
Carr P. Collins Distinguished Professor of Management Information Sciences
Edwin L. Cox School of Business
Southern Methodist University
Dallas, TX