Steve T. McKinlay’s research while affiliated with Wellington Institute of Technology and other places

Publications (3)


Evidence, Explanation and Predictive Data Modelling
  • Article

December 2017 · 50 Reads · 9 Citations · Philosophy & Technology · Steve T. McKinlay

Predictive risk modelling is a computational method used to generate probabilities correlating events. The output of such systems is typically represented by a statistical score derived from various related and often arbitrary datasets. In many cases, the information generated by such systems is treated as a form of evidence to justify further action. This paper examines the nature of the information generated by such systems and compares it with more orthodox notions of evidence found in epistemology. The paper focuses on a specific example to illustrate the issues: the New Zealand Government has proposed implementing a predictive risk modelling system which purportedly identifies children at risk of a maltreatment event before the age of five. Timothy Williamson’s (2002) conception of epistemology places a requirement on knowledge that it be explanatory; furthermore, Williamson argues that knowledge is equivalent to evidence. This approach is compared with the claim that the output of such computational systems constitutes evidence. While there may be some utility in using predictive risk modelling systems, I argue that, since no explanatory account of the output of such algorithms can be given that meets Williamson’s requirements, doubt is cast on the resulting statistical scores as constituting evidence on generally accepted epistemic grounds. The algorithms employed in such systems are geared towards identifying patterns that turn out to be strong correlations. However, rather than providing information about specific individuals and their exposure to risk, a more plausible explanation of a high probability score is simply that the variables related to incidents of maltreatment occur at higher rates amongst certain subgroups of a population than amongst others. The paper concludes that any justification of the information generated by such systems is generalised and pragmatic at best, and that applying this information to individual cases raises various ethical issues.
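The kind of system the abstract describes can be pictured, very roughly, as a model that turns administrative variables into a single probability-like score. The sketch below is purely illustrative and is not drawn from the paper or from the New Zealand system; the feature names, weights, and cases are hypothetical. It only shows how a logistic-style risk score aggregates subgroup-level variables, which is why, as the abstract argues, a high score tends to track subgroup membership rather than evidence about a particular individual.

```python
# Minimal sketch (not from the paper): a toy predictive risk score of the kind
# the abstract describes. Feature names, weights, and cases are hypothetical.
import math

# Hypothetical weights, imagined as learned from historical administrative data.
WEIGHTS = {
    "prior_notifications": 0.9,
    "benefit_dependent_household": 0.7,
    "caregiver_age_under_20": 0.5,
}
BIAS = -2.5

def risk_score(case: dict) -> float:
    """Return a probability-like score in [0, 1] from a logistic combination of variables."""
    z = BIAS + sum(WEIGHTS[k] * case.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# Two hypothetical cases: the score is driven entirely by which population
# subgroup the recorded variables place the case in, not by anything specific
# to the individual concerned.
print(risk_score({"prior_notifications": 2, "benefit_dependent_household": 1}))
print(risk_score({"caregiver_age_under_20": 1}))
```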


The Floridian Notion of the Information Object

May 2012 · 7 Reads · 1 Citation

In this essay I consider Luciano Floridi’s use of Object Oriented (OO) terminology and theory in explaining his concept of the information object. I argue, for several reasons, that even if we admit Floridian information objects into our ontology, they cannot be much like OO objects. OO objects, I argue, are referents and as such have explicit identity relations across various levels of abstraction, from conceptual design through to implementation on computer hard drives or the like. Further, OO objects are clearly artifactual, human-made entities, always instantiated explicitly via the methods associated with their corresponding and defining object classes. Information objects, on the other hand, if we are to consider them ontologically primitive, cannot be referents and certainly cannot be artifacts.


What are we Modeling when we Model Knowledge?

3 Reads

This paper examines the relationships between natural language and our attempts to codify and manage business knowledge, in particular the normative statements that spawn all business decisions. I argue that such knowledge is inextricably bound to language and experience, and that our conceptualizations, which ultimately constitute knowledge, are inseparable from language. Furthermore, if we wish to develop a model of knowledge that can be represented computationally, we need to understand the extent to which language can be mapped to its empirical content and/or to sentences which correlate with equivalent sentences. I argue that, under current computational frameworks, such a project is doomed due to inherent indeterminacies in terms and in the concepts those terms seek to denote. I conclude that knowledge cannot be logically and systematically represented under our current and traditional computational frameworks.

Citations (1)


... The same can be said for attempts to nudge users or predict their behavior: predictions are statistical extrapolations of existing data sets and manifest no more than the ability of a data-generating system to replicate itself. For an analysis of bias in the predictive use of data, see Brantingham (2018); for a critical epistemological analysis, see McKinlay (2017). One can point here again at the fact that the internet never forgets. ...

Reference:

Alienation in a World of Data. Toward a Materialist Interpretation of Digital Information Technologies
Evidence, Explanation and Predictive Data Modelling · Philosophy & Technology