Data mining, machine learning and other disciplines involved in finding patterns in data promise a future of new insights that will enable a new mode of intelligence. However, as with much other technological marketing, this promise is also a myth. In our interface criticism, we propose to engage with ubiquity, openness, participation and other aspects of this intelligence as mythological constructions which are presented to us via interfaces.
Following on from Roland Barthes’ seminal studies of visual culture, where he discusses everything from striptease to washing powder, we intend to engage with the illusions of technologies. In many ways it is, for instance, an illusion to believe that a computer system can really forecast everything. As with weather forecasts, predictions of traffic, browsing, and other behaviours are faulty. Machine learning works by approximation and by generating generalized functions of behaviour, which are only generalizations after all; and similarly, the data we produce is captured by technologies that constantly have to deal with the noise of many simultaneous and ambiguous actions. However, from the perspective of a mythology, the important aspect is not whether the generated algorithms work or not, but how they become part of our reality. For instance, they function as speech acts that create correlations between ‘data analytics’ and ‘intelligence’, and this performative act may have a real impact when we rely on this alleged intelligence – when we market products, control traffic, fight terrorism or predict climate changes.
The mythologization of technology that takes place in these speech acts does not imply that how the technology ‘really works’ is hidden, but merely that certain images become automatically associated with certain significations, in an absolute manner. To follow on from Roland Barthes, the mythologization of our smart technologies removes the history of intelligent systems, smartness, ubiquity, openness, and so forth, from the linguistic act. Just as we do not question that Einstein’s famous equation, and equations more generally, are keys to knowledge – as Barthes describes – intelligent systems for smart cities, state security, logistics, and so on suddenly appear absolute.1 Along with openness, participation and other techno myths, ‘smartness’ appears as an algorithmic reality we cannot question.
However, all techno myths should be seen as expressions of how we want the world to be, rather than what it really is. In order to perform an interface criticism, we do not need to discuss if the technologies are true or false – for the smart techniques of data mining, machine learning, and so forth, obviously work – but we need to realize that their myths are also part of our reality. As Philip Agre has noted, we subject our actions to the system that needs to capture them as data; and this deeply affects the way we produce, socialize, participate, engage, and so on.2 The monitoring of academic production and the capture of citations is, for instance, used to create indexes which indicate impact. Ideally, this can affect the efficiency of academia and be a relevant parameter for funding opportunities, careers, and the like. Even though this efficiency may be absent, the data capture still has an effect on the perception and performance of academic work; it is constitutive of our habitat and subtly affects our habits.
In many ways, the technological myths always feel real, and are dominant actors that affect a range of areas – from the perception of the weather, to our cities, and our cultural production and consumption. We have every reason to question not only if the technology works, but also the implications of its myths. It is often when we realize the pointlessness of our actions (that texts can be quoted for their mistakes, rather than their insights; or their summaries of knowledge rather than their epochal value) that we structurally begin to question the absolute assertions about the world embedded in the myth, and also to envision alternatives.
In this article, we do not want to dismiss intelligent, open, participatory or other technologies, but to discuss how technologies participate in the construction of myths. To us, this criticism fundamentally involves a mythology – a critical perspective on the interface that explores how the interface performs as a form of algorithmic writing technology that supposedly transcends signs, culture and ideology. To focus on the interface as a language diverts attention away from technology’s immediate assertions about reality – the technical fix – and highlights the materiality of their staging. The aim will be to discuss how technologies perform as dreams of emancipatory or other post-semiotic idealized futures, and argue for the need for an interface mythology that critically addresses the technologies as myths; and unravels them as value systems and tools for writing – of both future functionalities and future cultures.
There is a general tendency to develop technology in the light of cultural utopias. The development of hypertext is a very good example of this. With the emergence of hypertext in the sixties (and later the WWW, weblogs, social media, and much more), the development of various forms of textual networks has been intrinsically linked to strong visions of new ways of producing, experiencing and sharing text. One of the strongest proponents of such visions has been Theodor H. (Ted) Nelson.
Nelson’s Xanadu is a lifelong project, and it has been the point of departure for numerous reflections on the development of hypertext. Perhaps the most well-known of these texts is Computer Lib/Dream Machines from 1974, a self-published book featuring illustrations, cartoons and essays on various topics, all aiming in different ways to explore alternative ways of thinking related to computers.
Furthermore, the book can be read from both ends. One end offers lay readers a technical explanation of how computers work; as Nelson writes: “Any nitwit can understand computers, and many do. Unfortunately, due to ridiculous historical circumstances, computers have been a mystery to most of the world.”3 The other end is meant to make the reader see the development of the computer as a “choice of dreams.”4 According to Nelson, what prevents us from dreaming is the developer’s incomprehensible language (or, as he labels it, “cybercrud”), which in his view is just an excuse to make people do things in a particular way; that is, to let the technocratic visions of culture stand unchallenged.
As early as 1965, Nelson coined the term hypertext for a new kind of file structure for cultural and personal use:
"The kinds of file structures required if we are to sue the computer for personal files and as an adjunct to creativity are wholly different in character from those customary in business and scientific data processing. They need to provide the capacity for intricate and idiosyncratic arrangements, total modifiability, undecided alternatives, and thorough internal documentation." (...) "My intent was not merely to computerize these tasks but to think out (and eventually program) the dream file: the file system that would have every feature a novelist or absentminded professor could want..."5
In this way, Nelson was already in 1965 aware that developing alternative uses of the computer was closely linked to developing alternative versions of the technical structure and even the file system. He continued – and still continues – to develop his idea of hypertext, of which he premiered the first publicly accessible version at the Software exhibition of technological and conceptual art in New York in 1970. Visions and dreams appear in a recognition that the power of computation – or of computer liberation – is linked to visions of a new medium; that the inner signals of cathode ray tubes are related to signs and signification, and therefore to cultural visions. In other words, they are linked to the hypothesis that the computer interface, at all levels, and not just the graphical user interface, is an interface between the technical and the cultural. When text, for instance, is treated by protocols there is a double effect, where not only the cultural form of the text changes (e.g. from book to hypertext), but also the technology itself appears as a deposition of cultural values. This is why the discussion of the future of text and images, on the web and in e-books, also appears as a discussion of text protocols and formats.
The subsumption of dreams
Many writers and theorists have adopted Nelson’s visions of alternatives, and of new modes of producing, reading and sharing text. For example, in his book Writing Space, Jay Bolter explored what writing was before and potentially could be with hypertext.6 Bolter’s main hypothesis was that print text would no longer determine the presentation and organisation of text, nor the production of knowledge. Readers would become writers, and this would undermine the authority of print text; writing would become liquid, and we would experience a space of creative and collective freedom. However, as we have experienced on today’s Internet, not everything seems as rosy. There are plenty of reasons to look more critically at Facebook, Twitter, Wikis and other services.
Nelson’s Xanadu system already included an advanced management instrument, the so-called ‘silver stands’: stations where users can open accounts, dial up and access the information of the system, process publications and handle micro payments. Nelson himself compares this to a McDonald’s franchise, and the silver stands somehow resemble the Internet cafés of the late 90s and early 2000s, or the commercial, centralized platforms of Web 2.0. Furthermore, copying content in the Xanadu system is restricted to dynamic “transclusions” that include the current version of the original text and assure a small royalty when accessed, a so-called “transcopyright”.
When looking at the services of Facebook, Google, Amazon, Apple, and so on today, it is similarly obvious that the common production modes characteristic of a free writing space are accompanied by strict control mechanisms. There are, for instance, strict protocols for the sharing, searching, writing and reading of text, and these protocols often ensure an accumulation of capital and compromise the anonymity and freedom of the participant. In other words, the instrumentalization of the dream includes everything but the dream. The envisioned shared, distributed, free and anonymous writing space is in fact a capitalised and monitored client-server relation.
This critique of contemporary interface culture is perhaps nothing new, but what we want to stress here is the effect of the instrumentalization of dreams and visions. What this indicates is that down the ‘reactionary path’ (that is, the path of instrumentalization), our dreams turn into myths. However, the ethos of the dreams remains, and becomes automatically associated with the technical systems.
The three phases of media technologies
The dream of a shared writing space, a Xanadu, that overcomes the problems of representation facing linear text forms, as well as the hypertext system’s instrumentalization of this dream, the mythological status of such systems, and the adherent critique of them, all fit into a three-phase model of media presented by the German media theorist Hartmut Winkler.
From a linguistic perspective, all new media are, in the first phase, considered post-symbolic, concrete and iconic communication systems that present a solution to the problem of representation, or the arbitrariness of the sign. Winkler even sees the development of media as “deeply rooted in a repulsion against arbitrariness”, and a “long line of attempts to find a technical solution to the arbitrariness” dating back to the visual technical media of the 19th century.7 Hypertext, accordingly, was perceived as establishing a truer relation between form and content because of its more intuitive, democratic, less hierarchical, nonlinear structure. It will often be the investment in the dreams that pays for their technical implementation: you not only buy new functionality, you buy a new way of living, working, thinking and dreaming. In this way, the development of hypertext, the WWW, social media – and also computer games and virtual reality, and their alleged liberation of the user – is driven by an urge to fulfil a dream, a vision of a new future.
In the second phase, the utopias become natural, stable and hegemonic. Through subsumption by market forces they become commodified, and sold as myths of being part of a media revolution. However, the subscription to this reality also contains an explicit lack of visions of alternative futures, and is therefore also without the critical, activist and heroic dimensions of the first phase.
It is, however, also a phase where people begin to study the media and learn how to read and write with them. In other words, the new medium begins to enter a phase where we see it as a language, and hence where the arbitrariness of the sign is reinstalled. In the third phase, this arbitrariness has turned into disillusion over the media’s lack of abilities; which, however, also constitutes the ground for new visions, new media technologies, new interfaces, and new media revolutions.
The question is how far we are, today, from Ted Nelson’s critique of centralised data processing and IBM-like visions of efficiency and intelligence. In several ways, it seems as if we are in a phase where we might soon begin to regard big data, smart systems, social intelligence, and so forth, as a language; where we begin to see through the technological systems’ mythological statuses, or at least their dark sides in the form of control and surveillance. This is by no means an easy phase. As Ted Nelson also noted, “Most people don’t dream of what’s going to hit the fan. And computer and electronics people are like generals preparing for the last war.”8 The developers of technology and their supporters will often insist that their system is the future, and that the users’ actions need to follow the system’s intrinsic logic.
From a design perspective, the assumption will typically be that the clearer the representation of the computer signal-processes appears (or the mapping of mental and symbolic labour – the formalization of labour to computer language performed by the programmer), the more user-friendly and understandable the user interface becomes. To computer semiotics, the aim was ultimately to create better interface design. However, in relation to an interface criticism, it is noteworthy how computer semiotics also explains how a design process in itself contributes to the mythological status of the interface – its absolute assertions about the world.9 In other words, the myths of interfaces are not only established through how they are represented elsewhere (how they are talked about, written about, advertised, etc.), but also through the interfaces themselves, and how they are designed. It is in its design as a medium, and in its claims of an iconic status as a communication system, that we find the interface’s operationalized mythology. And, in a general perspective, this is not unlike how media such as photography, film, the panorama, and so on, according to Hartmut Winkler, have tried to operate in earlier times.
To read this myth demands that one begins to read the media – or, in our case, the interface. It is a tool for reading and writing, and not an absolute representation of the world. We must, therefore, begin to pay attention to the establishment of sign-signal relations that take place in the interface design, as a particular production mode, a particular kind of labour; a production of signs that at once reflects cultural and historical processes, and leaves an imprint on the world and how we organise and deal with it.
For instance, the software of the print industry, as Nelson also demonstrates, both reflects the historical and cultural origins of print and negotiates the reality of text as searchable, sequential, iterative, sortable, and so forth. Our file formats and standards for storing and showing data also reflect such processes. Jonathan Sterne, for instance, has recently analysed how the diameter of the Compact Disc directly reflects its relation to the cassette tape, and how the mp3 format embeds an audio culture of listening in its sound compression, directly challenging the conception of technological progress as equivalent to increased high fidelity.10 Even the electrical circuits and the signal processes deep inside the computer can be viewed as the result of language acts, as Wendy Chun has pointed out.11
Computer software and its formats and platforms promise us dreams of the future, of technological progression, better opportunities to make our music portable and shareable, better ways of organising our work, and so forth. It is often these dreams that carry the technological development. However, the dreams have a tendency to freeze, and gain an air of absoluteness, and of hegemony. This happens through their commodification and appropriation to a reality of power and control. Technology is marketed as a utopia of being in the midst of a media revolution. But in this phase the cultural and historical residues are hidden. We are seduced by the interface into neglecting the work behind it, and the operationalization and instrumentalization of dreams that takes place. The interface appears mythical, absolute and frozen. We do not see the mp3 format’s compression of sound as a result of an audio culture, but as the only possible scenario, a technological fact; we do not see the IT systems of workers as the result of a negotiation of labour processes; and we do not see the operating system’s metaphorization of actions as anything other than the result of natural selection in the evolution of technologies. To get out of the deception of the technological facts we need interface mythologies – critical readings of the interface myths.
Agre, Philip E. "Surveillance and Capture: Two Models of Privacy." In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 737-60. Cambridge, Massachusetts & London, England: The MIT Press, 2003.
Barthes, Roland. Mythologies. Selected and translated from the French by Annette Lavers. New York: Hill and Wang, 1972.
Bolter, J. David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, N.J.: L. Erlbaum Associates, 1991.
Chun, Wendy Hui Kyong. Programmed Visions: Software and Memory. Software Studies, edited by Matthew Fuller, Lev Manovich, and Noah Wardrip-Fruin. Cambridge, Massachusetts; London, England: MIT Press, 2011.
Nelson, Theodor H. "Computer Lib / Dream Machines." In The New Media Reader, edited by Nick Montfort and Noah Wardrip-Fruin, 301-38. Cambridge, Mass.: MIT Press, 2003 (1974/1987).
———. "A File Structure for the Complex, the Changing, and the Indeterminate." In The New Media Reader, edited by Nick Montfort and Noah Wardrip-Fruin, 133-45. Cambridge, Mass.: MIT Press, 2003 (1965).
Pold, Søren, and Christian Ulrik Andersen. The Metainterface: The Art of Platforms, Cities and Clouds. Cambridge, Massachusetts. London, England: MIT Press, 2018.
Sterne, Jonathan. Mp3: The Meaning of a Format. Sign, Storage, Transmission. Durham: Duke University Press, 2012.
1 Roland Barthes, Mythologies, trans. Annette Lavers (New York: Hill and Wang, 1972).
2 Philip E. Agre, "Surveillance and Capture: Two Models of Privacy," in The New Media Reader, ed. Noah Wardrip-Fruin & Nick Montfort (Cambridge, Massachusetts & London, England: The MIT Press, 2003). According to Agre there are two dominant notions of surveillance. Surveillance is often perceived in visual metaphors (i.e., ‘Big Brother is watching’); however, computer science mostly builds on a tradition of capturing data in real time, and is often perceived in linguistic metaphors (‘association’, ‘correlation’, etc.). Hence these metaphors are also better suited to describe the kinds of surveillance taking place when data capture permeates social life, friendship, creative production, logistics, and other areas of life.
3 Theodor H. Nelson, "Computer Lib / Dream Machines," in The New Media Reader, ed. Nick Montfort & Noah Wardrip-Fruin (Cambridge, Mass.: MIT Press, 2003 (1974/1987)), 302.
5 Theodor H. Nelson, "A File Structure for the Complex, the Changing, and the Indeterminate," in The New Media Reader, ed. Nick Montfort & Noah Wardrip-Fruin (Cambridge, Mass.: MIT Press, 2003 (1965)), 134.
6 J. David Bolter, Writing Space: The Computer, Hypertext, and the History of Writing (Hillsdale, N.J.: L. Erlbaum Associates, 1991).
8 Theodor H. Nelson, "Computer Lib / Dream Machines," in The New Media Reader, ed. Nick Montfort & Noah Wardrip-Fruin (Cambridge, Mass.: MIT Press, 2003 (1974/1987)), 305.
9 On computer semiotics and the work of Frieder Nake and Peter Bøgh Andersen, see Søren Pold and Christian Ulrik Andersen, The Metainterface: The Art of Platforms, Cities and Clouds (Cambridge, Massachusetts. London, England: MIT Press, 2018).
10 Jonathan Sterne, Mp3: The Meaning of a Format, Sign, Storage, Transmission (Durham: Duke University Press, 2012).
11 Wendy Hui Kyong Chun, Programmed Visions: Software and Memory, Software Studies, ed. Matthew Fuller, Lev Manovich, and Noah Wardrip-Fruin (Cambridge, Massachusetts; London, England: MIT Press, 2011).
APPLIED METAPHYSICS – OBJECTS IN OBJECT-ORIENTED ONTOLOGY AND OBJECT-ORIENTED PROGRAMMING
Ontology After Informatics
“What can I know? What must I do? What may I hope for? What is man?”[note]Immanuel Kant, Critique of Pure Reason, ed. Paul Guyer and Allen W. Wood, The Cambridge Edition of the Works of Immanuel Kant (Cambridge: Cambridge University Press, 1998), A805/B833.[/note] The four Kantian questions, as universal as they seem, pivot around the I. All knowledge gained is knowledge only in the cognitive relation between acts of consciousness and an outside world, which is deemed more or less inaccessible. Every ethical demand is demanded of an I. Every hope experienced is experienced by an I. Kant holds that answering the first three questions will inevitably lead to an answer to the fourth: What is man? And it is again an I who questions what it is. The Western world lives in the Kantian horizon. It pivots around the I. Speculative realists set out to change that. While not representing a unified theory, this line of thought encompasses different non-anthropocentric positions striving to, in Ray Brassier’s words, “re-interrogate or to open up a whole set of philosophical problems that were taken to have been definitively settled by Kant, certainly, at least, by those working within the continental tradition.”[note]Ray Brassier, Iain Hamilton Grant, Graham Harman, and Quentin Meillassoux, “Speculative Realism,” in Collapse, ed. Robin Mackay, vol. III (Oxford: Urbanomic, 2007), 308.[/note] As overcoming the human as the epistemic center of the cosmos necessarily leads to both a speculative stance and a more or less realist position, speculative realism is a fitting term. In accordance with the tradition in which Kant named metaphysics “a wholly isolated speculative cognition of reason,”[note]Kant, CPR, B xiv.[/note] speculative realism merely makes the nature of its task obvious by naming it accordingly.
The variant of speculative realism which will be looked into here is object-oriented philosophy (more often referred to as object-oriented ontology and thus abbreviated ooo), a theory by contemporary American philosopher Graham Harman, who also coined the term. Even though ooo is subsumed under the speculative realism movement, Harman claims to be “the only realist in speculative realism.”[note]Graham Harman, personal communication with the author, March 12, 2017.[/note] ooo, even though this is most likely unintended, is a substance ontology developed under the influence of informatics. It “might be termed the first computational medium-based philosophy, even if it is not fully reflexive of its own historical context in its self-understanding of the computation milieu in which it resides.”[note]David M. Berry, Critical Theory and the Digital, Critical Theory and Contemporary Society (New York: Bloomsbury, 2014), 103.[/note] As “perhaps the first Internet or born-digital philosophy,” it “has certain overdetermined characteristics that reflect the medium within which [it has] emerged.”[note]Ibid., 104.[/note] Such notions usually refer to the leading figures of speculative realism using blogs and social media to distribute their thoughts quickly and engage in lively discussions with the academic community online. ooo, however, has a deeper relation to the computational sphere: while Harman first publicly mentioned the term object-oriented philosophy in 1999,[note]Graham Harman, Bells and Whistles: More Speculative Realism (Winchester: Zero Books, 2013), 6.[/note] object-oriented programming was already invented in the late 1960s – and the parallels between these two domains are noteworthy. Working at the Norwegian Computing Center in Oslo in the 1960s, Ole-Johan Dahl and Kristen Nygaard conceived a new way of computer programming, in which data and functions, previously kept separate, were molded into combined and somehow sealed logical units.
Dahl and Nygaard named these units “objects” and the programming language they developed, Simula 67, is regarded as the first to allow for software development following the paradigm of object-oriented programming (oop).[note]Bjarne Stroustrup in: Federico Biancuzzi and Shane Warden, eds., Masterminds of Programming (Sebastopol, CA: O’Reilly, 2009), 10.[/note] oop has been in use for nearly five decades now, and while it is still a popular way of structuring software development projects large and small today, its critics have become more vocal. oop’s unnecessary complexity is just one of the issues computer language designers bring up: “The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.”[note]Joe Armstrong, Coders at Work: Reflections on the Craft of Programming, ed. Peter Seibel (New York: Apress, 2009), 213.[/note] Regardless of oop coming under fire lately, the striking parallels between the aesthetic and technological praxis of object-oriented programming on the one side and a new metaphysics on the other side promise a fruitful contribution to the ontographic project. Informatics was defined in the 1960s as a science investigating “the structure and properties (not specific content) of scientific information, as well as the regularities of scientific information activity, its theory, history, methodology and organization.”[note] A.I. Mikhailov, A.I. Chernyl, and R.S. Gilyarevskii, “Informatika – Novoe Nazvanie Teorii Naučnoj Informacii,” Naučno Tehničeskaja Informacija, no. 12 (1966): 35–39.[/note] Since then, the task of informatics has been extended beyond the analysis of scientific information and deepened by performing this task using the means of computing. Thus, informatics today has become the science that investigates the structure and properties of information.
The similarities between object-oriented programming and object-oriented ontology do not come as a surprise, given that informatics is traditionally occupied with metaphysics: both computer science and philosophy “do not address the materiality of things such as physics, they are not confined to the ‘science of quantity’ (= mathematics).”[note]Alessandro Bellini, “Is Metaphysics Relevant to Computer Science?,” Mathema, June 30, 2012, http://www.mathema.com/philosophy/metafisica/is-metaphysics-relevant-to-computer-science/.[/note] Since computer science strives to map reality onto computational structures, employing substance ontologies seems obvious. As computer science works on domain-specific models in order to find solutions to practical problems, employing models of the world, informatics is – like any proper science – applied metaphysics.
“Computational metaphors share a lot of similarity in object-oriented software to the principles expressed by [ooo’s] speculations about objects as objects.”[note]Berry, Critical Theory and the Digital, 205.[/note] There are astonishing parallels between object-oriented ontology and object-oriented programming, even though the former only borrowed the name from the latter.[note]Graham Harman, personal communication with the author, August 18, 2013.[/note] When object-oriented programming was invented, the dominant approach to computer programming was imperative or procedural. Imperative programming means conveying computational statements that directly alter the state of the program. A program designed in this way roughly works by linearly processing a list of functions step by step. When these statements are grouped into semantic units, “procedures,” one can speak of procedural programming. Procedures are used to group commands in a computer program in order to make large programs more easily maintainable. Groups of statements also make code reusable, since the same set of statements can be invoked again and again. It also makes code more flexible, since parameters can be handed to a procedure for it to process. Parameters can be thought of as values handed to functions (the x in f(x)). While the function follows the same logic, the operation’s result depends on the parameters passed. These improvements, however, were not sufficient to handle complex computational tasks like weather forecasts. Tasks like this require simulations. And even though Alan Shapiro mockingly notes that “the commercialized culture of the USA is substantially not a real world anymore: it is already a simulation.
Object-oriented programming is a simulation of the simulation,”[note]Alan Shapiro, Die Software der Zukunft oder: das Modell geht der Realität voraus, International Flusser lectures (Köln: König, 2014), 7; translation by the author[/note] the necessity of simulating weather systems or financial markets called for more sophisticated strategies to structure computer programs. Instead of grouping lists of statements into procedures and having these statements directly manipulate a program’s state, object-oriented programming offers a vicarious approach. Computational statements and data are bundled together in objects. These objects are closed off from the rest of the program and can only be accessed indirectly by means of defined interfaces. Under this new programming paradigm computer programmers became object designers – they were forced to come up with an object-oriented ontology for the world they wanted to map into the computer’s memory. The invention of object-orientation made object-oriented computer languages a necessity. The available computer languages did not possess the grammar necessary to describe objects and their relations. It becomes clear that “computer language” or “programming language” are misleading terms. These languages are products of human invention. They are human-designed, human-understandable languages, which computers can process in order to fulfill certain tasks. Designing a programming language is an attempt at producing the toolset for future developers to solve as yet unanticipated problems, sometimes in ways that were previously inconceivable. Object-oriented ontologies in informatics are pragmatic and open; they are realist in the sense of being a useful system of denotators of things outside the computer (or the programming language). They aim for reusable program code, which only needs to be written once, so problems do not need to be solved twice and errors do not have to be fixed in multiple places.
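The shift described above, from procedures that directly manipulate program state to sealed objects accessed through defined interfaces, can be illustrated with a minimal Python sketch (the account example and all names in it are our own illustration, not drawn from Simula or any system discussed here):

```python
# Procedural style: free-standing statements directly alter shared program state.
balance = 0

def deposit(amount):
    global balance          # the procedure reaches out and changes global state
    balance += amount

deposit(10)

# Object-oriented style: data and the functions operating on it are bundled
# into one sealed unit, reachable only through its defined interface.
class Account:
    def __init__(self):
        self._balance = 0   # interior state, closed off from the rest of the program

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance  # indirect access by means of the interface

account = Account()
account.deposit(10)
```

The parameter `amount` is the x in f(x): the same logic runs on every invocation, but the result depends on the value passed in.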
Thus, the programming language designer’s task is meta-pragmatic: designing a language as a tool for others to build tools to eventually fulfill certain tasks. Object-orientation discards lists of statements in favor of objects as the locus of, to use a Simondonian term, “problem solving.” Simondon’s notion of the individual describes objects as “agents of compatibilisation,” solving problems between different “orders of magnitude.”[note]Gilbert Simondon, “The Genesis of the Individual,” in Incorporations, ed. Jonathan Crary and Sanford Kwinter (New York: Zone, 1992), 301.[/note] With this notion Simondon seems to have anticipated the object in object-oriented programming; or at the very least, the actual implementation of objects in oop proves to be in line with the traits of the individual Simondon described. Object-oriented programming became so widely adopted partly because it is close to the everyday experience of objects. It also makes strong use of hierarchies, another everyday concept. Objects may remain identifiable and stable from the outside, even when their interior changes dramatically. The “open/closed principle” is evidence of this: a component, not necessarily an object, needs to be open for future enhancement, but closed with regards to its already exposed interfaces. This “being closed” ensures that other components depending on it can rely on the functionality it has exposed so far – unexpected changes in behavior must be prevented.[note]Bertrand Meyer, Object-Oriented Software Construction, Prentice-Hall International Series in Computer Science (New York: Prentice-Hall, 1988), 23.[/note] Being closed can be read as unity, as a certain stability of an object that makes it identifiable. Object-oriented programming however reaches some of this stability by interweaving objects into a hierarchy, an idea that object-oriented ontology rejects.
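Meyer’s open/closed principle can be illustrated with a short sketch (class names are hypothetical; Python is used for brevity): the base component’s already exposed interface stays closed, while new behavior is added by extension rather than by modification:

```python
class Shape:
    """Closed: the area() interface is fixed; callers can rely on it."""
    def area(self):
        raise NotImplementedError

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

# Open: new behavior arrives by extension. Neither Shape nor total_area()
# is modified, so existing callers remain unaffected.
class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

def total_area(shapes):
    """Works unchanged for every present and future Shape subclass."""
    return sum(s.area() for s in shapes)
```

Adding `Circle` later changes nothing for code that already depends on `Shape`: the component’s exposed behavior stays stable while its capabilities grow.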
In both object-oriented programming and object-oriented ontology objects are the dominant structural elements. In object-oriented programming, objects are supposed to be modeled after real-life objects, as the aim is to provide a sufficiently precise representation of the reality to be simulated. In practice this undertaking often fails. Objects are created in code for things that do not exist outside the program. Functionality is forced into object form even when the result is awkward and unsatisfying. As a result, alternative programming paradigms have been attracting more interest lately, and new programming languages like Apple’s Swift are designed undogmatically, mixing different paradigms with the goal of delivering the least error-prone solution for each use case. But this need not concern us here, as we are focusing on the multitude of traits that oop and ooo share:
Objects are both systems’ basic building blocks.
Objects can be anything from very simple to extremely complex.
Objects have an inner life, which is not fully exposed to the outside.
Objects interact with other objects indirectly and do not exhaust other objects completely.
Objects can destroy other objects.
Results of interactions between objects may or may not be predictable from outside an object.
Objects can contain objects.
Objects can change over time, but at the same time stay the same object in the sense of an identifiable entity.
No two objects are the same.
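Several of these shared traits can be observed in a minimal sketch (a hypothetical Box class, written in Python for illustration):

```python
class Box:
    """An object with an inner life not fully exposed to the outside."""

    def __init__(self):
        self._contents = []          # hidden interior

    def put(self, obj):
        """Objects can contain objects."""
        self._contents.append(obj)

    def count(self):
        """A narrow window onto the interior; it reveals numbers, not contents."""
        return len(self._contents)


outer, inner = Box(), Box()
outer.put(inner)        # objects contain objects
inner.put("something")  # the contained object changes over time...
print(outer.count())    # 1 -- ...yet outer still holds one identifiable entity
print(outer is inner)   # False -- no two objects are the same
```

The inner object’s change is invisible from the outside: `outer` still reports one contained entity, and both boxes, though instances of the same class, remain distinct, identifiable objects.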
Objects as Unpredictable Bundles
The first programming language regarded as object-oriented was Simula 67, invented in the 1960s by Ole-Johan Dahl and Kristen Nygaard at the Norwegian Computing Center in Oslo. Simula 67 was designed as a formal language to describe systems with the goal of simulation (thus the name Simula, a composite of simulation and language). Simula already incorporated most major concepts of object-orientation. Most importantly, Dahl and Nygaard’s object definition still holds today: objects in object-oriented programming are bundles of properties (data) and code (behavior, logics, functions, methods). These objects expose a defined set of interfaces, which does not reveal the totality of the object’s capabilities and controls the flow of information in and out of the object. These two specifics are subsumed under the “encapsulation” moniker.[note]Biancuzzi and Warden, Masterminds of Programming, 350.[/note] Objects in programming are another variant of “the ancient problem of the one and the many”:[note]Harman, The Quadruple Object, 69.[/note] they exist as abstract definitions, called “classes” or “object types,” and as actual entities, called “objects” or “instances.” So, while a class is the Platonic description of an abstract object’s properties and behavior, instances are the actual realization of such classes in a computer’s memory.[note]Vlad Tarko, “The Metaphysics of Object Oriented Programming,” May 28, 2006, http://news.softpedia.com/news/The-Metaphysics-of-Object-Oriented-Programming-24906.shtml.[/note] There can be more than one instance of any class, and it is possible and common for multiple instances of the same class to communicate with each other. Let us look at a concrete example of the difference between procedural and object-oriented programming. In procedural programming, a typical function would be y=f(x), where f is the function performed on x and the function’s result would be stored (returned) in the variable y.
In object-orientation however, an object x would be introduced, which would contain a method f. An interface would be defined that would allow for other objects to call f, using a specified pattern. And so, by invoking f, the member function that is part of object x – x.f() for short – the object, which contains both data and functionality, stays within itself. In our case, there is no return value, so no y to save the results of function f to. This is not necessary, as the object itself holds all the data it operates on. Object-oriented programming has been criticized for the fact that the behavior of object methods (functions inside objects) is unpredictable when viewed from a strictly mathematical perspective. A mathematical function y=f(x) is supposed only to work on x and return the result in y. An object method however can also modify other variables inside its object and thus lead to unpredictable results. A function is supposed to return its result – an object method however modifies its object, but does not necessarily return a copy of (or a pointer to) the whole modified object. When manipulating an object through one of its member functions, it is not known from the outside which effects this manipulation will have on the object internally. This means the object’s behavior following such a method call is not predictable from outside of the object. While software developers generally try to prevent unpredictability, the object-oriented philosopher will hardly be surprised: it is a key characteristic of ooo that objects can behave in unpredictable ways and that their interiority is sealed off from any direct access: “I think the biggest problem typically with object-oriented programming is that people do their object-oriented programming in a very imperative manner where objects encapsulate mutable state and you call methods or send messages to objects that cause them to modify themselves unbeknownst to other people that are referencing these objects. Now you end up with side effects that surprise you that you can’t analyze.”[note]Biancuzzi and Warden, Masterminds of Programming, 315.[/note] While in object-orientation data and operations performed on it need to be bundled into one object, the competing paradigm of functional programming means that operations and data are separated. In the functional programming language Haskell, for example, functions can only return values, but cannot change the state of a program (as they can in object-orientation).
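This contrast can be made concrete in a short sketch (hypothetical names, Python for brevity): the procedural function only works on its parameter and returns a result, while the object method mutates state inside its object, including state the caller may not know about:

```python
# Mathematical/procedural style: f only works on x
# and returns its result, which is stored in y.
def f(x):
    return x * 2

y = f(21)  # y == 42; no other program state is affected


# Object-oriented style: x is an object holding its own data;
# invoking x.f() mutates the object instead of returning a result.
class X:
    def __init__(self, value):
        self.value = value
        self.calls = 0      # additional hidden state

    def f(self):
        self.value *= 2
        self.calls += 1     # side effect invisible to the caller


x = X(21)
x.f()  # no return value: the object "stays within itself"
# From the outside it is not obvious that .calls changed as well.
```

The method call leaves `x.value` doubled, but it also silently updated `x.calls` – exactly the kind of side effect the quotation above describes.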
The Platonic Class
While objects may have complex inner workings (code as well as data), they usually do not share all this information with other objects. An object exposes certain well-defined interfaces through which communication is possible. In line with object-orientation’s original application, we want to discuss the key concepts of oop using a simulation program. We will imagine a program simulating gravitational effects in our solar system. Such a program, if designed in an object-oriented way, would most definitely contain an object type – or Platonic “class” – representing a planet. Such a class would contain variables to describe a planet’s physical and chemical properties like its diameter, atmosphere, age, current average temperature, its position in relation to the solar system’s sun, etc. It would also contain methods, which would be used to manipulate class data. A method to change the average temperature (to account for the case of a slowly dying sun for example) would need to be implemented as well. In a solar system simulation, there would be multiple instances – objects – of the planet class; in the case of our solar system one would create objects for Earth, Jupiter, Saturn etc. The simulation would manipulate any planet’s data by calling the object’s respective method, for example the one to change the planet’s average temperature on the surface. The actual variable holding the average temperature itself would not be exposed to the object’s outside. So, any interaction with the object must be mediated through the interface methods provided by the object. All interactions with an object become structured by this intermediate layer and can be checked for faulty inputs. Instead of directly changing the temperature on a planet to a value below absolute zero (which would be possible if direct access was given), the intermediate data setting method provides its own logic, and thus limitations, to prevent such a “misuse” of the object. 
But all planets are different, and to take this into consideration in our simulation, we would need to set any instance’s properties (data) accordingly. To do so, classes provide special “constructor” methods, which bring an instance of a class into existence. Constructors take the parameters needed to initially construct an object and then create an instance accordingly. (To destroy objects, so-called “destructors” can be used as well.) As mentioned, object-oriented programming differentiates between classes (object types) and objects.[note]There is other terminology, but in this work, we will use these classic terms as defined in the C++ programming language[/note] What makes this parallel interesting is that it is an interplay between a fixed structure and free-floating accidents that constitutes an object. This interplay is what ooo deems an object’s essence. So as not to stretch the analogies between ooo and oop too far: this interplay takes place on the inside of an object in ooo, but in oop it crosses borders between objects. But similar to the situation in ooo, objects can come into existence without actively enacting any reality. However, the object structure in oop (which we would call the counterpart to ooo’s real-object-pole) defines what an object can do. This is to be understood as a potential and not as an exhaustive description of the object’s capabilities. In oop, the instance of an object (what we have come to see as its real-qualities-pole) cannot be reduced to the object itself (the real-object-pole) – an object therefore is always more than its rigid structure. If the object has any interface to the outside, which is the case with most objects in oop, there is still no way to know the results of all possible interactions with the object.
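A minimal sketch of the planet class discussed above might look as follows (names and units are illustrative; Python is used for brevity). The constructor brings an individual instance into existence, and the temperature-setting method adds its own logic to prevent “misuse”:

```python
ABSOLUTE_ZERO = -273.15  # degrees Celsius

class Planet:
    """Illustrative sketch of the planet class from the simulation example."""

    def __init__(self, name, diameter_km, avg_temperature):
        # The constructor creates an instance with its individual properties.
        self.name = name
        self.diameter_km = diameter_km
        self._avg_temperature = avg_temperature  # not exposed directly

    def set_avg_temperature(self, celsius):
        # The interface method guards the hidden variable: a value below
        # absolute zero is rejected instead of silently stored.
        if celsius < ABSOLUTE_ZERO:
            raise ValueError("temperature below absolute zero")
        self._avg_temperature = celsius

    def avg_temperature(self):
        return self._avg_temperature


earth = Planet("Earth", 12742, 15.0)
earth.set_avg_temperature(16.0)      # accepted
# earth.set_avg_temperature(-300.0)  # would raise ValueError
```

All interaction with the temperature runs through the interface methods; the variable itself stays on the object’s inside.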
Hierarchy and Inheritance
Let us assume all planets in our solar system simulation have been sufficiently defined. We would still need an object representing the sun. The sun is not a planet, but a star, yet there are properties and probably methods both share, something all celestial bodies incorporate. Since its first incarnation in Simula 67, using the object-oriented programming paradigm is synonymous with organizing objects hierarchically in tree-like structures. Every object has at least one parent object (a superclass) and can have child objects (subclasses). An object then inherits all properties and methods of its superclass (or, in some cases, superclasses) and hands them, along with its own properties and methods, down to its subclasses, which can then add additional properties and methods. So, both classes representing planets and suns should be derived from a superclass representing any celestial body. This celestial body class would then handle properties and methods shared by all its subclasses. Only methods and data necessary for more specific celestial bodies like planets or stars would be defined in their respective subclasses. In oop, a principle of reversed subsidiarity is at work: anything that can be handled at the highest, most abstract level is handled there; only more specific tasks are handled further down the object hierarchy. oop’s terminology, talking of “parent classes,” “child classes,” and “inheritance,” shows the hierarchical tradition in which oop is rooted. Any object in the hierarchy “inherits” all traits from its parent object. Such a hierarchy has at its root an abstract object (CObject in Microsoft’s MFC model), which only consists of abstract methods that make no statement about the specifics of this object at all. Such an object is rarely used directly by software developers, but only through one of its more concrete subclasses.
But not all objects are part of such a hierarchy, such as the CTime object in the MFC model.[note]Microsoft, “CTime Class,” 2015, https://msdn.microsoft.com/en-us/library/78zb0ese.aspx.[/note] CTime is used to represent an absolute time value. Operations on such a value are very basic and needed in a multitude of methods, but it would be hard to logically position a time object somewhere in an all-encompassing hierarchical system. The question of what a representation of a specific time should be derived from is hard to answer. This concept is too basic to be inserted into a hierarchy. So, while CTime objects can be integrated into custom-made hierarchies, they themselves are not derived from any superclass: representations of time are solitary objects within the MFC model.
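The celestial-body hierarchy described above can be sketched as follows (a simplified illustration in Python; class names and properties are hypothetical). Shared traits live in the superclass; planet- and star-specific traits live further down the hierarchy:

```python
class CelestialBody:
    """Superclass: handles what all celestial bodies share."""
    def __init__(self, name, mass_kg):
        self.name = name
        self.mass_kg = mass_kg

class Planet(CelestialBody):
    """Subclass: inherits name and mass, adds planet-specific data."""
    def __init__(self, name, mass_kg, orbital_period_days):
        super().__init__(name, mass_kg)
        self.orbital_period_days = orbital_period_days

class Star(CelestialBody):
    """Subclass: inherits the shared traits, adds star-specific data."""
    def __init__(self, name, mass_kg, luminosity_watts):
        super().__init__(name, mass_kg)
        self.luminosity_watts = luminosity_watts


earth = Planet("Earth", 5.97e24, 365.25)
sun = Star("Sun", 1.99e30, 3.8e26)
# Both inherit the shared structure from CelestialBody:
print(isinstance(earth, CelestialBody))  # True
print(isinstance(sun, CelestialBody))    # True
```

Name and mass are defined once, at the most abstract level, and handed down: the “reversed subsidiarity” the text describes.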
Interface and Implementation
Now that we have a small hierarchy of celestial bodies represented in our object-oriented program design, we still face the task of implementing the actual simulation algorithm. Discussing this algorithm itself is outside our scope. We are more interested in where such an algorithm would be placed in an object-oriented design. This touches a key question of any object-oriented system: where and how do processes take place? Do they happen within objects, between objects, or in both places? While Simondon stresses the notion of objects as being through becoming,[note]Simondon, “The Genesis of the Individual.”[/note] the concepts of both oop and ooo define objects qua their relative stability. In object-oriented ontology, real objects need sensual objects as a bridge between them, leading to a chain of objects. Sensual or real objects cannot touch each other directly. The sensual object acts as an interface between real objects – or the real object as the interface between sensual objects. In object-oriented programming, objects cannot touch directly either: they are broken down into interface and implementation parts. The interface part acts as an – incomplete – directory of methods and variables made available to other objects. It never exposes everything on an object’s inside to the outside. It can even announce methods that, at the time of the announcement, are not yet fully defined. Only when these methods are invoked is a decision made at runtime as to which version of the method is appropriate in the current situation. So, oop’s interface is on the one hand a sensual object, since it serves as the interface to other objects while not exposing the whole enactability on reality of its real object – which would be the implementation. Methods can execute different code, depending on criteria inaccessible from the outside, allowing for a program to change during runtime without damaging the object’s identifiability.
The implementation part on the other hand represents the real object in the totality of its enactability in the program. As for the solar system simulation, in object-oriented programming the obvious implementation would be a superclass representing all the components of a solar system needed for its simulation on a celestial bodies’ level. An instance of such a solar system class would then have to incorporate member objects for every celestial body in the solar system. But which object would be the one to describe the relations between all the data and methods of the solar system object? One could create methods in the solar system class that would contain the algorithm needed for the simulation, like modifying a planet’s position in space depending on the position and movement of other celestial bodies as time progresses. But the intended way of handling such a simulation is a technique called message-passing. Objects can send and receive messages. The concept of message-passing allows for messages to be sent to an object, which then decides how to handle the message. This way an object is able to handle requests dynamically, depending on the type of data sent to it. This illustrates how both sides in an object-to-object interaction are involved. This interaction is not a simple sender-receiver relationship, but a rich exchange in which both objects involved do not fully touch each other, but are selective with regard to which input to accept at all. An object representing a planet could send a message to other planet objects, informing them about its own location in space. These other planets then would change their position in space accordingly. This way one could create a very simple simulation of gravity, but none of the objects involved would have any access to other object properties not needed for the calculation of gravitational effects. So, message-passing is not just a concept of inexhaustibility, it is also a concept of indirection.
Objects do not exhaust each other, they do not even touch directly, but they communicate by messages, which can be seen as an implementation of the concept of sensual objects.
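Message-passing between planet objects can be sketched as follows (a deliberately simplified illustration in Python; names are hypothetical, and the actual gravitational calculation is omitted). Each object decides for itself how, and whether, to handle an incoming message:

```python
class Planet:
    """Planets interact only via messages; internal data stays hidden."""

    def __init__(self, name, x, y):
        self.name = name
        self._x, self._y = x, y  # position, not directly exposed
        self._mailbox = []       # accepted messages

    def receive(self, message):
        # The receiving object decides how to handle the message.
        if message.get("kind") == "position":
            self._mailbox.append(message)
        # Messages of other kinds are ignored: the object is selective
        # with regard to which input to accept at all.

    def broadcast_position(self, others):
        # Inform other planets about this planet's location in space,
        # exposing only what is needed for the gravity calculation.
        for other in others:
            other.receive({"kind": "position", "from": self.name,
                           "x": self._x, "y": self._y})

    def inbox_size(self):
        return len(self._mailbox)


earth = Planet("Earth", 1.0, 0.0)
mars = Planet("Mars", 1.5, 0.0)
earth.broadcast_position([mars])   # accepted by mars
mars.receive({"kind": "gossip"})   # silently ignored: wrong kind
print(mars.inbox_size())           # 1
```

Earth never reaches into Mars; it only sends a message containing its position, and Mars alone decides what to do with it – the indirection the text describes.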
Inexhaustibility of Programs
Let us go back to the solar system simulation example one last time. We found that the object ontology offered by object-oriented programming languages is a lax one, since there can be objects outside the hierarchy. The solar system object, the object which hosts our simulation, would need to be instantiated at some point, since it cannot create itself. There has to be code outside the solar system class. Of course, there might be another object, which again incorporates the solar system class (a superclass to the solar system) representing a galaxy. But the Milky Way is not useful for simulating the gravitational effects in our solar system, and this would just move the problem to another level. The object-oriented programming paradigm is an abstraction from the hardware the program will eventually be running on, since the central processing unit (CPU) does not “know” objects. The compiler or interpreter program must have done its task of translation to machine code before the CPU can run the program – and after this translation the object concept is lost to the CPU. These translator programs reduce object-orientation to a very basic sequence of memory operations, which the chip can process. This would only change if object-oriented hardware were being built, hardware that would render compilers or interpreters useless – but object-oriented chip designs like the Intel iAPX 432, which was introduced in 1981, eventually failed. They were slow and expensive, and new technologies more suitable to the limitations of hardware proved more efficient – and so the idea of object-orientation in chips has only found very limited application.[note]David R. Ditzel and David A. Patterson, “Retrospective on High-Level Language Computer Architecture” (ACM Press, 1980), 97–104, doi:10.1145/800053.801914.[/note] Programming languages have come a long way in the last 60 years.
They have moved from primitive sets of commands for directly accessing a processor’s memory to complex semantics, completely abstracted from the hardware their programs will run on. All high-level programming languages need an intermediary between statements made in such a language and the hardware the programs are supposed to run on – these intermediaries are either compilers (programs that in a time-consuming way translate high-level programming languages to machine code the processor can work with) or interpreters (which basically fulfill the same task in real-time). In any case, there is a medium between the high-level language and the machine.[note]A new generation of chips might end this separation. FPGAs are chips whose hardware can be modified by means of software, effectively blurring the line between software and hardware.[/note] While objects in object-oriented ontology are described as broken down into a real and a sensual part (what we superficially likened to the concepts of implementation and interface in programming), we need to understand that the whole relation of the statements made in a high-level programming language to the hardware the written program will run on is the relation of model and reality. The hardware of the chip forms the ultimate reality of the program, since the hardware defines the reality against which the model put on top of it must work. The reality of the hardware again is its context, the wider environment of the machinery, its applications, and the people using it. The limits of a program’s enactability of its reality lie in the hardware it runs on and the time available. A self-modifying program could enact an infinite amount of reality given enough time. So, the real object is inexhaustible by the relations it enters into with sensual objects. Programs running on a chip can never exhaust it. It is impossible to list all the programs that could be executed on the chip.
It is not even possible to know in advance if all these programs will actually come to an end. Alan Turing described this phenomenon, which later became known as the “halting problem”: it is undecidable if an arbitrary computer program will eventually finish running or will continue running forever.[note]A. M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society s2-42, no. 1 (January 1, 1937): 230–65, doi:10.1112/plms/s2-42.1.230; A. M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction,” Proceedings of the London Mathematical Society s2-43, no. 6 (January 1, 1938): 544–46, doi:10.1112/plms/s2-43.6.544.[/note] The halting problem extends inexhaustibility to the proof of inexhaustibility. Object-oriented ontology aims at treating all objects equally – which rules out a central perpetrator. In object-oriented programming, too, it seems that there is no central perpetrator and objects act independently of a central instance. In reality, object-orientation today is a paradigm put on top of hardware, which is incapable of working without a central perpetrator. So, while the language in which the program is modeled is object-oriented, it is important to understand that these objects are constructions in a language, which again tries to mimic things and relations in reality. Objects act on behalf of themselves as long as one stays at the object’s level of abstraction. On the chip’s level these objects are nonexistent – the CPU only acts upon memory, where certain information is stored. The CPU and the operating system will make decisions without the objects “knowing,” for example for dispatching: since programs today mostly run on computers with more than one central processing unit, it is necessary to distribute tasks (or object methods) to different CPUs.
The intuition of being surrounded by objects with a certain independence from each other is at the root of both models, oop and ooo. But object-oriented ontology rejects the concept of a reducibility of objects to other objects: even though every object can be broken down into its parts (each a new object), these parts do not exhaust the bigger object they form. There is nothing “below” objects in ooo. oop however is a model, which is deliberately put on top of the more primitive and non-intuitive computational concept of memory. This shows how object-oriented programming works only at a certain level of abstraction, thus constituting the major difference between object-oriented programming and object-oriented ontology: the former being a model applied pragmatically in one domain, the latter aiming for a complete metaphysics. ---
Armstrong, Joe. Coders at Work: Reflections on the Craft of Programming. Edited by Peter Seibel. New York: Apress, 2009.
Bellini, Alessandro. “Is Metaphysics Relevant to Computer Science?” Mathema, June 30, 2012. http://www.mathema.com/philosophy/metafisica/is-metaphysics-relevant-to-computer-science/.
Berry, David M. Critical Theory and the Digital. Critical Theory and Contemporary Society. New York: Bloomsbury, 2014.
Biancuzzi, Federico, and Shane Warden, eds. Masterminds of Programming. Sebastopol, CA: O’Reilly, 2009.
Brassier, Ray, Iain Hamilton Grant, Graham Harman, and Quentin Meillassoux. “Speculative Realism.” In Collapse, edited by Robin Mackay, III:306–449. Oxford: Urbanomic, 2007.
Ditzel, David R., and David A. Patterson. “Retrospective on High-Level Language Computer Architecture,” 97–104. ACM Press, 1980. doi:10.1145/800053.801914.
Harman, Graham. Bells and Whistles: More Speculative Realism. Winchester: Zero Books, 2013.
———. The Quadruple Object. Winchester: Zero Books, 2011.
Kant, Immanuel. Critique of Pure Reason. Edited by Paul Guyer and Allen W. Wood. The Cambridge Edition of the Works of Immanuel Kant. Cambridge: Cambridge University Press, 1998.
Meyer, Bertrand. Object-Oriented Software Construction. Prentice-Hall International Series in Computer Science. New York: Prentice-Hall, 1988.
Shapiro, Alan. Die Software der Zukunft oder: das Modell geht der Realität voraus. International Flusser lectures. Köln: König, 2014.
Simondon, Gilbert. “The Genesis of the Individual.” In Incorporations, edited by Jonathan Crary and Sanford Kwinter, 297–319. New York: Zone, 1992.
Tarko, Vlad. “The Metaphysics of Object Oriented Programming,” May 28, 2006. http://news.softpedia.com/news/The-Metaphysics-of-Object-Oriented-Programming-24906.shtml.
Turing, A. M. “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society s2-42, no. 1 (January 1, 1937): 230–65. doi:10.1112/plms/s2-42.1.230.
———. “On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction.” Proceedings of the London Mathematical Society s2-43, no. 6 (January 1, 1938): 544–46. doi:10.1112/plms/s2-43.6.544.
"We are seduced by the interface into neglecting the work behind it, and the operationalization and instrumentalization of dreams that takes place. The interface appears mythical, absolute and frozen."