The question of the place of mind in nature is one of the oldest and deepest questions in philosophy and the sciences. If the physical sciences tell us ‘what nature is’, what happens to mental phenomena: thoughts, beliefs, desires, emotions, memories, and ideals? For centuries, philosophers, theologians, and scientists argued about what a “science of the soul” – a logos of the psyche – would be. In the 19th century, the rise of experimental psychology and electrophysiology seemed to put a scientific image of mind within our grasp. Yet the concept of intentionality remained elusive. How could one bit of matter, a thing, be “about” another thing, another piece of matter? We can have thoughts about things that don’t exist – but if intentionality is a relation, how can there be a relation when one of its relata doesn’t even exist?
In the middle of the 20th century, a promising solution to this problem was proposed by the American philosopher Wilfrid Sellars (1912-1989). Ironically, almost no one at the time (and few since) understood his solution. This is because his solution drew upon the science of cybernetics, and by the time Sellars was being widely read in the 1960s, the cybernetics of the 1940s had already been forgotten. Gathering up, re-collecting, what has been forgotten and neglected will help us understand Sellars’s achievement in the philosophy of mind – and how much more work remains to be done.
What Cybernetics Was (and Why It Was Forgotten)
We live in an age of cyber_____: cybersecurity, cyberwarfare, cyberpunk, cybersex, cyberbullying, and even Cyber Monday. We know that “cyber” indicates something to do with the Internet – though the ancient Greeks (fortunately for them) lacked this concept. So where does this term come from? The origin lies in a Greek word, kybernetes, a steersman on a boat – from which we also have the word “governor”. But it was the American polymath Norbert Wiener (1894-1964) who took this word and turned it into cybernetics: the science of control and communication in animals and machines (Wiener 1948).
Cybernetics, as understood by Wiener and others, was the science of systems that included a feedback loop. A feedback loop was necessary for signal processing in electronic communications and computing machines. But this meant that what the system did, how it affected its environment, could also affect the system itself. What the system did made a difference to what it could do. This promised both a new science of self-controlling mechanisms and a strictly mechanistic implementation of the dynamical interaction between organisms and their environments.
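The core idea of a feedback loop can be made concrete with a toy example. The sketch below (my own illustration, not drawn from Wiener or Sellars) is a simple thermostat-style proportional controller: the system senses its state, acts on the environment in proportion to its distance from a goal, and that action changes what it senses next. What the system does makes a difference to what it can do.

```python
# A minimal, purely illustrative negative feedback loop: a toy thermostat.
# The output (heating) feeds back into the input (the sensed temperature).

def run_thermostat(setpoint, temperature, steps):
    """Proportional controller: heat in proportion to the current error."""
    history = []
    for _ in range(steps):
        error = setpoint - temperature  # compare sensed state to the goal
        heating = 0.5 * error           # act on the environment
        temperature += heating          # the action changes what is sensed next
        history.append(round(temperature, 3))
    return history

trace = run_thermostat(setpoint=20.0, temperature=10.0, steps=6)
print(trace)  # the temperature converges toward the setpoint
```

The loop is "circular" in exactly the sense the early cyberneticists emphasized: cause and effect run both from system to environment and back again.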
To understand why cybernetics was appealing to Sellars, and also why this appeal was generally not recognized by Sellars’s admirers and critics (even to this day), it is helpful to set cybernetics in historical context (see Dupuy 2009; Kline 2015; Pickering 2010; Rid 2016). Cybernetics was largely a science of the Second World War in at least two major respects. Firstly, because the problems that cybernetics was interested in solving – using feedback loops to generate predictions in complex systems – were problems that arose in anti-aircraft defense during WW II. Secondly, because during the war there was extensive financial support, both government and private, for interdisciplinary collaboration. After the war, government funding turned towards basic science that tended to be housed in disciplinary “silos”.
By the 1960s, when Sellars was making use of cybernetics in thinking about the nature of the mind, cybernetics was well on its way out. Several contributing factors, personal and political, drove this eclipse. Among them: (1) cybernetics had limited real-world successes, even as (2) it had become so popular as a term and concept that it no longer seemed properly scientific; (3) cybernetic concepts had become entrenched in the Counterculture (e.g. Stewart Brand’s popularization of cybernetics in his Whole Earth Catalog); and (4) Wiener refused to accept Department of Defense funding while insisting on remaining identified with cybernetics, which led (5) other researchers to stop using the term cybernetics and start using terms like “artificial intelligence” and “computer science”. By the time Sellars refers to “recent cybernetics” in an article published in 1960, there was little reason to think that his audience would know what he was referring to.
How Sellars Uses Cybernetics to Naturalize Intentionality
Sellars assigns cybernetics a role in his philosophy of mind that is at once marginal and central. He mentions it only once – but on the last page of the second essay in his collection Science, Perception, and Reality (SPR). Sellars chose the order of the essays in SPR, beginning with his metaphilosophical programme, “Philosophy and the Scientific Image of Man” (Sellars 1963a). If the first essay argues that the task of philosophy is to reconcile the scientific image of humanity with the accumulated wisdom of the philosophia perennis, the next essay shows us how to begin doing that. And that is the project of Sellars’s essay “Being and Being Known” (Sellars 1963b), in which cybernetics plays a central role and is mentioned by name in the concluding paragraph.
“Being and Being Known” has a complex dialectic for its argument, and it has not received as much sustained commentary and analysis as many of Sellars’s other essays. However, I think that its placement in SPR indicates how important it is to Sellars’s philosophical system. The first half of the essay is a complex analysis of some insights in Aristotelian philosophy of mind that were neglected in the subsequent Cartesian and post-Cartesian traditions. In the second half, Sellars insists on not only accepting intentionality but also explaining it in mechanistic terms. To do this, Sellars turns to cybernetics.
The turn to cybernetics is announced by asking us to consider how problems of intentionality would appear to an electrical engineer tasked with designing and building an extremely sophisticated robot. We are asked to imagine a robot that can (1) emit electrical impulses from its body; (2) receive ‘echoes’ as those impulses are reflected towards it from the environment; (3) use transducers to convert those echoed impulses into computational states; (4) manipulate those computational states in rule-governed ways; (5) convert the results of those manipulations into impulses that terminate in effectors; (6) thereby navigate the environment in order to build an increasingly comprehensive cognitive map of that environment (Huebner 2018; Sachs 2019).
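The six steps above can be sketched in miniature. The code below is a purely illustrative toy (not Sellars’s own formalism, and with function names of my own invention): a robot on a one-dimensional strip of cells that senses its current cell, records the reading in a growing cognitive map, and moves its effector one cell along.

```python
# An illustrative sketch of the sense-compute-act cycle: a robot mapping
# a 1-D world. The names and world model are hypothetical, for illustration only.

def sense(world, position):
    """Steps 1-3: emit a 'ping' and transduce the echo into a computational state."""
    return world[position]

def update_map(cognitive_map, position, reading):
    """Step 4: rule-governed manipulation - record what was sensed, and where."""
    cognitive_map[position] = reading
    return cognitive_map

def act(position, world_size):
    """Steps 5-6: drive the effectors - here, simply move one cell to the right."""
    return min(position + 1, world_size - 1)

world = ["wall", "open", "open", "wall", "open"]
position, cognitive_map = 0, {}
for _ in range(len(world)):
    reading = sense(world, position)
    cognitive_map = update_map(cognitive_map, position, reading)
    position = act(position, len(world))

print(cognitive_map)  # an increasingly comprehensive map of the environment
```

Each pass through the loop closes the circuit from effector back to transducer: where the robot moves determines what it senses next, and what it senses determines where it moves.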
If a robot could do all of that, Sellars suggests, it would be able to implement all of the functional roles that we refer to with mentalistic terms such as “observe”, “infer”, and “act”. Yet we could always adopt the perspective of the engineer and look ‘under the hood’ to observe nothing more than highly complex machinery at work. Such a robot would be a genuinely cybernetic device because the transducers and effectors that structurally couple the robot to its environment enable complex dynamical feedback loops.
Importantly, the computational states of the machine-brain are only fully intelligible in light of their functional integration with the body and the environment; cybernetics, unlike later cognitive science, does not neglect the importance of ‘brain’-body-environment feedback loops. But it does maintain that the ‘brain’ component of this highly complex system can be understood in computational terms, at least with regard to psychological explanations.
At this point we have not yet introduced genuinely intentional vocabulary, such as “means” or “is about”, which is crucial for attributing propositional attitudes (e.g., beliefs, desires) to others or even to ourselves. Sellars’s suggestion is that it is we who would attribute intentionality to the robot in order to communicate with it. In order to coordinate our behavior with its behavior, we would need to interpret its computational states (including the relation of those states with the behavior of its transducers and effectors) as “meaning the same thing” as what we refer to as observations, inferences, and volitions. But just as ‘under the hood’ of the robot there is only complex electrical machinery, under our hood there is only complex bio-electrical machinery. Intentionality is not found inside the brain, or in the brain-body-environment feedback loops, but rather in the complex dynamical interactions between multiple brain-body-environment loops: those loops which comprise a linguistic community (Stovall 2022; Sachs forthcoming).
Minds as Machines or as Organisms?
Sellars seems to naturalize intentionality – but at the cost of giving up on what makes minds a distinct feature of living things. Instead, Sellars envisions mindedness as indifferent to whether it is realized in an organism or in a machine. It does not seem to matter that the robot of “Being and Being Known” is not alive – what matters is whether we can use intentional language in order to cooperate with the robot in constructing a shared cognitive map. (Presumably the robot would also learn how to use intentional language in the process.)
At this point it seems as if Sellars’s philosophy of mind is a version of machine-state functionalism – though Sellars differs from Fodor (1975) by insisting on the structural coupling of the computer to body and to environment. Although Sellars does not have a wholly disembodied conception of mind, neither does he anticipate the rise of anti-representationalist cognitive science (Chemero 2009; Di Paolo et al. 2017; Di Paolo et al. 2018; Thompson 2007; Varela et al. 1991; Wheeler 2005). Does Sellars anticipate a road not taken, a neglected alternative in the history of cognitive science? Or is his version – let’s call it necessarily embodied computational functionalism – inconsistent or implausible?
Before deciding on that question, we should also consider how Sellars’s philosophy of mind should be revised in light of the history of cybernetics itself. Tom Froese (2010; 2011) argues that cybernetics split into two distinct research trajectories: one that led to computer and cognitive science and another that led to second-order cybernetics and autopoiesis. Are there reasons for reading Sellars as closer to one of those research programs than the other?
At the time that Sellars was writing, cybernetics seemed to promise that teleology could be naturalized (Rosenblueth et al. 1943; Wiener 1948) – in fact, the Macy Conferences in Cybernetics were called, at first, the Macy Conferences on Teleological Mechanisms and Circular Causality. I believe that this is why Sellars turned to cybernetics in order to naturalize intentionality: if intentionality is a teleological concept, it stands to reason that intentionality cannot be naturalized unless teleology is also naturalized. A naturalized conception of teleology is necessary for a naturalized conception of intentionality.
But must a naturalized conception of teleology be a mechanized conception? If so, then naturalizing teleology would entail that the distinction between organisms and machines is irrelevant to cognitive science and philosophy of mind. Sellars, following the early cyberneticists, clearly seems to think so – if he didn’t, we could not attribute intentional states to the “Being and Being Known” robot. By contrast, recent developments in late-20th- and 21st-century theoretical biology, building on second-order cybernetics, suggest that we can naturalize teleology without rejecting the distinction between organisms and machines. For example, we could think of teleology as an emergent property of the structural coupling between an organizationally closed system and the environment to which it is thermodynamically open (Moreno and Mossio 2015); similarly, it may be reasonable to reconceptualize teleology in thermodynamic terms as “teleodynamics” (Deacon 2012).
Given the possibility of a non-mechanistic naturalization of teleology, it remains to be seen how much of Sellars’s proposal to naturalize intentionality using cybernetics could be re-conceptualized. We would certainly need to question the adequacy of a mechanical robot as a thought-experiment illustrating the place of mind in nature. But it would be a problematic return to vitalism to say that there is something magical about life itself. We would need to augment Sellars’s thought-experiment in light of contemporary theoretical biology in order to equip the robot with the kind of organizational closure and thermodynamic openness that enables the emergence of teleology in living organisms. Doing that would, I think, advance the quest for a scientific image of mind for the 21st century.
References
Chemero, Anthony. 2009. Radical Embodied Cognitive Science. The MIT Press.
Deacon, Terrence. 2012. Incomplete Nature: How Mind Emerged From Matter. W. W. Norton and Company.
Di Paolo, Ezequiel, Thomas Buhrmann, and Xabier E. Barandiaran. 2017. Sensorimotor Life: An Enactive Proposal. Oxford University Press.
Di Paolo, Ezequiel, Elena Clare Cuffari, and Hanne De Jaegher. 2018. Linguistic Bodies: The Continuity between Life and Language. The MIT Press.
Dupuy, Jean-Pierre. 2009. On the Origins of Cognitive Science. Trans. M. B. DeBevoise. The MIT Press.
Fodor, Jerry. 1975. The Language of Thought. Harvard University Press.
Froese, Tom. 2010. “From Cybernetics to Second-Order Cybernetics: A Comparative Analysis of Their Central Ideas,” in Constructivist Foundations, 5,2: 75-85.
Froese, Tom. 2011. “From Second Order Cybernetics to Enactive Cognitive Science: Varela’s Turn from Epistemology to Phenomenology” in Systems Research and Behavioral Science, 28, 6: 631-645.
Huebner, Bryce. 2018. “Picturing, Attending, and Signifying” in Belgrade Philosophical Annual 31: 7-40.
Kline, Ronald. 2015. The Cybernetics Moment: Or Why We Call Our Age the Information Age. Johns Hopkins University Press.
Moreno, Alvaro and Matteo Mossio. 2015. Biological Autonomy: A Philosophical and Theoretical Enquiry. Springer.
Pickering, Andrew. 2010. The Cybernetic Brain: Sketches of Another Future. University of Chicago Press.
Rid, Thomas. 2016. Rise of the Machines: A Cybernetic History. W. W. Norton and Company.
Rosenblueth, Arturo, Norbert Wiener, and Julian Bigelow. 1943. “Behavior, Purpose and Teleology” in Philosophy of Science 10: 18-24.
Sachs, Carl. 2019. “In Defense of Picturing: Sellars’s Philosophy of Mind and Cognitive Neuroscience” in Phenomenology and the Cognitive Sciences 18: 669-689.
Sachs, Carl. Forthcoming. “A Cybernetic Theory of Persons: How Sellars Naturalized Kant” in Philosophical Inquiries.
Sellars, Wilfrid. 1963a. “Philosophy and the Scientific Image of Man” in Science, Perception, and Reality. Atascadero: Ridgeview Publishing Company, 1-40.
Sellars, Wilfrid. 1963b. “Being and Being Known” in Science, Perception, and Reality. Atascadero: Ridgeview Publishing Company, 41-59.
Stovall, Preston. 2022. The Single-Minded Animal: Shared Intentionality, Normativity, and the Foundations of Discursive Cognition. Routledge.
Thompson, Evan. 2007. Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Harvard University Press.
Varela, Francisco, Evan Thompson, and Eleanor Rosch. 1991. The Embodied Mind. The MIT Press.
Wheeler, Michael. 2005. Reconstructing the Cognitive World. The MIT Press.
Wiener, Norbert. 1948. Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge: The MIT Press.
Bill Ross says:
Historical note: I was an undergrad majoring in Philosophy at Pitt in the 70’s, and to my mind no one at all well-read was ignorant about cybernetics in the way depicted here. Remember that the internet didn’t exist, so knowledge wasn’t noticeably dropping into oblivion as now. E.g. not that long ago, some departmental history was linked from Pitt’s department’s page. Four emeriti is all I can find now:
https://www.philosophy.pitt.edu/people/emeritus-faculty
I only remember Sellars by name from then, though I was interested in P of Mind both there and in compsci at Berkeley. Now in retirement, I’m building a cargo-cultish, ad hoc mind using what I call ‘Rorschach pairs’ of photos, labeled {like|meh}, hoping to derive a universal language by navigating a “latent space” / static representation of the relations of the granules of meaning and meaninglessness.
Carl Sachs says:
Bill,
Hello! I appreciate your remark that I got the history wrong. But to my mind, that still leaves unanswered the question of why the importance of cybernetics to Sellars was not well recognized. It’s possible that I’m mistaken, and it’s not really that important at all.
Ryan Clark says:
Great article!
…But I still don’t think Sellars succeeds in naturalizing (or physicalizing) intentionality or teleology because his arguments only really deal with intentional-like or teleological-like *behaviors*, but not intentionality or teleology themselves.
IMHO.
Carl Sachs says:
Thank you! But what would be the difference between “intentional-like or teleological-like *behaviors*” and “intentionality or teleology themselves”? How would we know whether we’ve got one, or the other?