New scientific understanding and engineering techniques have always impressed and frightened. No doubt they will continue to do so. OpenAI recently announced that it anticipates “superintelligence” (AI surpassing human abilities) this decade. It is accordingly building a new team and devoting 20% of its computing resources to ensuring that the behaviour of such AI systems is aligned with human values.
It seems the company doesn’t want rogue superintelligent AIs waging war on humanity, as in James Cameron’s 1984 science fiction thriller The Terminator (ominously, Arnold Schwarzenegger’s Terminator is sent back in time from 2029). OpenAI is calling for top machine-learning researchers and engineers to help it tackle the problem.
But might philosophers have something to contribute? More generally, what can be expected of the age-old discipline in the new technologically advanced era that is now emerging?
To begin to answer this, it is worth stressing that philosophy has been instrumental to AI since its inception. One of the first AI success stories was a 1956 computer program, dubbed the Logic Theorist, created by Allen Newell and Herbert Simon. Its job was to prove theorems using propositions from Principia Mathematica, a three-volume work published from 1910 by the philosophers Alfred North Whitehead and Bertrand Russell, which aimed to reconstruct all of mathematics on one logical foundation.
Indeed, the early focus on logic in AI owed a great deal to the foundational debates pursued by mathematicians and philosophers.
One significant step was the German philosopher Gottlob Frege’s development of modern logic in the late 19th century. Frege introduced into logic the use of quantifiable variables, rather than names for particular objects such as people. His approach made it possible not only to say, for example, “Joe Biden is president”, but also to express systematically such general thoughts as “there exists an X such that X is president”, where “there exists” is a quantifier and “X” is a variable.
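In the notation that descends from Frege’s work, the contrast can be sketched as follows (the predicate name “President” is our own illustrative choice):

```latex
% A singular claim about a named individual:
\mathrm{President}(\text{Joe Biden})

% A general, quantified claim: "there exists an X such that X is president".
% Here \exists is the existential quantifier and X is a variable.
\exists X \, \mathrm{President}(X)
```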
Other important contributions in the 1930s came from the Austrian-born logician Kurt Gödel, whose completeness and incompleteness theorems concern the limits of what one can prove, and the Polish logician Alfred Tarski, whose “proof of the indefinability of truth” showed that “truth” in any standard formal system cannot be defined within that particular system. Arithmetical truth, for example, cannot be defined within the system of arithmetic.
Finally, the abstract notion of a computing machine, introduced by the British pioneer Alan Turing in 1936, drew on these developments and had a huge impact on early AI.
It might be said, however, that even if such good old-fashioned symbolic AI was indebted to high-level philosophy and logic, “second-wave” AI, based on deep learning, derives more from the concrete engineering feats associated with processing vast quantities of data.
Still, philosophy has played a role here too. Take large language models, such as the one that powers ChatGPT, which produces conversational text. They are enormous models, with billions or even trillions of parameters, trained on vast datasets (typically comprising much of the internet). But at their heart, they track — and exploit — statistical patterns of language use. Something very much like this idea was articulated by the Austrian philosopher Ludwig Wittgenstein in the middle of the 20th century: “The meaning of a word,” he said, “is its use in the language.”
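To make the idea of tracking “use in the language” concrete, here is a minimal sketch of our own: a bigram model that counts which words follow which in a corpus and samples continuations from those counts. Real LLMs are neural networks with billions of parameters rather than lookup tables, but the principle of exploiting statistical regularities in usage is the same.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for "much of the internet".
corpus = "the meaning of a word is its use in the language".split()

# Count, for each word, which words follow it (bigram statistics).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample a continuation in proportion to how often it was observed."""
    words, weights = zip(*following[word].items())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # prints "meaning" or "language"
```

Scaled up from one sentence to trillions of words, and from simple counts to learned neural representations, this statistical picture of meaning-as-use is the heart of a language model.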
But contemporary philosophy, and not just its history, is relevant to AI and its development. Could an LLM truly understand the language it processes? Might it achieve consciousness? These are deeply philosophical questions.
Science has so far been unable to fully explain how consciousness arises from the cells in the human brain. Some philosophers even believe that this is such a “hard problem” that it could be beyond the scope of science and may require a helping hand from philosophy.
In a similar vein, we can ask whether an image-generating AI could be truly creative. Margaret Boden, a British cognitive scientist and philosopher of AI, argues that while AI will be able to produce new ideas, it will struggle to evaluate them as creative people do.
She also anticipates that only a hybrid (neural-symbolic) architecture — one that uses both logical techniques and deep learning from data — will achieve artificial general intelligence.
Human values
To return to OpenAI’s announcement, when prompted with our question about the role of philosophy in the age of AI, ChatGPT suggested to us that (amongst other things) it “helps ensure that the development and use of AI are aligned with human values.”
In this spirit, perhaps we may be permitted to suggest that, if AI alignment is the serious issue that OpenAI believes it to be, it is not just a technical problem to be solved by engineers or tech companies, but also a social one. That will require input not only from philosophers, but also from social scientists, lawyers, policymakers, citizen users and others.
Indeed, many people are worried about the rising power and influence of tech companies and their impact on democracy. Some argue we need a whole new way of thinking about AI — taking into account the underlying systems supporting the industry. The British barrister and author Jamie Susskind, for example, has argued it is time to build a “digital republic” — one which ultimately rejects the very political and economic system that has given tech companies so much influence.
Finally, let us briefly ask how AI will affect philosophy. Formal logic in philosophy dates to Aristotle’s work in antiquity. In the 17th century, the German philosopher Gottfried Leibniz suggested that we may one day have a “calculus ratiocinator”, a calculating machine that would help us derive answers to philosophical and scientific questions in a quasi-oracular fashion.
Perhaps we are now beginning to realise that vision, with some authors advocating a “computational philosophy” that literally encodes assumptions and derives consequences from them. This ultimately allows factual and value-oriented assessments of the outcomes.
For example, the PolyGraphs project simulates the effects of information sharing on social media. This can then be used to computationally address questions about how we ought to form our opinions.
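To give a flavour of the approach (this is our own hypothetical toy model, not the PolyGraphs codebase), one can encode agents who each hold a credence, a degree of belief in some claim, let randomly paired agents share and average their opinions, and observe what the group converges on:

```python
import random

random.seed(0)
NUM_AGENTS, ROUNDS = 10, 50

# Each agent starts with a random credence (degree of belief) in some claim.
credences = [random.random() for _ in range(NUM_AGENTS)]

for _ in range(ROUNDS):
    # Two agents meet at random and share information, crudely modelled
    # as both adopting the average of their two opinions.
    a, b = random.sample(range(NUM_AGENTS), 2)
    credences[a] = credences[b] = (credences[a] + credences[b]) / 2

print([round(c, 2) for c in credences])  # opinions drift towards consensus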
Certainly, progress in AI has given philosophers plenty to think about; it may even have begun to provide some answers.
Anthony Grayling, Professor of Philosophy, Northeastern University London, and Brian Ball, Associate Professor of Philosophy, AI and Information Ethics, Northeastern University London
This article is republished from The Conversation under a Creative Commons license. Read the original article.