The cold, calculating "mind" of a machine has no capacity for emotion. Alexa doesn't care if you call it names. DeepMind's AlphaGo will never actually taste the sweet joy of victory. Despite this, they're more like humans than you might think: they're nearly always wrong and virtually incapable of being rational.
British statistician George Box famously wrote "all models are wrong, but some are useful" in a research paper published in 1976. He was referring to statistical models, but the aphorism is widely accepted to apply to computer models as well, which makes it relevant to AI.
The reason why all models are wrong is simple: they're based on limited information. Human perception through our five senses isn't powerful enough to pick up on all available data in a given situation. Worse, our brains couldn't process all the available information even if we were able to gather it.
Tshilidzi Marwala, Vice Chancellor at the University of Johannesburg, recently published a research paper discussing the possibility of rational AI. He explains the problem:
AI models are not physically realistic. They take observed data and fit sophisticated yet physically unrealistic models. Because of this reality they are black boxes with no direct physical meaning. Because of this reason they are definitely wrong yet are useful.
If in our AI equation y=f(x), f the model is wrong as stipulated by Box then despite the fact that this model is useful in the sense that it can reproduce reality, it is wrong. Because it is wrong, when used to make a decision such a decision cannot possibly be rational. A wrong premise cannot result in a rational outcome! In fact the more wrong the model is the less rational the decision is.
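Box's point is easy to demonstrate with a toy experiment (an illustrative sketch of my own, not from Marwala's paper): fit a straight line y = f(x) to data generated by a curved process. The line reproduces the observations well enough to be useful, yet it is structurally wrong, and the wrongness surfaces the moment you step outside the data it was fitted on.

```python
# Toy demonstration of "all models are wrong, but some are useful".
# The hidden process is y = x^2; the model is a least-squares straight
# line. (Illustrative example only, not from Marwala's paper.)

xs = [i / 50 for i in range(51)]      # observations on [0, 1]
ys = [x * x for x in xs]              # hidden reality: a parabola

# Ordinary least squares for a line y = slope * x + intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def f(x):
    return slope * x + intercept

# Useful: on the data it saw, the line is never off by much.
in_sample_error = max(abs(f(x) - y) for x, y in zip(xs, ys))

# Wrong: step outside the data and the error dwarfs the in-sample fit.
extrapolation_error = abs(f(2.0) - 2.0 ** 2)
```

Any decision that trusts f outside the region it was fitted on inherits the model's wrongness, which is Marwala's point: a decision built on a wrong premise can't be fully rational.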
Imagine falling out of an airplane and plummeting 15,000 feet without a parachute. You're simply not going to be capable of understanding the gazillions (not a technical measurement) of tiny details necessary to ensure your survival, like air speed, or a million adjustments-per-second to optimize your trajectory.
But a bird's brain understands the nuances of air currents in ways humans cannot. Granted, birds have hollow bones and wings, but even without those physical advantages they have a better mind for flight than our advanced human brain. So, in this particular scenario, you'd theoretically be better off with a tiny bird brain than your big old human mind.
Still, birds aren't rational. Just like humans, they're trying to avoid making fatal mistakes, not optimize their systems for maximum utility.
The point is, no matter how advanced a system becomes, if it operates on the same principles as the human brain (or any other organic mind), it's flawed.
The human brain is a wrong-engine: it's more useful to apply Occam's Razor (i.e., reduce the potential responses to fight or flight) than it is to parse a slightly less limited set of variables.
AI, currently, isn't any different. It has to either be fed information (thus limiting its access) or be taught how to find information for itself (thus limiting its parameters for selecting relevant data). Both scenarios make AI as much of a "wrong-engine" as the human brain.
Of course, the only solution is to build rational AI, right? Not according to Marwala. His research didn't have a happy ending:
This paper studied the question of whether machines can be rational. It examined the limitations of machine decision making and these were identified as the lack of complete and perfect information, the imperfection of the models as well as the inability to identify the global optimum utility. This paper concludes that machines can never be fully rational and that the best they can achieve is to be bounded rationally. However, machines can be more rational than humans.
Marwala believes that, with the exception of a few convex problems, we'll never have unbounded rationality, in people or in machines, because it's impossible to know whether a given decision is globally optimal.
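That convexity caveat is worth unpacking. Here is a minimal sketch (my example, not Marwala's; the functions and step sizes are arbitrary choices): for a convex function, wherever gradient descent settles is provably the global optimum, so the decision can be certified as rational. For a non-convex function, where you land depends on where you started, and no local check can tell you whether a better optimum exists elsewhere.

```python
# Why convexity is the exception: gradient descent on a convex function
# always finds the one global minimum; on a non-convex function it finds
# whichever local minimum is nearest its starting point. (Illustrative
# sketch; functions and learning rates are arbitrary assumptions.)

def gradient_descent(grad, x, lr, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Convex: f(x) = (x - 3)^2 has exactly one stationary point, the global
# minimum at x = 3. Every starting point converges to it.
from_left = gradient_descent(lambda x: 2 * (x - 3), x=-10.0, lr=0.1)
from_right = gradient_descent(lambda x: 2 * (x - 3), x=10.0, lr=0.1)

# Non-convex: g(x) = (x^2 - 1)^2 has two minima, at x = -1 and x = +1.
# The "optimum" you find depends entirely on where you happened to start,
# and nothing local tells you another basin even exists.
grad_g = lambda x: 4 * x * (x * x - 1)
basin_a = gradient_descent(grad_g, x=-2.0, lr=0.01)
basin_b = gradient_descent(grad_g, x=2.0, lr=0.01)
```

Bounded rationality, in this picture, is settling into basin_a while having no way to rule out that basin_b (or some basin you never sampled) was better.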
Whether he's correct or simply hasn't found the solution yet, an interesting byproduct of his thinking is that the importance of artificial general intelligence (AGI) hinges entirely on whether he's right. If he is, then AGI is the ultimate goal of machine learning.
We'll need machines that can imitate or beat human-level general intelligence to arrive as quickly as possible, so that we can then spend the rest of our species' existence tweaking the formula.
But if he's wrong, AGI is a MacGuffin: a means to get people working on a problem they can't attack just yet, rational AI.
And if you think the idea of sentient robots is a radical one, try wrapping your head around one that's borderline omniscient. A machine capable of unbounded rationality would, by definition, be a near-perfect decision-making machine.
What do you think? Is rational AI achievable or will our future overlords need to evolve like their creators?