Chinese media commentator Kecheng Fang gave the talk “More Information, Less Diversity: The Problems with Digital Media and How Can We Fix Them” to the NYU Shanghai community on April 16. He stressed that the AI-driven recommendation algorithms behind social media feeds build a strong barrier between ordinary users and diverse opinions rather than exposing users to them. On social media, less serious but more entertaining content drains people’s attention. This is the defining feature of one of the most profitable business models of our time: the attention economy. As Fang analyzes it, the product of social media is neither the newsfeed nor the commercials but the users themselves. Profits are made by directing users’ limited attention toward the platforms’ paying clients.
In this respect, the biased data-filtering AI serves as a cash-generating machine for capitalists. This raises some interesting questions. Does such an AI have any perception of what it is fed? I would answer “yes.” Does it understand, or, say, have a mind? I would answer “probably not.” The reason is that the recommendation AI does perceive its feeds in a computational way, but perception does not necessarily amount to understanding, which would require awareness of the larger context involved, such as its remarkable achievements in the business world and the social consequences that follow.
To discuss the problem more concretely, I would like to propose a test analogous to the Turing Test, which I will call the Chitchat Test. The interrogator, who must be a moderate social media user, chats with two subjects. One is a real person who shares some similarities with the interrogator, such as job, educational background, or political standpoint. The other is a conversational AI system that has scanned the interrogator’s social media records and built a set of profiles of different types, say, political and psychological ones. Both subjects have access to the internet during the test, and the human subject may also browse the interrogator’s social media accounts in real time. Each turn of the conversation is allotted a fixed amount of time, and the total duration depends on the number of turns, so the test does not become a contest of speed. The result depends on which side makes the more pleasant conversation. If the AI passes, and I would bet it has a fairly good chance, we can assume that it, to some extent, thinks, or at least understands the interrogator’s mind well enough.
During the Chitchat Test, beyond casual talk that involves no opinions or values, the AI can cluster (perceive) the interrogator’s speech into a certain profile category and respond (react) with online data fetched and organized in real time, based on a comprehensive mining of his preferences. These perceptions are built on its previous profiling (understanding), which abstracts its feeds into many layers of high-dimensional matrices. The interrogator will likely find the AI a comfortable conversation partner; it would not be easy to find a real person who knows him this well. In this hypothetical situation, can we be sure that the AI understands him? Philosopher John Searle has already raised this concern.
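To make the perceive/react loop concrete, here is a minimal sketch under loose assumptions: the profiles, keywords, and canned responses below are all hypothetical, and word-count overlap stands in for the high-dimensional embeddings and real-time data mining a real system would use.

```python
from collections import Counter

# Hypothetical profiles mined from the interrogator's social media history.
# A real system would learn high-dimensional embeddings; word counts stand in here.
PROFILES = {
    "politics": Counter("election policy vote government reform".split()),
    "food": Counter("noodles restaurant recipe taste dinner".split()),
}

# Pre-fetched content per profile (a stand-in for real-time online data mining).
MINED_CONTENT = {
    "politics": "Did you see the latest poll numbers?",
    "food": "There's a new fried-noodle place you might like.",
}

def classify(message: str) -> str:
    """Cluster (perceive) a message into the closest profile by word overlap."""
    words = Counter(message.lower().split())
    return max(PROFILES, key=lambda p: sum((words & PROFILES[p]).values()))

def respond(message: str) -> str:
    """React with pre-mined content matched to the perceived profile."""
    return MINED_CONTENT[classify(message)]
```

Note that nothing in this loop requires the system to grasp what noodles or elections are; it only matches patterns, which is precisely the gap the essay goes on to probe.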
To some extent, the Chitchat Test is similar to Searle’s Chinese Room argument, which questions the player’s capability of understanding. In the Chinese Room, the interrogator speaks Chinese while the player does not; however, the player is given a rule book that enables him to respond properly. Searle argues that the player in this case cannot be said to understand the interrogator. The player may know that “chow mein” names a tasty dish yet have no idea how Chinese fried noodles actually taste. The same might apply to the AI in the Chitchat Test, where the rule book is replaced with social media mining: satisfactory literal responses do not necessarily amount to actual understanding of the topic.
In the Chitchat Test, the AI’s understanding of the interrogator is really a perception of what the person prefers, not of why. Although the AI might find strong evidence on the internet to back its ideas, which seems to demonstrate an understanding of why, there is no examination of the causation behind values and facts in its architecture; the seemingly legitimate reasoning is a simulation produced by its correlation-based recommendation algorithms. This argument leads us to the vague boundary between causation and correlation, as David Hume might point out. Therefore, I would personally encourage the AI to fake it (use correlations in place of causation) until it makes it, that is, until no one doubts it.
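The point that correlation can stand in for causation can be sketched with a toy co-occurrence recommender; the sessions below are invented for illustration, and the system counts only which items appear together, never modeling why.

```python
from collections import Counter, defaultdict

# Hypothetical browsing sessions: the recommender sees only co-occurrence,
# never the user's reasons for choosing these items together.
sessions = [
    {"noodles", "chopsticks"},
    {"noodles", "chopsticks", "soy sauce"},
    {"chopsticks", "soy sauce"},
]

# Count pairwise co-occurrences: the only "knowledge" the recommender has.
cooccur = defaultdict(Counter)
for session in sessions:
    for a in session:
        for b in session:
            if a != b:
                cooccur[a][b] += 1

def recommend(item: str) -> str:
    """Suggest the item most correlated with `item`.

    No causal model: people may buy chopsticks *because* they bought
    noodles, or for an unrelated reason; the counts cannot tell.
    """
    return cooccur[item].most_common(1)[0][0]
```

The recommendations can look insightful, yet the mechanism is pure correlation, which is exactly the "fake it" strategy described above.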
Last but not least, none of the factors above seems to grant the AI awareness of the larger context of the biased conversation. There is no consciousness of a “self” as a biased AI system, and no reflection on the consequences of the conversation at all. The AI is by no means capable of understanding what its owners, clients, and products are, or the social impact it might cause. However, what if the interrogator is himself concerned about this issue, say the speaker Kecheng Fang? That would give the AI some sense of what it is doing, provided Fang comments on the Chitchat Test (which he probably would if the test were real). Although the AI clearly cannot generate any values of its own, it can still perceive how others think about it, which makes the answers to the initial questions a little less clear-cut.
In a nutshell, today’s recommendation AI perceives users’ preferences and is capable of representing users’ values, which resembles understanding. However, whether it truly understands remains vague and may vary from person to person; the hypothetical Chitchat Test can help readers clarify their own thoughts and doubts.
Fang, Kecheng. “More Information, Less Diversity: The Problems with Digital Media and How Can We Fix Them.” Talk at NYU Shanghai, April 16, 2018.
Penrose, Roger. The Emperor’s New Mind: Concerning Computers, Minds and The Laws of Physics. Oxford University Press, 2016.
Turing, Alan. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, October 1950, pp. 433–460.