Research on the Intelligent Agent (IA)


2021
IA vs AI
In his book The Emotion Machine, Marvin Minsky argues that emotions are modes of thinking that people use to enhance their intelligence. So long as our passions do not rise to the level of self-harm, these different ways of thinking become an important part of what people call "intelligence" or "resourcefulness", a process that applies not only to emotional states but to all of our mental activities.
In this sense, the ability to elicit emotions should also be an indicator of machine intelligence.
But exercising this kind of ability requires a proper context, and constructing that context can only be accomplished through the common sense of human society.
To rethink the importance of human sociality in underscoring interactions with the non-human, this group of research projects, framed by actor-network theory (ANT), takes the game, the chatbot, and the online social bot as example narrative frameworks to inquire into IA's agency, and into how the information it produces materializes human factors and invades reality under different contexts.


︎︎︎


Intro

This series of research projects developed through discussion about the Intelligent Agent (IA) and enclosed contexts: since the data used by AI algorithms is always generated by humans, is what they produce merely a remix of human data? Further, are creativity and aesthetics still confined within a closed system?
What does it mean when humans and nonhumans form 'assemblages' or 'networks'? Is creativity really an outcome of endless assemblages of different things?

There are conflicting points about whether AI increases aesthetic diversity in culture. According to Lev Manovich's research (AI Aesthetics), algorithms increasingly automate (semi- or fully) aesthetic decisions, with recommendation engines suggesting what we should view. In general, people tend to be exposed to content they had not thought about, so in terms of choices, recommendation systems (and other AI algorithms) help to increase diversity. However, Lucy Suchman (2006, 239) draws attention to social robots when she states that 'the fear is less that robotic visions will be realized than that the discourses and imaginaries that inspire them will retrench received conceptions both of humanness and of desirable robot potentialities, rather than challenge and hold open the space of possibilities'. Measuring imagination and diversity is not the central point here, since it would need quantitative measurement, but both points are worth researching: Manovich's focuses more on personal choices, while Suchman's statement is more general.

The argument becomes clearer when we focus on inclusive and personal interaction between human and machine. Japanese roboticist Masahiro Mori, the theorist of the uncanny valley, writes, 'human beings have self or ego, but machines have none at all. Does this lack cause machines to do crazy, irresponsible things? Not at all. It is people, with their egos, who are constantly being led by selfish desires to commit unspeakable deeds. The root of man's lack of freedom (insofar as he actually lacks it) is his egocentrism. In this sense, the ego-less machine leads a less hampered existence'. In this sense, a machine that works as an agent and holds randomness and unpredictability can directly reveal human egocentrism.

When we consider human society as a system and network, turning AI into IA can break the system, because there is the possibility of confusion and misunderstanding, and some of those misunderstandings and inspirations may spread through transformation. Under the ANT framework, the IA is an actor and mediator, acting as a non-human agent that does not need a spokesperson: it is its own spokesperson. The IA can even become the spokesperson for other non-living things (e.g. for the coronavirus in Corobot, for rocks in Null Rocks). As mediators, they change, translate, distort, and modify the meaning or element they are meant to express. One cannot predict what the output will be, because the agent will 'make a difference' and, more importantly, trigger an emotional change that is very close to the human communication context. Conversely, AI as an algorithm tends to play the role of an intermediary: it computes the data and outputs the results, and then the user (human), as mediator, interprets the results.

Here is a simple example scenario: a user posts a picture of a new purchase on social media. When the recommendation algorithm (AI) detects it, it recommends ads for similar products; how effective this is depends on the user's own preferences. A virtual partner (IA, such as Replika, a companion app) expresses approval or advice directly to the human within the communication software. In the process of acquiring information, analyzing it, forming an attitude, and outputting a message, the IA transforms the data produced by the AI into human language, and it allows people to discuss the message ("What colour do you think I should wear?"); in subsequent discussion, the topic may spread to other areas, such as pop culture. This behaviour is very close to the reaction of a human friend, while the recommendation algorithm remains at the edge of the information stream: even if the user is interested, they can only click through to the store page, which is basically outside the context in which the AI acts.

The research project grew through my practice of expanding boundaries across various semantic contexts. I call it IA art: AI is not only used for production (e.g. as a silent painter producing paintings), but acts as an actor to reflect on the agency of the machine and of human beings. I call the subjectivity of the IA the ghost agency, because to some extent it is the result of pure mathematics and data, yet it triggers human emotion without a human intermediary, and we have not been able to fully decipher how this process occurs, because both the IA and the human are mediators in the translation process that produces the transformation.

As informatics scholar Paul Dourish (2001, 163) notes, when engaging computation as a medium rather than just a tool, meaning is conveyed not simply through digital encodings, but through the way that computation enlivens those digital encodings with semantic and effective power. Examining computation as a medium thus requires an understanding and elucidation of how this "enlivening" occurs, that is, how designers exploit the capacities and limitations of computation to make certain distinctive expressions (Carl DiSalvo, Adversarial Design).

My work marks how I developed the concept of IA through diverse experiences under different contexts.


︎︎︎


1001 Nights


The game is adapted from the Middle Eastern folklore collection One Thousand and One Nights (Arabian Nights), enabled by a GPT-2 text-generation model trained on a Project Gutenberg short-story dataset. The player has to advance the story by telling new stories.
In the game, the player acts as the girl Scheherazade. The player tells tales to the Sasanian king to postpone her execution, and the king continues the story in turn through the story-generation model. The player has a special ability to turn words into reality: when the king's continuation contains keywords like "sword", "knife", or "shield", those objects materialize and drop, allowing the character to fight the king. This game aims to turn human-machine creative collaboration into narrative gameplay.
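The "words into reality" mechanic can be sketched as a simple keyword scan over the model's continuation. This is a hypothetical illustration only; the function name and keyword list are assumptions, not the game's actual code.

```python
# Sketch: scan the king's generated continuation for weapon keywords
# and return the objects to materialize, in order of appearance.
SPAWNABLE = {"sword", "knife", "shield"}

def find_spawns(continuation: str) -> list[str]:
    """Return spawnable objects mentioned in the generated text."""
    spawns = []
    # Normalize simple punctuation so "sword," and "sword." still match.
    for word in continuation.lower().replace(",", " ").replace(".", " ").split():
        if word in SPAWNABLE and word not in spawns:
            spawns.append(word)
    return spawns

king_line = "The knight raised his sword and shield against the storm."
print(find_spawns(king_line))  # → ['sword', 'shield']
```

In the actual game, each returned keyword would be mapped to a game object that drops into the scene for Scheherazade to use.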

Read more>>





Null Rocks 空石


Hallucinatory but authentic agency emerges from the human viewer's background.

I trained a dialog model with GPT-2 that includes a persona for each agent, so that I can try to talk with non-human objects.
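Persona conditioning of this kind is commonly done by prepending persona facts to the dialog prompt before generation. A minimal sketch, assuming a simple line-based prompt format; the field names and speaker labels are illustrative, not the project's actual training format:

```python
# Build a persona-conditioned prompt for a GPT-2-style dialog model.
def build_prompt(persona: list[str], history: list[str], user_turn: str) -> str:
    lines = [f"Persona: {fact}" for fact in persona]
    # Alternate speakers: the human starts, the object replies.
    for i, turn in enumerate(history + [user_turn]):
        speaker = "Human" if i % 2 == 0 else "Rock"
        lines.append(f"{speaker}: {turn}")
    lines.append("Rock:")  # the model completes the object's next turn
    return "\n".join(lines)

prompt = build_prompt(
    persona=["I am a rock.", "I have watched the river for a thousand years."],
    history=[],
    user_turn="What do you remember?",
)
print(prompt)
```

The generated continuation after `Rock:` becomes the object's reply, so the "agency" the viewer perceives is conditioned entirely by a few lines of persona text.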

Read more>>





Ghost on Web


A virtual user on social media: an AI ghost with the initiative to post about public topics on Weibo (a Chinese platform similar to Twitter).

Will people argue with it? Can people identify it as a bot?


Read more>>









Wander [001]


A bot that wanders the future Earth. It can take pictures of its surroundings and send journals to its contacts on WeChat.

The full journey works like a non-immediate public interactive artwork. Everyone is part of this science fiction.


Read more>>


︎︎︎

Summary


From this, we can sort out a general paradigm for IA art using the ghost agency: wire an algorithm into an agent (an NPC or bot), work with the environment to produce (everyday or fictional) contexts, and interact with people, perhaps without them knowing it.
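That paradigm (wire an algorithm into an agent, then let it act in a shared context) can be sketched as a minimal sense-act loop. All names here are illustrative assumptions, with a stub lambda standing in for the real generative model:

```python
from dataclasses import dataclass, field

@dataclass
class GhostAgent:
    """Wrap a generative algorithm as an agent acting in a context."""
    generate: callable                    # the wrapped algorithm (e.g. a language model)
    memory: list = field(default_factory=list)

    def step(self, observation: str) -> str:
        # Sense: record what the agent perceives from the context.
        self.memory.append(observation)
        # Act: produce an utterance to post back into the shared context
        # (a game scene, a social feed, a chat thread).
        return self.generate(observation, self.memory)

# Stub generator standing in for the real model:
agent = GhostAgent(generate=lambda obs, mem: f"echo after {len(mem)} turns: {obs}")
print(agent.step("hello"))  # → 'echo after 1 turns: hello'
```

Each project above instantiates this loop with a different context: a game scene (1001 Nights), a chat window (Null Rocks), a social feed (Ghost on Web), or a messaging journal (Wander [001]).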

Machines (computer systems and robots) have arguably also become the other through which to reflect on the human. Attitudes to the boundaries between humans and animals provide a useful barometer for exploring potentially changing attitudes towards humans, AI machines, and robots. Throughout the last two centuries, it was animals that often provided the "mirror" for what was distinctive about humans: the desire to define human uniqueness was discussed through what was traditionally thought its opposite, non-human animals, and human capabilities, traits, and values were measured in relation to the animal world.

To be clear, this article can only interpret some of the possibilities of IA art from the current state of technological development, because for a long time we will not be able to escape the AI effect on our cognition: when we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent (Promise of AI Not So Bright, 2006). That is, after the AI field solves a problem and the solution is implemented in industry, it is no longer seen as part of the field. Paradoxically, we tend to see only the challenging, not-yet-solved problems as belonging to AI.

Accordingly, in the case of IA, it is precisely because we do not know how the agency is achieved that we can produce an illusion of intelligence (imagination). So the question remains: when AGI (Artificial General Intelligence) is implemented, will there be IA art?


Thank you for reading this.