From synthetic media to ethical chatbots: how can humans be redefined with artificial intelligence?

Interview with Katalin Feher, Ph.D., Fulbright Research Scholar

The original Hungarian interview was published by Digital Hungary

Interview by Anna Debreczeni

When someone feels they are under control or being manipulated, it usually breeds resistance, whether in an adult or a child. When somebody meets a new technology, the first reaction is usually distrust, insecurity, or even fear, which is perfectly legitimate. But the story does not end there. We interviewed Katalin Feher, Ph.D., a Fulbright Scholar researching new media and socio-cultural AI.

Have the algorithms already taken control of us, or are we still making our own decisions?

I would approach the issue from a different angle. Algorithms are interesting because many people find them scary: they are elusive, complex, and often invisible processes inside a black-box technology. However, fear-based decisions and judgments do not help us face their reality.

If we understood algorithms a bit better, could we perhaps reconcile ourselves with them?

It is not essential to understand the operation of algorithms in detail. Many people who drive a car cannot tell you exactly how its propulsion system works. Likewise, for problem-solving it is not important to understand an algorithm’s exact sequence of instructions. What is important to understand is its usability: when, and for which problems, an algorithm is good at finding a solution. This is managed through specific inputs, such as the way we use it, after which certain operations or results are expected. We are now in an era in which these inputs need to be well defined.

What does an input mean?

An input can be, for instance, a business goal: optimise an advertising space, reach more people with it, and at the same time get feedback on the success of the task, which can be turned into even better reach. How does the algorithm solve this? That is up to the algorithm. If the right quantity and quality of data and input algorithms are available for the task, artificial intelligence can develop even more efficient measurements and feedback. Thus, the process refines and improves the optimisation with every further step.
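As an illustration (my sketch, not part of the interview), the feedback loop described above can be reduced to a toy epsilon-greedy optimiser: it allocates ad impressions, observes simulated click feedback, and gradually shifts traffic toward whichever placement performs best. The function name and the click rates are hypothetical.

```python
import random

def epsilon_greedy_ad_optimiser(click_rates, steps=20000, epsilon=0.1, seed=42):
    """Toy feedback loop: show ads, observe simulated clicks, and learn
    which placement earns the best click-through rate."""
    rng = random.Random(seed)
    n = len(click_rates)
    clicks = [0] * n
    shows = [0] * n

    def empirical(i):
        # Observed click rate so far for placement i.
        return clicks[i] / shows[i] if shows[i] else 0.0

    for _ in range(steps):
        if rng.random() < epsilon:          # explore: try a random placement
            choice = rng.randrange(n)
        else:                               # exploit: use the best so far
            choice = max(range(n), key=empirical)
        shows[choice] += 1
        if rng.random() < click_rates[choice]:  # feedback: did it get a click?
            clicks[choice] += 1

    # After learning, the loop prefers the strongest placement.
    return max(range(n), key=empirical)

# Hypothetical true click-through rates for three ad placements.
best = epsilon_greedy_ad_optimiser([0.02, 0.05, 0.11])
```

Each round both measures and acts, so the optimisation sharpens itself step by step, which is the self-refining behaviour described above.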

At this point, artificial intelligence (AI) enters the picture.

Exactly. In this case, we are no longer talking just about algorithms but about artificial intelligence, primarily as machine or deep learning. AI technology works with a set of algorithms and allows previous results to be updated in order to structure data or to search for repetitions and patterns in an unstructured data set. The whole process is very similar to teaching a child. Diverse inputs are available, from family and friends to the school and media environment, but what the child learns or recognises, and what they use to communicate more effectively, is up to them. Moreover, the learning analogy can also be read from the parents’ point of view. With the first child, parents learn how to take care of someone and what must be done so that certain fears never materialise. With the second child, they go through an already familiar learning process, so they have less fear by default.
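The search for repetitions and patterns in unstructured data can be illustrated with a deliberately tiny example (again my sketch, not from the interview): counting which n-grams recur in a raw token stream, a crude stand-in for the regularities that machine learning extracts at scale.

```python
from collections import Counter

def find_repeated_patterns(tokens, n=2, min_count=2):
    """Scan an unstructured token stream and surface the n-grams that
    repeat, i.e. the simplest kind of recoverable pattern."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(grams)
    return {gram: c for gram, c in counts.items() if c >= min_count}

# A toy "unstructured" stream of words.
stream = "the cat sat on the mat and the cat ran".split()
patterns = find_repeated_patterns(stream)
# patterns → {('the', 'cat'): 2}
```

A real learning system replaces this literal counting with statistical models, but the underlying task is the same: turn unlabelled repetition into structure.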

And where is the business in this technology?

Simply put, two approaches are in the spotlight. On the one hand, AI helps to reduce costs; for example, monotonous customer service work can be outsourced to chatbots through automation. On the other hand, specific automation services can enhance the user experience and keep customers subscribed. Translated into the media industry, technology-based monetisation results in automated services and extended or synthetic media.

We do not often hear these terms. Could you give examples of them?

The terms are relatively new. Extended media, for example, are produced around a key media product, such as a Netflix series, which requires promotional videos tailored to different platforms and consumer segments, in large quantities and with good quality, using automation. This can be supplemented with an extra option for fans or influencers to edit themselves into the promotional video. The best examples of synthetic media are “talking heads,” currently a stronger trend in East Asia but heading our way at full steam. More and more non-existent but real-looking, human-like characters are placed on screen. They have many benefits: they are never sick, never get tired, and do not want a salary. These so-called anchors serve as synthetic newsreaders or influencers through which audiences reach streams and services. In contrast to human celebrities, whose private lives and scandals are exposed, synthetic talking heads simply support brands or identify content services.

Is it therefore worth paying attention to East Asia when applying artificial intelligence in the media?

Absolutely. Alibaba, for example, recently launched a service for the short videos being mass-produced around the world. TikTok and other short-video services are quietly becoming a huge $30–40 billion market; in China alone, more than 100 million short videos are watched every day. That is why Alibaba created an artificial intelligence-based media service that automatically segments and breaks videos down into even smaller scenes and templates. In parallel, the service monitors video quality and audience engagement. Thus, hundreds or thousands of video sequences can be edited from a longer video in a short time, flooding video-sharing platforms with trailers. However, we do not have to go to China. The BBC already has a service that lets radio listeners interact with the current radio show for specific uses. These kinds of applications can also effectively increase consumer engagement and social sharing.
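Alibaba’s actual pipeline is not public, so purely as a hypothetical sketch of the scene-segmentation idea: reduce each frame to a single numeric signature (average brightness, say) and cut the video wherever consecutive frames differ sharply. The function name, threshold, and signature values below are all illustrative assumptions.

```python
def split_into_scenes(frame_signatures, threshold=0.5):
    """Split a video, reduced here to one numeric signature per frame,
    into scenes wherever consecutive frames differ sharply."""
    scenes, current = [], [0]
    for i in range(1, len(frame_signatures)):
        if abs(frame_signatures[i] - frame_signatures[i - 1]) > threshold:
            scenes.append(current)  # hard cut detected: close the scene
            current = []
        current.append(i)
    scenes.append(current)
    return scenes  # each scene is a list of frame indices

# Hypothetical brightness signatures: two stable shots with a cut between.
scenes = split_into_scenes([0.10, 0.12, 0.11, 0.90, 0.88, 0.91])
# scenes → [[0, 1, 2], [3, 4, 5]]
```

Production systems use richer per-frame features and learned cut detectors, but the output is the same kind of scene inventory from which short clips and templates can then be assembled automatically.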

It is also amazing to realise how much data is available for these platforms.

Is that not dangerous?

A very interesting question is how ethical considerations appear in AI-powered systems. The use of AI broadly affects most industries and sectors, so it is necessary to reconsider philosophical ethics and redefine the human being. The question is how technology can serve humans reliably. “Deepfake,” for instance, is a deception that questions the previously known mediation of physical reality. It creates societal insecurity and can lead to a loss of trust, which cannot be a business goal. Another example is also worth mentioning. A research project in Berlin used “fallible chatbots” in conversations where the bots shared their faults and dilemmas as if they were real humans. According to the results, human participants paid much more attention to them and their advice than to classic, robotic-sounding chatbots, even though the participants knew they were not real people. We are just a step away from being able to influence people in any direction.

That does not exactly reassure me.

However, there is also an approach according to which it almost does not matter what the machines do. In ninety percent of cases, the efficiency and success of artificial intelligence technology do not depend on machine learning. They depend on the purpose for which we use it, on how we design the structure of the systems, and on how many approaches experts and researchers analyse in the process. According to Mari-Sanna Paukkeri, an expert in artificial intelligence-based text analysis, a multidisciplinary approach supports human expectations with the logic of the moral golden rule: “don’t do unto others what you would not want done unto you.” Following this approach, AI ethics is useful for fine-tuning, and companies are also interested in it. This is indicated, for example, by Google’s decision to phase out cookies, which changes the quality of the offers. Likewise, the example of the already mentioned fallible chatbots shows why it is critical not only to formulate the general purpose of an AI service but also to develop ethical bots and ethical social media bots. The “AI for Good” movement combines top-down business and policymaking with bottom-up recommendations by NGOs for informational self-determination. The guiding principle is that if we use technology well, with ethical considerations, we will get closer to a welfare society.

Practice shows that biases occur even with the best of intentions. It happened, for example, with Amazon a few years ago, when applicants’ resumes were pre-filtered by AI and the system favoured men. The reason was that the AI had been trained on the resumes received over the previous 10 years, when the vast majority had been sent by men.

Let us have no illusions: there is no completely neutral artificial intelligence operation, nor is the human world from which the data comes neutral. I do not believe in an AI-supported world where everything is balanced, perfect, and homogeneous, because it would be boring and monotonous. The diversity and values of cultures and societies should be well preserved, which also engages users.

Yet how could we prevent a future in which algorithms discriminate, deepening divisions?

That is why it is important to involve different disciplines and professional fields in development and regulation. More and more countries and companies are announcing strategic documents on artificial intelligence. These also emphasise ethical considerations, mentioning efforts to protect privacy and avoid discrimination. The strategic documents contain scientific-technological descriptions and also include specific economic goals and expectations, such as GDP growth, over a 5–10 year horizon. The U.S. prioritises the issue so highly that the day after Joe Biden’s inauguration, a new version of the U.S. artificial intelligence strategy had already been released.

What are the latest developments in this field in Hungary?

Hungary’s AI strategy is already available. My research was conducted with representatives of various industries in the Hungarian Artificial Intelligence Coalition. According to the findings, the highest expectations about AI are in health services and biotechnology, but trade, sales support, telecommunications, and media are also in the top ten. In terms of media, however, relevant comprehensive and global research is still missing. A new initiative, the global AI Media Research, is responding to this. The hub connects research in AI-powered or AI-driven media with experts, projects, and analyses.

You are also investigating the cultural aspects of artificial intelligence. How does this approach help make ethical considerations more effective?

According to Valentine Goddard, an expert in AI ethics and also an art curator, ethical artificial intelligence requires three things: fairness, accountability, and transparency. Regarding the latter, I would emphasise that not only social discussions and expertise are important but also NGOs, although they are not yet in a position to influence AI ethics more deeply. Art can effectively support the understanding of AI technology through feelings and perspectives, bringing the technology closer to people. It can also help us live with artificial intelligence as independent, autonomous human beings, without our fears, seeing the opportunities instead.

Fulbright Research Scholar in socio-cultural AI and AI media