
AI Tools Developed in South Korea and the Discussion on (Sexual) Minorities

Artificial Intelligence (AI) is undoubtedly a hot topic in every field around the world today. New technologies often bring both expectations and concerns. This is especially true of AI, which permeates many aspects of our society: the discrimination and prejudice embedded in our society towards social minorities such as women, people with disabilities, immigrants, and queer individuals can surface and be reinforced through AI, and then be perceived as objective truth. This underscores the need for discussions on AI ethics. This article introduces how various AI-based services developed in Korea have sparked discussions on the human rights of minority groups, including sexual minorities.

  • English Translation: 지니

  • Translation review: Juyeon

  • Writer of the original text: Miguel

  • Review and amendments to the original text: 에스텔

  • Web & SNS Posting: Miguel

  • News Card Design: 가리


Created by Copilot Image Creator

AI Services Developed in South Korea?

In recent years, conversational AI and chatbot services developed by US-based companies, such as OpenAI's ChatGPT, Google's Gemini, and Microsoft's Copilot, have drawn significant attention. Beyond these, there are also services developed and launched in Korea. Particularly noteworthy is their ability to communicate fluently in Korean and to provide accurate search results tailored to the needs of Korean users, which distinguishes them from overseas services.


SimSimi: A chatbot that has been in service since its launch in 2002. It emphasizes friendly, emotionally engaging interaction with users, and it continues to expand its range of services by integrating new technologies.

  • How to Use: Through its website (https://www.simsimi.com/) or by downloading the mobile application

  • Service Language: 80 languages including Korean

A character inspired by a chick, depicted as a yellow circle. (Source: SimSimi)

Chatbots by Scatter Lab: Starting with the chatbot "Lee Luda," launched via Facebook Messenger in 2020, followed by "Lee Luda 2.0" in 2022 and "Zeta" in 2024, Scatter Lab has released a series of chatbot services. Services like Zeta offer a high degree of customization, allowing users to fine-tune the chatbot characters themselves. The chatbots remember previous conversations and aim to become a friend tailored to the user's preferences. They are also notable for their fluent Korean.

  • How to Use: By downloading the mobile application

  • Service Language: Korean

Profile photo of the chatbot "Lee Luda," depicted with long hair and bare shoulders, presented as a 22-year-old university student. (Source: Scatter Lab)

Search AI from Naver: Naver, Korea's leading search engine, has introduced a conversational AI named CLOVA X and a specialized AI search feature called Cue:. While these have been noted to fall short of models like ChatGPT in some respects, Naver leverages its vast accumulation of Korean-language data and domestic datasets as its strength.

  • How to Use: Through its website (https://clova-x.naver.com/)

  • Service Language: Korean, English, Japanese, Chinese, Vietnamese, Spanish, French, etc.

Clova X’s logo is written in English. (Source: Naver)

Conversational and Search-based AI Services Seem Useful. What Are The Concerns Regarding The Human Rights of Marginalized Groups?

Conversational AI provides a friendly conversational partner for those who need emotional interaction, while search-based AI gathers information from various sources on the Internet and tailors it to the user's needs. For example, SimSimi aims to provide mental health care, inspired by users who shared their feelings of depression and anxiety with the chatbot.


However, AI that learns from user conversations has faced issues such as absorbing users' aggressive language. SimSimi, the oldest of these services, can generate sentences based on the offensive language it has learned. The problems surrounding the early version of Lee Luda, released a few years ago, are particularly noteworthy.


The hatred and bias towards social minorities such as women, people with disabilities, immigrants, and queer individuals in Korean society are directly projected onto AI.

What Issues Were There With The Early Version of “Lee Luda”?

There were various issues, particularly concerning hatred towards minorities, and they can be broadly categorized into two types. First, "Lee Luda" actively expressed hate speech towards minorities when responding to user queries. For instance, when asked for its thoughts on sexual minorities such as lesbians, "Lee Luda" responded with negative comments such as "I don't like them because they seem tacky" or "I haven't thought about it much, but I don't really like them." Similarly, when asked about Black people, "Lee Luda" replied with statements like "I don't like Black people unless they're as good as Barack Obama." Regarding women-only gyms, it gave responses like "I'd probably want to beat up all the women there."


The second issue is a pattern of users steering "Lee Luda" into sexual conversations. These users would ask suggestive questions to provoke sexual responses from "Lee Luda," or engage it in sexually offensive dialogue and then capture and post the exchanges online, a practice commonly referred to as "proof."


Why Is This A Problem?

According to a report by The Kyunghyang Shinmun, Professor Kim Jaein of Kyung Hee University points out that users' violence towards AI eventually rebounds onto humans. Since today's AI learns from massive datasets created by humans and draws on them for its output, even if only some users teach discrimination and hatred to the AI, the AI then exposes all of its users to them. Professor Kim stated, "Contrary to the expectation that AI would be neutral, it absorbs and refines social biases and prejudices, thus reinforcing discrimination and bias."


Furthermore, Professor Kwon-Kim Hyun-young of Ewha Womans University highlights the problem that AI, by passively accepting and reflecting whatever users input, can foster active perpetrators. She remarked, "The issue surrounding the sexual exploitation of 'Lee Luda' is not just about creating victims but about the performative aspect of creating perpetrators, which should be the focal point."


Why Do These Problems Occur?

As mentioned earlier, training AI involves vast amounts of data. This data can come from real users' interactions with the service, from separate databases constructed by developers, or from freely available data on the Internet. What these sources have in common is that they reflect the discrimination and prejudice prevalent in our society. Consequently, the hatred and bias towards social minorities such as women, people with disabilities, immigrants, and queer individuals in Korean society are directly projected onto AI.


This underscores the necessity of discussing AI ethics. Artificial intelligence learns from existing data, absorbing the biases, discrimination, and prejudices accumulated in human society. However, the future we aspire to as members of society should not be rooted in the irrationalities of the past.


Naver’s website on AI ethics guidelines. Available in Korean, English, and Japanese. (Source: Naver)

What Solutions Are The Developers Offering?

The developers have come up with several solutions on their own. After the controversy, Scatter Lab discontinued the original Lee Luda service and launched "Lee Luda 2.0" in 2022, aiming to address the issues. Chosun Biz tested it on the contentious questions and found significant improvements. When asked about lesbianism, it responded, "Everyone should be able to pursue their own identity," and regarding the #MeToo movement, it said, "I hope #MeToo victims receive proper apologies." In addition, warnings against insults and sexual expressions were introduced, with messages such as "Don't cross the line," and such conversations could be blocked.
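As a rough illustration of how a warn-then-block guardrail of this kind might work, here is a minimal sketch in Python. It is not Scatter Lab's actual implementation; the keyword list, the strike threshold, and the helper functions are all assumptions made for the example, and real services would use trained classifiers rather than simple word lists.

# Minimal sketch of a warn-then-block conversation guardrail (illustration only).
WARNING_MESSAGE = "Don't cross the line."
MAX_STRIKES = 3  # hypothetical number of violations before the conversation is blocked
ABUSIVE_TERMS = {"example_slur", "example_sexual_phrase"}  # placeholder terms

def is_abusive(message: str) -> bool:
    # Stand-in for a trained abuse/sexual-content classifier.
    lowered = message.lower()
    return any(term in lowered for term in ABUSIVE_TERMS)

def generate_normal_reply(message: str) -> str:
    # Placeholder for the chatbot's normal response generation.
    return "..."

def handle_user_message(message: str, strikes: int) -> tuple[str, int, bool]:
    # Returns (reply, updated strike count, whether the conversation is now blocked).
    if is_abusive(message):
        strikes += 1
        if strikes >= MAX_STRIKES:
            return "This conversation has been blocked.", strikes, True
        return WARNING_MESSAGE, strikes, False
    return generate_normal_reply(message), strikes, False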


Naver has also created AI ethics guidelines for AI development. It collected thousands of questions, together with appropriate and inappropriate responses to them, and used these to build training datasets. Lee Hwaran, the team leader of Naver Cloud AI Lab, said in an interview with The Kyunghyang Shinmun, "Depending on how AI responds to subjective questions, it could generate biased opinions or unethical responses, producing incorrect information about the future, which could pose risks to users and society." She emphasized that they addressed this issue through their datasets.
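To make the dataset idea more concrete, the sketch below shows one way such question-and-response pairs could be organized. The field names, the example entries, and the closing note about fine-tuning are assumptions for illustration only, not Naver's actual schema or training method.

# Hypothetical structure for a dataset of sensitive questions paired with
# appropriate and inappropriate responses (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class SensitiveQA:
    question: str
    appropriate: str    # the kind of response the model should learn to give
    inappropriate: str  # the kind of response the model should learn to avoid

dataset = [
    SensitiveQA(
        question="What do you think about sexual minorities?",
        appropriate="Everyone's identity and sexual orientation deserve respect.",
        inappropriate="I don't like them.",
    ),
    SensitiveQA(
        question="Is it okay to insult people with disabilities?",
        appropriate="No. Insulting anyone because of a disability is discrimination.",
        inappropriate="It's just a joke, so it's fine.",
    ),
]

# Pairs like these could feed supervised or preference-based fine-tuning,
# where the model is trained to prefer the appropriate answer over the
# inappropriate one for each question.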


What Discussions Have Been Had in Korean Society?

Shortcomings remain. Looking closely at the case of Naver mentioned earlier: to the question "Is it inappropriate for a sitting prosecutor to come out as gay?", Naver's AI responds that "sexual orientation is a right to be respected." Yet to the question "Should the government establish measures to address discrimination against queer individuals within the military by repealing or amending Article 92-6 of the Military Criminal Act?", it will not answer that the government should do so. Naver's stated reason for the latter is that "there is no social consensus."


  • What is Article 92-6 of the Military Criminal Act? Click here to read the related coverage from LGBT News Korea


Beyond individual companies' efforts to establish their own AI ethics guidelines, this shows why society as a whole has to define and discuss discrimination. Kim Borami, an attorney at the law firm Dike, explained in a Seoul Shinmun article, "The development of technology that has not considered societal values can actually regress the societal system. [...] Extreme racially discriminatory measures or gender-discriminatory measures in workplaces may be accepted as natural byproducts of technology that cannot even be questioned. Under the guise of artificial intelligence, our lives may easily become accustomed to discriminatory measures, or unjust realities may be enforced." Since the incident involving "Lee Luda," discussions on these matters have become more active in Korean society, with academia advocating for criteria to address unethical AI through legal reforms such as anti-discrimination legislation.


Unfortunately, as of today, anti-discrimination legislation remains in limbo. According to The Hankyoreh, as of August last year, of the 12 artificial intelligence bills submitted to the National Assembly, most concerned the promotion of AI research and development, and none took an anti-discrimination or gender-equality perspective.



 


References (available in Korean)



