The American company OpenAI, the creator and ongoing developer of the artificial intelligence program ChatGPT, states in a post that the fourth version of the program, GPT-4, will soon be able to take on the role of moderator on websites, online forums and social networking platforms.
Moderation of sites, forums and social media is currently handled by employees who must check whether published content complies with the operating principles of the medium they work for and with applicable local legislation.
However, the huge volume of posts, combined with the fact that companies employ ever-smaller moderation teams, makes serious and thorough review of posts impossible. As a result, all kinds of content (texts, comments, photos, videos, etc.) circulate on the Internet that for various reasons should never have seen the light of day.
It is this gap that OpenAI aspires to fill, as it claims, by turning ChatGPT into a moderator that will quickly process the posts submitted to an online medium, determine which of them comply with the medium's policy as well as the law, and allow or block their publication accordingly.
OpenAI describes a tool that will make life easier for moderators and companies by resolving the issue of "problematic" posts, working to the benefit of companies but also of online and social peace in general. On the other hand, there is the obvious issue of the moderators' jobs that ChatGPT would take over.
On a second level, the question arises as to whether the program will review posted content by strictly applying the protocols and legal frameworks it has been given, or whether it will gradually begin to take its own initiative, allowing or rejecting posts based on criteria of its own.



