Society's fight to address AI's involvement in online exploitation
The emergence of artificial intelligence (AI) has had a significant impact on a variety of industries. Visionary figures such as John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon from the United States were among the first to conceive of AI as a means of constructing intelligent systems that could automate tasks and assist human beings.
The growing complexity and advanced capabilities of AI systems, as exemplified by Google Assistant, OpenAI's ChatGPT, and Google's chatbot Bard, have given rise to concerns about data privacy and the potential for AI to be misused.
According to Dr. Japie Greeff of North-West University, Microsoft's investment in OpenAI and ChatGPT's user-friendly information retrieval are intended to challenge Google's dominance of the search engine market.
Nevertheless, the swift progress of AI technology has raised ethical questions that must be addressed.
Greeff stressed the significance of developing policies that can assess ethical and appropriate methods of deploying and regulating the technology.
An open letter from the Future of Life Institute, signed by tech luminaries including Elon Musk and Steve Wozniak, has urged a temporary pause in the development of the most powerful AI systems, citing the risks posed to society and humanity as these programs become increasingly potent and difficult to comprehend or manage.
Greeff has noted that the emergence of large language models and generative AI, such as Midjourney's text-to-image generator and Synthesia's text-to-video generator, as well as other synthetic media generators like deepfakes and other sound, image, and video editing tools, presents both risks and benefits. In his view, the most significant risk associated with these tools is their capacity to generate fake news. Furthermore, the impact of automation and AI on employment must not be overlooked.
Rianette Leibowitz, a cyber-wellness and safety expert and founder of SafetyNet, has stated that cyber activists and AI ethicists have expressed worries about the potential misuse of the technology, including plagiarism and misinformation.
Leibowitz added that some AI systems had become hazardous when manipulated by humans, as evidenced by the creation of deepfake content, fake news, Not Safe For Work (NSFW) chatbots, and AI-edited photos.
Leibowitz emphasized that if people fail to recognize and scrutinize the authenticity of a message, the consequences can be devastating. It is essential to bear in mind that AI, like any other technology, is operated by humans. As she put it, "While AI creates potential risks, it can also be used to detect risks." As with most technologies, it comes down to who is using it.
Leibowitz pointed out that, given the illicit content already present on the internet, AI-driven searches or systems could increase the incidence of crimes associated with that content.
Greeff observed that the extent of mass manipulation displayed on social media, including sizeable bot-driven campaigns and massive data harvesting by numerous organizations, had demonstrated the danger posed by the targeted spread of fake news to the social fabric of communities.
Greeff stated that he does not believe that all development should be permitted to proceed unchecked. In fact, the more advanced we become in our technology, the more critical it is to invest in establishing policies that assess the ethical and appropriate methods for deploying and regulating that technology.
Greeff emphasized that cybersecurity is a crucial area that should be invested in by the government, academia, and industry since it is a persistent risk. The emergence of the current array of AI tools only amplifies the risk that has existed for a while.
Greeff and Leibowitz both stressed the importance of weighing the benefits and drawbacks of sharing private data with organizations, and of not restricting access to the internet, when addressing the risks associated with AI development.
Greeff suggested that the government should invest in and conduct research in these areas to ensure that the country remains competitive in the technology race.