A recent survey by South African (SA) computer security company KnowBe4 AFRICA has shown that many people share personal information with generative AI tools, sparking concern among tech experts over potential risks and the need for better user education.
Spanning 1,300 respondents across ten African and Middle Eastern countries, including SA, Botswana, Nigeria, Ghana, Kenya, Egypt, Mauritius, the UAE and Saudi Arabia, the survey found that 63% of users are open to sharing their personal data with generative AI tools like ChatGPT. Eighty-three percent of respondents expressed confidence in the accuracy and reliability of these AI technologies.
Anna Collard, SVP Content Strategy and Evangelist at KnowBe4 AFRICA, who contributed to the report, stressed the importance of increased user training and awareness of the potential risks associated with generative AI. Despite the evident utility of such tools, Collard noted that caution and critical thinking are needed to limit those risks effectively.
The survey highlighted widespread adoption of generative AI tools in both personal and professional spheres, with respondents turning to these technologies for research, information gathering, composing emails, generating creative content, and drafting documents. The benefits cited included time savings, help with complex tasks, increased productivity, and enhanced creativity.
However, concerns were raised about the potential impact on job security and human creativity: while 80% of respondents did not feel their jobs were threatened by generative AI, 57% believed it could replace human creativity.
Of particular concern was users' comfort with sharing sensitive data with generative AI tools, which varied across countries. Given that 83% of users expressed confidence in the accuracy of generative AI, Collard cautioned against blind trust, urging individuals to cultivate critical thinking skills.
Furthermore, the survey revealed that many organisations lack comprehensive policies for addressing the challenges of generative AI: nearly half of respondents reported having no generative AI policy at work, highlighting the need for responsible and safe usage guidelines.
Deepfakes emerged as one of the most concerning applications of AI, since malicious actors can exploit the technology to scam unsuspecting people and fuel disinformation campaigns. Collard advised that AI users should not place full trust in the technology, and urged companies to provide training and implement robust policies to navigate the complexities of generative AI responsibly.
Source: APO Group
By Derrick Kafui Deti – Digital Economy Magazine