An overwhelming number of respondents familiar with ChatGPT were concerned about the risks it poses to security and safety, according to Malwarebytes.
They also don’t trust the information it produces, and would like to see a pause in development so that regulation can catch up. What remains to be seen is whether this is simply a singular moment of anxiety or a trend that will persist.
The uncertainty around machine learning models
The uncertainty around how ChatGPT will change our lives, and whether it will take our jobs, is compounded by the mysterious way in which it works. It is an unknown quantity to everyone, even its creators. Machine learning models like ChatGPT are “black boxes” with emergent properties that appear suddenly and unexpectedly as the amount of computing power used to create them increases.
Real-world emergent properties have included the ability to perform arithmetic, take college-level exams, and identify the intended meaning of words. The ability to perform these tasks could not be predicted from smaller models, and today’s models cannot be used to predict what the next generation of larger models will be capable of.
That leaves us facing a very uncertain future, both individually and collectively. The continuum of viewpoints held by serious commentators ranges, quite literally, from those who think AI is an existential risk to those who think it will save the world.
ChatGPT trust and accuracy concerns
“An AI revolution has been gathering pace for a very long time, and many specific, narrow applications have been enormously successful without stirring this kind of mistrust,” said Mark Stockley, Cybersecurity Evangelist at Malwarebytes.
“At Malwarebytes, Machine Learning and AI have been used for years to help improve efficiency, to identify malware and improve the overall performance of many technologies. However, public sentiment on ChatGPT is a different beast and the uncertainty around how ChatGPT will change our lives is compounded by the mysterious ways in which it works,” added Stockley.
The survey findings indicate that ChatGPT has a trust issue. Only 10% of those surveyed agreed with the statement, “I trust the information produced by ChatGPT,” while 63% disagreed.
Respondents held similar sentiments about accuracy, with only 12% agreeing with the statement, “the information produced by ChatGPT is accurate,” while 55% disagreed.
Beyond concerns around trust and accuracy, a resounding 81% of respondents believed ChatGPT could be a possible safety or security risk, and 52% called for a pause on ChatGPT development so that regulation can catch up, echoing similar concerns voiced by tech luminaries earlier this year.
Despite the avalanche of ChatGPT media coverage and online chatter, only 35% of respondents agreed with the statement “I am familiar with ChatGPT,” significantly fewer than the 50% who disagreed.