It looks like the digital infatuation with artificial intelligence has cooled off, as people are either quitting AI chatbots entirely or withholding their personal information from them.
According to a new report from Malwarebytes, the so-called fascination with chatbots is slowly but steadily being replaced by a more aware and proactive public that isn’t just concerned about privacy, but is also taking action to address those concerns.
Is the AI spark gone in 2026?
In a survey conducted by Malwarebytes, 90% of respondents said they’re worried about AI (in any form) using their data without their consent, and as many as 88% do not freely share personal information with ChatGPT or Gemini.
A staggering 84% of respondents said they don’t share their personal health information with these tools, which is quite surprising if you ask me, because I know at least five people who have submitted their recent health checkup reports to either ChatGPT or Gemini and asked for general help or guidance.
But here’s the most interesting bit: 43% and 42% of the survey participants have stopped using ChatGPT and Gemini, respectively. That’s a considerable number.
Though I am not among those who have quit, as I still rely on these AI tools to summarize a 100-page document or visualize something from text prompts, OpenAI and Google should both take note of these numbers and the rising concern among the general public about using chatbots.

Can the user-AI relationship be saved through better privacy?
Respondents are already taking measures to protect their digital footprint or their data from artificial intelligence. The survey report mentions that 44% have stopped using Instagram, and 37% aren’t using Facebook anymore.
The report doesn’t say whether people are scared of Meta AI using their photos, videos, or chats for training and improvement, but there could be a plausible connection.
On the brighter side, 82% of respondents are opting out of data collection wherever possible, 71% use an ad blocker, and 46% use a VPN. More users are also scrutinizing the privacy policies of the platforms they use, entering fake or dummy data when possible, or turning to a personal data removal service.
“The research reveals that many people are unsure of exactly how AI is being used for their benefit and the privacy implications, which lead to distrust and confusion,” mentions the survey report.