AI Perception Isn’t Just Technical, It’s Personal
Published on April 29, 2025
TL;DR: Some people see AI as a helpful tool. Others see it as a threat to jobs, creativity, or even humanity. But new research suggests our feelings about AI aren’t just about what it does, but about who we are. Personality, familiarity, and affinity with tech all shape whether we trust or fear it. And while AI may offer real promise in areas like mental health and care, building trust will be key. If we want AI to help people, we need to treat it not just as a technical challenge – but as a human one.
Why some trust AI… while others think it’s the beginning of the end
I’ve never really been a science fiction fan myself. The whole narrative of ‘robots taking over the world’ and some sort of AI apocalypse just never clicked for me. (Same with aliens. Why are they always evil? Maybe they’re just out there, minding their own business – maybe even friendly. But I guess that wouldn’t make for a very good movie, would it?) There is even evidence that science fiction – with its emotionally charged autonomous AI takeovers and whatnot – predicts fearful attitudes toward AI [1].
But as AI visionary Andrew Ng put it: worrying about evil AI taking over the world is kind of like worrying about overpopulation on Mars. Nothing to lose sleep over at this point.
AI taking over our jobs sounds more plausible, though. Not all jobs, but maybe some. Can you imagine that switchboard operators used to connect phone calls manually? Now that’s a great example of a repetitive task made far more efficient by technological innovation. And while some roles disappear over time, new ones always pop up.
AI tools have come a long way. They’re not just autofill helping you finish your sentences anymore – they can now create designs, illustrations, and stylized profile pictures. (Yes, I am talking about that recent hype of everyone turning themselves into their favorite cartoon character.) And AI is starting to show up in much more personal places too. Therapy apps. Companionship bots. Even AI friends. While some people are over the moon about it, others are genuinely worried. Maybe our reactions to AI aren’t just about tech and specs – but about who we are, how we feel, and what we trust.
Sometimes I feel like I’m in an AI bubble. I hear people say things like “What a crazy and exciting time we’re living in” about five times a day (ahem, Samuel Salzer), while others seem to believe the AI apocalypse is closer than we think. But outside of work, many people I know aren’t talking about AI at all.
Research to the rescue: Attitudes aren’t that extreme
Recent research suggests that public opinion about AI isn’t nearly as extreme as the loudest voices on the internet might lead us to believe [2]. In a study by Guingrich & Graziano (2025)*, participants interacted with chatbots and then shared their thoughts about AI’s role in their lives and society. Surprisingly (to some), most people weren’t freaking out about AI taking over the world. But they also weren’t quite ready to welcome AI into the moral or emotional side of life either.
People tended to feel that AI could have a positive impact on their personal lives – and to some extent, on society more broadly. They were interested in talking to an AI, though they still preferred chatting with a human. And while strong doomsday fears – a high ‘p(doom)’, shorthand for the perceived probability of an AI-driven catastrophe – weren’t common, optimism wasn’t universal either.
One key takeaway from this research is that our views on AI vary, but are generally not so extreme. More interestingly, how people felt about AI seemed to be related to personality. For example, people who scored high on agreeableness, reported healthy social lives, or scored low on neuroticism tended to view AI more positively. Familiarity helped too: the more familiar someone was with technology, or the higher their affinity for it, the more optimistic they tended to be about the role of AI in their life. (A toy sketch of what such a correlation analysis looks like follows the footnote below.)
*Even though the Guingrich & Graziano paper was published in 2025, the study itself was run in 2023. Since AI tools – especially generative ones – have evolved rapidly in that time, the findings might feel like a bit of a time capsule. Still, they offer valuable insights into early public sentiment.
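To make “related to personality” concrete: findings like these typically boil down to correlations between survey scores. Here’s a minimal, purely illustrative sketch in Python – synthetic data and made-up effect sizes, not the authors’ actual analysis:

```python
# Illustrative only: synthetic data, NOT from Guingrich & Graziano (2025).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=42)
n = 300  # hypothetical number of survey respondents

# Hypothetical 1-7 Likert-style trait scores.
agreeableness = rng.uniform(1, 7, n)
tech_affinity = rng.uniform(1, 7, n)

# Simulate AI optimism as weakly driven by both traits, plus noise.
ai_optimism = 0.4 * agreeableness + 0.5 * tech_affinity + rng.normal(0, 1, n)

for name, trait in [("agreeableness", agreeableness),
                    ("tech affinity", tech_affinity)]:
    r, p = pearsonr(trait, ai_optimism)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```

A positive r with a small p-value is all a claim like “agreeableness is associated with AI optimism” amounts to – an association, not causation.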
Curious but not convinced
Another interesting finding from the Guingrich & Graziano study was that talking to a chatbot didn’t actually change people’s opinions about AI.
Participants in the study had a short interaction with a chatbot before answering questions about their personality traits, tech familiarity, and attitudes toward AI (among other things). You might expect that chatting with an AI – even briefly – would make people feel more comfortable with the technology, or maybe spark more trust. But across the 12 variables they tested, the only noticeable effect was that people who’d just used a chatbot were less interested in talking to one again any time soon. Maybe the novelty wore off. Maybe the curiosity box was ticked. Maybe they didn’t know what else to ask the bot. Either way, the interaction didn’t shift their core beliefs, and they still preferred to talk to a real person next.
Where do we draw the line?
Perception of AI also depends on the use case. According to De Freitas and colleagues (2023), we’re more comfortable with AI in objective, measurable domains (like managing finances) than in subjective or emotional domains (like dating) [3].
Think about it: Would you trust an AI to help you decide which stocks to invest in? Probably. But would you let it pick your next date? If you’re very into personality tests, maybe – but most people still seem uncomfortable with the idea. Emotional decisions require trust and empathy – qualities we don’t easily attribute to machines.
An emotional paradox
Here’s where things get a little tangled. On one hand, a growing body of research suggests AI can help improve mental health, reduce loneliness, and offer social support. Think of AI chatbots like Replika, which are marketed as supportive, emotionally intelligent companions – an always-available, artificial friend. For some, such tools are helpful and comforting.
But according to the Guingrich & Graziano paper, loneliness was significantly correlated with ‘p(doom)’: the lonelier someone felt, the more negative their attitude toward AI. How can we explain this contradiction?
Maybe AI companionship is still niche – useful for some, but not yet something the general population relates to [2]. Another possibility is that people make a distinction between “AI I use” and “AI out in the world”. Some people may appreciate a daily check-in with their artificial friend, but still feel uneasy about AI being used for high-stakes decisions in areas like government or public health, especially when those decisions carry moral weight or require human judgment.
What this means for digital health
People are generally open to AI tools that support diagnostics, predict outcomes, or process large amounts of data. These roles feel rational and safe – it’s machines doing what they do best. But when it comes to emotional tasks – therapy bots, AI coaches – hesitation may creep in.
And yet, it is precisely in those areas that AI tools may provide real value. With long waitlists, limited access to psychologists, and increasing demand for mental health support, AI could offer a kind of ‘in-between’ option – something to bridge the gap between sessions, or to support people who might otherwise not get help at all. It’s not a replacement for human care, but it might be a meaningful supplement when the alternative is nothing.
It’s not just tech, it’s psychology
At the end of the day, our feelings about AI (be it excitement or fear) aren’t just about what it does, but about how it fits into our lives – socially, emotionally, and psychologically.
Some people are excited. Others are uneasy. Most of us are still figuring it out. And that’s normal. In fact, it’s quite in line with how we’ve responded to transformative technologies in the past. At first, there’s curiosity, hesitation, and debate. Then, gradually, trust and norms begin to form – shaped not just by what the technology can do, but by how we feel about using it.
If we want AI to have a meaningful impact on society, or on domains like mental health and social support, we can’t treat it only as a technical challenge. It’s also a human one.
We need to pay attention not just to what the tools can do, but how people feel about using them. It’s about building something people want to use, feel comfortable with, and see as genuinely helpful. The goal isn’t to make AI feel human – it’s to make it work for humans.
References
1. Liang, Y., & Lee, S. A. (2017). Fear of autonomous robots and artificial intelligence: Evidence from national representative data with probability sampling. International Journal of Social Robotics, 9, 379–384.
2. Guingrich, R. E., & Graziano, M. S. (2025). P(doom) versus AI optimism: Attitudes toward artificial intelligence and the factors that shape them. Journal of Technology in Behavioral Science, 1–19.
3. De Freitas, J., Agarwal, S., Schmitt, B., & Haslam, N. (2023). Psychological factors underlying attitudes toward AI tools. Nature Human Behaviour, 7(11), 1845–1854.