
Five Ways Human Psychology Shapes AI—for Better or Worse

Published on October 31, 2024

Is AI the Modern-Day Frankenstein’s Monster?

We’re at a tipping point. For years, artificial intelligence has woven itself seamlessly into our lives — answering questions, making plans, even delivering health interventions. Yet as AI systems grow increasingly sophisticated, many of us find ourselves caught between awe and a lingering unease. Are we simply inventing helpful tools, or are we on the verge of creating something more complex, perhaps even beyond our control?

This ambivalence echoes an age-old story: the tale of a creator who, in his pursuit of innovation, fashions a being he is ill-prepared to manage. The metaphor of Frankenstein’s monster feels increasingly relevant as the capabilities of AI quickly evolve. We celebrate AI as a marvel of human ingenuity, yet we sense it carries implications we may not fully understand. This duality—both marvel and fear—mirrors a profound truth about technological progress: advancement without reflection may bring with it unforeseen consequences.

This piece explores the ways our psychological underpinnings — like the tendency to anthropomorphize, unchecked optimism, present bias, ingroup/outgroup divides, and trust through understanding — shape our relationship with AI, paralleling Victor Frankenstein’s relationship with his creation. By examining our very human relationship with artificial intelligence, we can navigate its development with a more grounded awareness of both its promise and its risks.

Seeing humanity everywhere

As artificial intelligence becomes more adept at mimicking human responses, our reactions to it grow more complicated, shaped by a deep-seated desire to relate to it as though it were human. Anthropomorphism, or the tendency to project human characteristics onto non-human entities [1], shapes how we relate to the world around us. Whether we’re interacting with a pet, a car, or an electronic device, we are inclined to imbue non-human objects with human emotions, motivations, and intentions.

In the context of AI, this instinct becomes particularly potent. When a chatbot or virtual assistant seems to “understand” us, it becomes more than just a program; it feels like a presence, someone capable of empathizing, responding, even “knowing” us. The effect is especially strong with technologies that provide personalized responses: people are more likely to engage meaningfully with a system that closely resembles human interaction, even when they logically understand that they’re interacting with a machine.

Anthropomorphizing is not confined to advanced AI. Even simple digital companions can inspire strong emotional attachments. Virtual pets [2], for example, evoke genuine care and affection from users, even when they know the pet is nothing more than pixels on a screen. This isn’t merely a novelty but a testament to our desire for connection, especially when we believe we have a role in the “well-being” of these virtual beings. As our AI interactions grow richer, these feelings deepen, establishing relationships that, if unexamined, can skew our sense of reality.

The implications of this go beyond comfort or engagement; anthropomorphizing AI has a curious way of clouding our judgment. When we view AI as “alive” or “aware,” we risk misinterpreting its capabilities and intentions. An AI can convincingly respond with scripted empathy, but this empathy is only a programmed output, a calculated response devoid of actual sentiment or understanding. Yet, once we sense human qualities in AI, it becomes easier to place trust in its recommendations and solutions, a trust that might be misplaced.

There’s also a deeper layer to why we anthropomorphize: it helps us create a sense of order in a world that often feels chaotic. This drive to understand and control our environment, known as effectance motivation [3], is one cause of our tendency to anthropomorphize. When we assign human qualities to AI, its unpredictability feels more familiar and understandable.

Our inclination to anthropomorphize may ultimately serve us, making technology more relatable, but it also risks blinding us to AI’s limitations. A virtual assistant or a chatbot can provide us with support and structure, but it cannot replace the depth of human connection. The challenge, then, is to recognize AI for what it is—a remarkable tool shaped by human ingenuity, yet lacking the sentience we instinctively project onto it.

Optimism bias: focusing on AI’s positive potential, not the risks

In Mary Shelley’s Frankenstein, Dr. Victor Frankenstein hand-picked, assembled, and stitched together what he thought to be the most elegant fragments of corpses. His excitement overtook him as he worked tirelessly to bring the dead to life. When reality set in and he realized what he’d done, it was too late.

In our vision of the future, AI often takes the starring role as an unmitigated force for good: solving problems, curing disease, and creating a brighter, more connected world. This aspirational view is rooted in optimism bias [4], our natural tendency to favor the positive potential of innovations while minimizing or even overlooking the downsides. Optimism bias can shape decision-making in ways that encourage us to focus on ideal outcomes, ignoring inconvenient truths. When it comes to AI, this bias can have profound implications, as we may unwittingly downplay the technology’s complexities, assuming that issues will resolve themselves as advancements continue.

Indeed, optimism bias can cloud critical judgment. One need only look at the early days of social media, when platforms promised a world of global connection and information sharing but instead became breeding grounds for misinformation and polarization. It is easy to see how the development of AI could mirror such grand yet unfulfilled promises.

While optimism about AI is not inherently negative, it does underscore the need for balance. The belief that “technology will save us” can obscure the reality that technological advancements come with trade-offs. By acknowledging the role of optimism bias, we might better navigate the promises of AI with greater sobriety.

Present bias: chasing immediate gains, ignoring long-term consequences

In his relentless pursuit of bringing the dead to life, Dr. Frankenstein considered only the task before him. Failing to weigh the consequences of his actions, he narrowed in on his obsession and was blind to the future.

In the pursuit of rapid progress and under the pressure of competition, AI developers (designers, engineers, and product managers alike) can likewise fall prey to present bias [5]—the tendency to prioritize immediate concerns over long-term consequences, even when they come at the expense of future stability. Present bias can lead decision-makers to focus on immediate breakthroughs in AI, like improving efficiency or enhancing user experience, without considering the potential future impact of widespread AI integration. While short-term gains may be enticing, their true cost becomes evident only over time. To mitigate present bias in AI, we need to cultivate a habit of foresight, valuing long-term goals as highly as immediate wins.
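For readers curious about the mechanics, behavioral economists often formalize present bias with the quasi-hyperbolic “beta-delta” model that O’Donoghue and Rabin study [5]; the numbers below are purely illustrative, not drawn from their paper. A present-biased decision-maker values a stream of payoffs as

U(t) = u(t) + β(δ·u(t+1) + δ²·u(t+2) + δ³·u(t+3) + …), where 0 < β < 1,

so every future payoff is discounted by an extra factor β on top of the standard exponential discount δ. With β = 0.5 and δ = 1, for instance, a safety investment that pays off 10 next year is worth only 5 today, while a feature launch that pays off 6 immediately is worth the full 6: the smaller short-term win beats the larger long-term one.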

Ingroup/outgroup bias: the AI enthusiasts vs. the skeptics

Despite his attempts to assimilate into society and gain acceptance from his human counterparts, Frankenstein’s creation was rejected as a result of his terrifying otherness. He became an outcast, the ultimate outgroup of one. Over time, he grew embittered, realizing he could never be part of the human ingroup.

In the modern world of AI, perspectives tend to split into two main camps: enthusiasts and skeptics. This divide can be exacerbated by ingroup/outgroup bias (or ingroup favoritism), our tendency to favor those who share our beliefs while discounting or distrusting the views of those who don’t. Enthusiasts may see AI as an unmitigated force for progress, while skeptics voice concerns about privacy, bias, and responsibility. And because messages from enthusiasts tend to resonate with other enthusiasts but not with skeptics (and vice versa), a significant barrier stands in the way of bridging the divide.

As a result, ingroup favoritism leads to diminished intergroup cooperation [6]. For skeptics and enthusiasts alike, this means less learning from others and a weaker collective ability to address important issues. To foster collaboration, inclusive dialogues and development teams that represent diverse perspectives are essential.

Trust through transparency in AI

The switch from excitement and wonder to disgust and repulsion was startlingly swift for Dr. Frankenstein. Although the monster was his own creation, Victor didn’t recognize what stood before him. He didn’t understand his creation, nor did he even attempt to, and thus could not place trust in it.

Trust (or lack of trust) in AI systems is a major factor in whether people accept the technology, and Carey Morewedge’s research offers a fresh perspective on why [7]. He finds that we tend to overestimate our understanding of human decision-making while accurately recognizing how much we don’t know when it comes to AI. This gap in perception can lead to skepticism around AI, not because it’s less reliable, but because it feels like a “black box”—an opaque system we know we don’t fully grasp.

Morewedge’s findings suggest that transparency is key to gaining trust in artificial intelligence. By clarifying how AI systems make decisions, especially in areas like healthcare or finance, developers, designers, engineers, and product managers can demystify AI’s processes and help users feel more in control. Transparency doesn’t just make AI seem less “other” or alien; it also taps into a basic human need to make sense of our world. The more we understand the rationale behind an AI’s decision, the more we trust it as a helpful and safe tool, rather than something veiled in mystery.

To coexist with the monster, we must understand it

As AI continues to advance and embed itself in the fabric of our lives, understanding our psychological responses to it becomes increasingly essential. The tendencies we bring to our interactions with AI—our instinct to anthropomorphize, the pull of optimism bias, the immediacy of present bias, the influence of ingroup favoritism, and the way our trust hinges on understanding—shape not only how we view this technology but also how we wield it. Each facet of our humanity invites us to reflect on what it means to create tools in our own image, tools we may come to depend on, even trust, as they grow ever more sophisticated.

Yet AI, for all its complexity, is ultimately a reflection of us — our ingenuity, our ambitions, our flaws. Like Frankenstein’s creature, it embodies both the marvels and the risks of human innovation. As we forge ahead, it’s crucial to approach AI with awareness, grounded in transparency and tempered by humility. Recognizing our biases allows us to maintain a balanced perspective, one that celebrates AI’s potential while staying vigilant about its limits and ethical implications. Only by embracing this duality can we ensure that our creations serve us meaningfully, rather than lead us into a future we’re unprepared to face.

In navigating this era of rapid technological change, perhaps our greatest task is not just to ask, “What can AI do?” but “What should it do?” This question—simple yet profound—urges us to consider the broader implications of our ambitions, ensuring that our creations serve as companions rather than become unintended monsters.

References
  1. Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: a three-factor theory of anthropomorphism. Psychological Review, 114(4), 864.

  2. Holzwarth, A. (2021). Virtual pets motivate & improve outcomes. Pattern Health Blog.

  3. Waytz, A., Morewedge, C. K., Epley, N., Monteleone, G., Gao, J. H., & Cacioppo, J. T. (2010). Making sense by making sentient: effectance motivation increases anthropomorphism. Journal of Personality and Social Psychology, 99(3), 410.

  4. Sharot, T. (2011). The optimism bias. Current Biology, 21(23), R941-R945.

  5. O'Donoghue, T., & Rabin, M. (2015). Present bias: Lessons learned and to be learned. American Economic Review, 105(5), 273-279.

  6. Balliet, D., Wu, J., & De Dreu, C. K. (2014). Ingroup favoritism in cooperation: A meta-analysis. Psychological Bulletin, 140(6), 1556.

  7. Cadario, R., Longoni, C., & Morewedge, C. K. (2021). Understanding, explaining, and utilizing medical artificial intelligence. Nature Human Behaviour, 5(12), 1636-1642.
