Our Marketing Strategies Must Meet the Challenges of Algorithmophobia (Fear of AI)

In the smart building industry, we typically blame tight budgets, changing labor markets, low occupancy rates, and lack of education for consumer resistance to automation. These are important factors to be sure. However, as a seasoned marketer, I know most folks don’t make their purchasing decisions solely on affordability or even their ability to pay. They make them based on feelings, and right now the collective feeling toward automation, and AI specifically, seems to be a mixed bag of extremes. 

Those consumers excited about AI are really, really jacked (ChatGPT is our road to the Singularity), while skeptics are really, really anxious (Hey, The Terminator is at the door asking for Sarah?). Meanwhile, the bulk of us are waiting for something to happen…

From a marketing perspective, this is far from ideal. A large group of ambivalent consumers with a “wait and see” attitude is hard to move one way or the other. Of course, it doesn’t help allay anxieties that some tech companies claim to use AI when they don’t, or that some owners have been burned by bloated promises and anemic results.

False promises and over-speculation are most pervasive at the dawn of every tech revolution—when the least is known by the most—and let’s be honest, most of us don’t really understand AI very well. Hell, even AI creators admit they don’t really understand how it works. So, it’s no surprise the general sentiment out there is one of distrust and confusion.

To be clear, I’m not claiming that all resistant building owners suffer from algorithmophobia (fear of AI), or that financial issues and education gaps don’t contribute to foot-dragging. What I am suggesting is that a collective feeling of technophobia and ambivalence is one factor that, if managed, could move the needle a bit on adoption rates. While I’m not outlining foolproof solutions in this article, I would like to provide some meditations on consumer psychology and AI itself. Hopefully you stumble across something valuable to inform your messaging, campaigns, and sales pitches.

Most Consumers Feel Ambivalent Towards AI

When it comes to AI, most consumers just don’t know how to feel. Popular culture reflects this ambivalence. You see it in films, books, and TV series that consistently portray AI as either subservient companions with prime directives to “protect and serve”, or as soulless overlords hell-bent on eradicating humans. Often, AI characters move from one extreme to the other within the same storyline (HAL 9000 from 2001: A Space Odyssey epitomizes this malevolent character arc). These characters and stories reflect our vacillation between ecstatic jubilation and cowering fear.

So, what’s at the heart of this emotional waffling? One answer may be that we’re taught to fear technology by sources like these. Another potential source is more primal. For example, many of us have some level of fear of human-like figures (automatonophobia). It’s why we feel uneasy when dolls, mannequins, or paintings seem to “stare” at us. Horror movies consistently leverage this phobia to get us squirming in our seats (Chucky, Annabelle, M3GAN). Even Boston Dynamics videos of robots performing human feats leave us both enthralled and a bit horrified. Most people I introduce to ChatGPT say the same thing: “That’s scary shit!” and in the next breath, “I can’t wait to use it!”

Still another likely cause of our hesitant attitudes may be more philosophical in nature. Artificial intelligence presents us with a unique cognitive dissonance. AI is simultaneously human and machine. Animate and inanimate. Alive and not alive. These two facts wrestle in our minds, and the conflict is hard to resolve. We hold many cognitive models for each of these categories, but few that contain both—and the ones we do hold are usually disturbing (e.g., a dead body).

This anxiety around machines and AI dates back to the Industrial Revolution, and you can see it in the human-impersonating Maschinenmensch of Thea von Harbou’s novel Metropolis (1925) or the alienating factory machines of Chaplin’s Modern Times (1936). Even fears around alien invasions seem connected to this ambivalence towards AI. Aren’t E.T. and Predator just different versions of WALL-E and the Terminator?

AI is a Tool for Automating Thinking

Maybe it seems obvious to some that AI is a tool, but for many others it’s not. In fact, I think most people tend to view AI as human-first, tool second, or at least they focus on the “intelligence” part much more than the machine (“machine in the ghost”?). After all, it is the intelligence part that makes AI both attractive and scary, and it’s because AI is “human” in a sense that we distrust it. We know ourselves well.

Contrast this attitude with the arrival of the automobile, which people saw first as a transformative tool for travel. To be sure, there were legitimate worries about car crashes early on, but no one feared the Model T was going to become sentient and take over!

What the automobile did was automate mobility. Why? Because walking sucks. In that same spirit, AI simply automates thinking, something else we hate doing. Remembering things takes effort. Reasoning is hard. Paying attention? Forget about it. But even for lovers of contemplation, consistency in thinking is hard to come by. There are only so many things we can attend to, only so many hours we can sit and observe. Boredom and fatigue force mistakes, inaccuracies, and omissions. Even at optimal performance, most of us are adequate thinkers at best.

Yet, despite our imperfections, humans are capable of a complex array of cognitive tasks like perception, spatial cognition, memory, symbol use (language), reasoning, learning, and innovating. It is these facets of our thinking that we wish AI to automate. Eventually, we will collect these faculties and others into a general intelligence that imitates full human cognition.

How Should We “Dehumanize” AI?

But for the moment, AI is indeed a tool for humans, and any marketing message designed to quell fears should minimize the human side and promote it as a tool first and foremost. But how should we go about this “dehumanizing” of AI? And can we go too far? Too much personification may prod anxieties; too little may erase the mystique. I propose we find a balance. By all means, highlight the features of AI, but ensure your descriptions put the customer and their teams as the ultimate beneficiaries of the tool. To illustrate, consider these three examples of real-world marketing copy from three security camera companies. Each takes a different approach.

Copy 1

Unlike a human security guard, electronic surveillance doesn’t need a tea break. It won’t fall asleep, is never distracted, and never forgets what it saw. It’s this 24/7 on the job monitoring, and recording, that makes it the best deterrent against vandals, thieves, or shoplifters.

Copy 1’s attempt to dehumanize AI by opening with “Unlike a human security guard” backfires. In fact, the list of things security cameras won’t do (e.g., “fall asleep”) ironically serves to humanize the product even more. Ultimately, the camera is simply a better security guard.

Copy 2

AI security cameras are specialized network IP cameras that perform advanced analytical functions such as vehicle detection, person detection, face detection, traffic counting, people counting, and license plate recognition (LPR). These artificial intelligence functions are achieved using highly advanced video analytics software that is built-in to the camera and recorder they are connected to.

Copy 2 completely removes any notion of “intelligence” by burying it under a sea of technical jargon that’s often off-putting to consumers new to the market.

Copy 3

Adding more staff to monitor more surveillance cameras has been the only feasible way to improve real-time outcomes. With advances in artificial intelligence (AI) technology, cameras can be trained to detect unusual events and behavior, to signal a potential security issue. This puts your team in front of a problem, instead of reacting to an event that’s already occurred. The best part is AI continuously learns about each environment, so it gets smarter over time without the need for continuously defining or modifying rules. Even a small team sees their prevention rates improve and security breaches decrease.

Copy 3 strikes a better balance. The product does human things (e.g., “be trained”, “learns”, “gets smarter”), but the list of features never overshadows the fact that it’s ultimately a tool for improving the work of “staff” or a “small team”. Note how the passive-voice structure of the sentences describing the camera even puts AI in the object position: it is the thing being acted upon rather than the thing performing the action.

Incidentally, I asked ChatGPT the question: “How can I dehumanize artificial intelligence?” Its answer was interesting:

“As an AI language model, I am programmed to provide helpful responses and not to engage in activities that dehumanize AI or any other entity. It is important to remember that AI is a tool created by humans to assist in various tasks and should be treated with respect and responsibility. While AI may not have emotions or consciousness like humans do, it is important to recognize the potential impact that AI can have on society and to ensure that it is developed and used in a responsible and ethical manner.”

Note how it equates AI to humanity in its refusal to dehumanize itself, even as it concedes it’s not human. Instead, it seems to argue that the value it brings to humans makes dehumanizing it inappropriate.

Humans are (for the moment) Still Unique

While AI has made significant advancements in recent years and can perform many tasks at a level that rivals or exceeds human performance, there are still several things that humans can do that AI can’t. Of course, AI can’t experience emotions or empathy in the way we do, and even the most sophisticated robots still lack the fine motor skills and complex movements required for challenging environments.

However, another important cognitive advantage humans have over AI is the ability to deal with and understand our own ignorance. Computer science professor and AI expert Stuart Russell explains our uniqueness:

“The problem with the way we build AI systems now is we give them a fixed objective. The algorithms require us to specify everything in the objective…And the reason that we don’t have to do that with humans is that humans often know that they don’t know all the things that we care about. If you ask a human to get you a cup of coffee, and you happen to be in the Hotel George Sand in Paris, where the coffee is 13 euros a cup, it’s entirely reasonable to come back and say, well, it’s 13 euros, are you sure you want it, or I could go next door and get one? We don’t think of this as a terribly sophisticated capability, but AI systems don’t have it because the way we build them now, they have to know the full objective.”

Even when presented with incomplete, ambiguous, or even contradictory information, humans can draw on our experience, intuition, and creativity to come up with innovative solutions to problems. In contrast, when presented with incomplete rules, AI finds it difficult or impossible to continue. Our “algorithm” doesn’t return an error when something doesn’t make sense. Instead, we seek a different perspective on the problem.

Much of Our Daily Lives Already Runs on AI

Even though it feels like AI has “just arrived” with tools like ChatGPT, it already exists in many parts of our daily lives. A short list of current applications illustrates this.

Virtual assistants like Siri, Google Assistant, and Alexa are AI-powered programs that help millions of people around the world set reminders, make calls, and answer questions. Algorithms used by streaming services like Netflix and Amazon create the recommendation systems that tailor content to a viewer’s preferences and watch history. Chatbots answer our questions when offices are closed.

AI technology is used by our banks to detect fraudulent transactions and suspicious activity. It’s helping our doctors and medical facilities develop new treatments and improve patient outcomes. And now that AI has made it into our cars, it’s part of our daily commute, road trips, and grocery shopping.

Yet, none of these familiar and ubiquitous applications seems to ease anxieties or dispel the notion that we’re just starting down the AI road. Much of this is an education issue, but, again, there also seems to be a psychological component. As one futurist rightly pointed out: “It’s ok if computers land our planes safely, but we get all emotional when they beat us at chess.”

What does this apparent inversion of priorities suggest? We’re okay with AI when the threat is to our physical survival, but less so when it threatens our intellectual superiority? Chess has long represented the pinnacle of intellectual activity, so our emotional response to being vanquished may betray our true weakness: pride.

Putting tech innovation in a fuller context for consumers can generate excitement about the future. One popular social media post dramatizes how little time passed between the first flight at Kitty Hawk and the first moon landing.

Concluding Thoughts

My reflections focus more on consumer anxiety than on excitement, mainly because motivating excited consumers isn’t a problem. Ambivalent buyers, however, are persuadable if you can remove the source of their reticence or generate enough excitement. Ideally, you do both.

Another strategy I touched on is re-contextualizing AI. Whether it’s pointing out its current ubiquity or shifting to a tool-first perspective, changing the way customers view innovation is an effective way to win them over. For example, I’m awe-struck every time someone posts the image showing the first flight at Kitty Hawk next to the first moon landing. The caption reads, “These events were only 66 years apart.” How could so much innovation happen in a single lifetime! At that pace, what will our lives look like in 30 years? It’s a powerful message.

Finally, education is the antidote to fear. However, machine learning and AI are extremely complex topics. The industry needs to do a better job creating “AI for Dummies” content. This will work itself out, but only if we put effort and resources into it. In the meantime, we can overcome gaps in consumer understanding by building trust in ourselves and the product. Promoting features is fine, but people must “buy” you and your vision first. Consumers are real people with real hopes and fears. Remember, we’re selling AI, not selling to AI.

Brian Collins

Brian is a digital content creator and strategist. He currently works as Marketing Manager for OpSys Solutions and lives in Auckland, New Zealand.
