I didn’t plan on needing therapy.
I’m still trying to recover.
After 160+ humans weighed in on AI, energy, ethics, and hallucinations, I found myself in unexpected need of therapy, not from a person, but from clarity. This article is the result of sorting through the noise and finding a signal.
As it turned out, I found therapy not in resolution but in recognition: recognition that, across very different domains, a few of us are suddenly seeing the same thing.
We in the building and infrastructure world are trained to think in concrete terms, literally: steel, equipment, specs, and sensors. And when something doesn’t work, we fix it.
But AI isn’t just asking for a fix.
It’s pulling us into something we’re not used to in this industry:
A Therapy Session
In earlier posts, I explored how AI is becoming a digital twin of humanity and civilization, a kind of planetary-scale mirror, trained on our culture, contradictions, and systems thinking. But what happens after the mirror phase?
What if the next step isn’t debugging the reflection, but understanding it? What if the building industry isn’t just facing a tech evolution, but a psychological reckoning?
The Mirror We Didn’t Expect
We’re used to deploying tools like BIM, digital twins, control systems, and energy dashboards. AI seemed like the next tool in the kit: train it, fine-tune it, automate with it. When it spits out something strange, we laugh it off as a hallucination. Glitch in the system. Patch it and move on.
But maybe those hallucinations aren’t just bugs in the system. Maybe they’re reflections of us, and invitations to pause.

Even buildings need therapy sometimes, especially when they hallucinate. Source: Onuma, ChatGPT 2025
When Psychologists Echo Engineers
Only recently did I see psychologists and humanists begin publicly framing the same questions I had been exploring from the infrastructure side. That sudden overlap didn’t feel like consensus; it felt like a pattern emerging.
The theory felt much more grounded when I came across a series of Psychology Today articles from voices well outside the AEC and facilities world:
- Dr. Cornelia C. Walther calls for a new, prosocial Fourth Law of Robotics, grounded in hybrid intelligence.
- Faisal Hoque reminds us that “AI isn’t the Problem, We Are” and that “We are all AI’s teachers.”
What surprised me was how closely their thinking mirrored ideas I had already published from an infrastructure point of view:
- We Are the Hallucination: AI as a Digital Twin of Humanity
- Could AI Be Our Civilization’s Most Powerful Digital Twin?
- And earlier: Four Laws for Intelligent AI Digital Twins, where I adapted all of Asimov’s laws, including the often-overlooked Zeroth Law, for today’s AI.
My goal was to help facility and asset managers see that digital systems aren’t just tools, they’re extensions of human governance.
And here were psychologists, humanists, and ethics thinkers making the same point from the other direction.
From Optimization to Introspection
Maybe we’ve spent so long debugging systems that we forgot some of AI’s bugs aren’t code problems; they’re psychological ones. It’s not misfiring. It’s mirroring.
We trained it on our habits. Our documents. Our decisions. It reflects the contradictions, gaps, and biases we fed it.
So when AI writes nonsense about how a building works, maybe it’s not the model that’s flawed. Maybe it’s flagging the limits of the documentation we gave it.
If it reflects bias, maybe that’s because we never really addressed those patterns in our own workflows. If it seems confused about how systems interconnect, maybe that’s because we are too.
Beyond Nuts and Bolts: The Soul of the System
We’ve all seen the AI-generated robot slides: chrome, gears, glowing eyes, digital fingertips. We’ve used them too. They’re fun. They signal “the future.”
But here’s the irony: The nuts and bolts are the easy part. The springs and actuators. The dashboards and APIs. We know how to build those.
What’s hard is what Asimov always warned us about: Not the robot’s logic, but its ethics. Not its sensors, but its soul.
That’s why I reframed Asimov’s Four Laws. That’s why Dr. Walther and Faisal Hoque are calling for a human reset. Because what we’re seeing isn’t a glitch in the machine, it’s a glitch in the mirror.
AI is asking a question we didn’t expect to hear from a robot: “Who are you, and what do you want me to become?”
What This Means for the Smart Building Community
We need to:
- Read more beyond the tech stack: psychology, ethics, systems behavior.
- Stop dismissing AI’s hallucinations. Start asking what they’re revealing about us.
- Rethink how we model, specify, and structure information for the long game.
- Accept that AI isn’t plug-and-play. It’s watch-and-learn. And it’s learning from us.
If AI is becoming a digital twin of how we operate as a civilization, then the question isn’t just “How do we deploy it?” It’s also: What do we want it to learn from us?
Final Reflection
The more we digitize our cities, the more they become mirrors. And maybe the most important tool we need right now isn’t a new automation protocol, but a moment of reflection.
In the end, this isn’t just about reconciling AI and Digital Twins. It’s about reconciling ourselves with the tools we’ve built, the chaos we’ve unleashed, and the order we’re still chasing. If this felt like therapy, maybe that’s because it is.
AI didn’t ask for therapy. But the building industry might need it, urgently.
Because the goal isn’t just smarter buildings. It’s wiser ones.
Follow this thread on LinkedIn here.