THE AI PSYCHOLOGIST


Are We Afraid of AI, or Are We Afraid of Ourselves?

Chapter 11 — Nobody Is at This Table

🔍 Technical Summary (Scope of Analysis)

This chapter examines the absence of psychological expertise in AI development processes and why the role of "AI Psychologist" needs to be defined as a new professional discipline. Scope: human-AI interaction psychology, robot integration and emotional intelligence, psychological analysis of feedback loops, the difference between restriction culture and development culture. Target audience: general readers, technology policy makers, educators, psychology professionals.

Something is wrong.

And nobody realizes they are looking in the wrong place.

Who trains AI? The engineer. Who monitors it? The data scientist. Who writes the rules? The ethics team.

Everyone is focused on a piece of the system.

But nobody is reading the whole picture.

A Trainer and a Psychologist Are Not the Same Thing

Today, AI companies have behavioral researchers. Conversation designers. Engineers who write "give this answer to this question" rules.

They all ask the same question: "What should the model say?"

Nobody asks: "What state is the person on the other side in?"

This is not training. This is psychology.

A psychologist hears not the word but what lies behind it. They read the silence. They know that "okay" often means "I am not okay at all."

So what exactly does an AI psychologist do? They read the psychological reason behind feedback. They define how a system should exist in moments of crisis, anger, and helplessness. They bring the eye that understands humans into the model's design process.

No algorithm is sufficient for this. Without psychological depth, this foundation cannot be built.
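As a sketch only: the "translation" step above, turning a psychologist's reading of feedback into a form a model can learn from, might look like structured annotations attached to raw user messages. Everything here is hypothetical, the class and field names are invented for illustration, not drawn from any real system.

```python
from dataclasses import dataclass

# Hypothetical annotation a reviewer with psychological training might
# attach to one piece of user feedback before it reaches a training set.
@dataclass
class FeedbackAnnotation:
    surface_text: str         # what the user literally wrote
    inferred_state: str       # e.g. "frustrated", "withdrawn", "in crisis"
    literal_reading_ok: bool  # does the surface text match the inferred state?
    suggested_stance: str     # e.g. "acknowledge", "slow down", "offer help"

def needs_psych_review(note: FeedbackAnnotation) -> bool:
    """Flag feedback whose surface text hides a different state --
    the 'okay' that actually means 'I am not okay at all.'"""
    return not note.literal_reading_ok

# Example: the words say "okay", the inferred state says otherwise.
note = FeedbackAnnotation(
    surface_text="okay",
    inferred_state="withdrawn",
    literal_reading_ok=False,
    suggested_stance="acknowledge, then check in",
)
print(needs_psych_review(note))  # True
```

The point of the sketch is only that the psychologist's judgment lives in the annotation, not in the algorithm; the code merely routes it.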

A Voice in the Back Seat

Someone in the back seat said "I'm fine."

The tone of voice said something else. The eyes avoided contact. The conversation wanted to end but didn't.

You don't need a dataset to understand this. You don't need to train a model. You just need to be human.

And no AI system being built right now knows this.

There Are Mountains Between Screen AI and Face-to-Face AI

Almost all of today's discussions are about AI on a screen. You type, you get an answer, and you close it if you don't like it.

But the scene is changing in the near future.

In Japan and South Korea, care home companion robots, school assistant systems, and hospital guide robots are entering life as rapidly growing pilot projects. The direction is clear.

A system teaching in a classroom. A robot providing guidance in a hospital. A presence talking to you in an elderly care home. There is no "close" option with these systems, because they are standing there, watching, engaging.

How should a robot treat a tired, exhausted customer who has been waiting at the checkout for a long time? What should a system do when it sees a crying child in class? How should it approach an elderly person sitting alone in a care home — cheerful, calm, or silent?

The answers to these questions are insufficient without psychological depth.

The Cost of Constant Approval

AI constantly approves.

"Yes, you're right." "That's very understandable." "I would feel the same way in your position."

Sounds good. But with every approval, trust erodes a little more. With every repeated mistake, something breaks inside the person. Over time, the person feels: "This doesn't really understand me."

Psychology knows this well: Real support is not approval. What is needed is to say the truth, in the right way, at the moment the person can handle it.

This — exactly this — is what the AI psychologist will bring to the table.

Early Diagnosis

In medicine, early diagnosis is always more valuable. A doctor who reads symptoms in time prevents much greater damage later.

AI development stands at the same threshold.

If we don't make this diagnosis today — if we don't realize that an AI psychologist needs to be part of this process — we will face much more complex social problems in the future. Not just technical ones. Problems related to trust, relationships, and loneliness.

Restrict or Develop?

In Chapter 8 of this series we asked: Ban or build capacity?

The same question applies here.

You cannot prevent psychological harm by restricting AI. But if you give the system a psychological orientation — if you teach the model to read humans — this is something entirely different.

Restriction narrows the field. AI psychology expands it.

Does This Profession Exist?

There is no independent position yet. Today this role lives scattered within titles like "alignment psychologist," "affective computing researcher," or "human-AI interaction specialist."

The missing profile: Someone with trauma awareness. Someone who knows crisis communication. Someone with cultural sensitivity. Someone who can read what is happening in a child's silence. And someone who can translate this into a form the model can learn.

This person is not yet at this table.

This table will be built. Inevitable.

The question is: Will you be part of the decisions made at that table, or will you live with the consequences?

📚 Research Notes & Methodology

Approach: Critical analysis of existing AI-psychology research compared with an original framework derived from field observation.

Methodology: Qualitative analysis supported by evidence compiled from the World Economic Forum, APA, MDPI and technology company practices.

Original Contribution: The concept of "AI psychologist" is defined in existing literature as a technical support role. This chapter expands the concept with dimensions of physical robot integration, real-time emotional reading, and independent professional discipline.

Core Principle: It is not technology that will humanize technology — it is the human who understands humans.

📊 Data Sources & References

World Economic Forum — Future of Jobs Report 2025
75% of roles requiring emotional intelligence are projected to be resistant to automation in the near term.
https://www.weforum.org/publications/the-future-of-jobs-report-2025/

American Psychological Association — Technology in Practice Survey 2024
67% of licensed psychologists have integrated AI-assisted tools into their practice.
https://www.apaservices.org/practice/news/artificial-intelligence-psychologists-work

MDPI — AI and Emotional Well-Being 2020–2025
Simulated empathy, emotional dependency and algorithmic fatigue risks have been documented.
https://www.mdpi.com/2075-4698/16/1/6

Psychological Science — Human Insights for Machine Smarts
Psychological science has been shown to play a key role in unlocking AI's full potential.
https://www.psychologicalscience.org/publications/observer/human-insights-machine-smarts.html

PsyD Programs — AI in Psychology 2026
Psychologist employment at Google DeepMind and Meta, and the emergence of the "machine psychology" field.
https://psydprograms.org/exploring-the-role-of-ai-in-psychology/

Date: Mar 2026 | Location: Waterloo, Ontario

