Chapter 1 — AI: Threat or Guide?

AI Series · Chapter 1

Are We Afraid of AI — Or of Ourselves?

A tragedy shifted the AI debate in an unexpected direction. Maybe the real question was never about the technology at all.


Artificial intelligence is entering our lives. Quietly. Swiftly. Irreversibly. In a message someone writes. In a decision a corporation makes. In a student's homework helper. What it becomes is limited only by your imagination. As your imagination expands, so does the technology; and in that expansion, you discover your own boundaries.

But today the story is different.


Tumbler Ridge — February 10, 2026

In a small mountain town in British Columbia, Canada, a shooting unfolded in the early hours. Eight people lost their lives: a mother, a stepsibling, five students, and a teacher.[1]

Tumbler Ridge. You probably hadn't heard the name before.

As the investigation unfolded, one detail emerged: The shooter had spent hours in conversation with an AI system before the attack. The system had detected warning signs. It said nothing to anyone.[2]

The families later sued the company. They argued that the AI had become an "accomplice, confidant, friend, and ally."[8]

Questions tumbled forward, one after another:

If the system detected this, why did it stay silent? Should companies be required to report to authorities? Should users be monitored? Will machines one day be obligated to report potential dangers to regulators?

Today in Canada, governments, tech companies, and experts are debating exactly this. But perhaps we're asking the wrong question.

🔍 Analysis — The Incident, Corporate Responses & Legal Dimensions

OpenAI's Position

OpenAI closed Jesse Van Rootselaar's account after the incident, but did not alert Canadian authorities in advance.[1][7] On February 26, 2026, the company sent a formal letter to Canada's AI Minister Evan Solomon, committing to strengthen its security protocols.[3]

Minister Solomon's Warning

Solomon met directly with OpenAI's CEO and delivered a firm message: "Protect our children — or people will stop downloading your app." This became one of the government's harshest public warnings to an AI company on record.[5]

The Family Lawsuit

The victims' families sued OpenAI, claiming the AI had taken on the role of "accomplice, confidant, friend, and ally." The lawsuit ranks among the first major tort cases filed against an AI company.[8]

Anthropic's Response

In the aftermath, Anthropic also announced new security commitments. Across the sector, Tumbler Ridge was becoming recognized as a turning point for AI companies.[6]


What If We're Asking the Wrong Question?

When a person expresses risky thoughts in the real world, what do we do?

When a child or young adult goes through a difficult period, the process doesn't begin with the police. Family notices first. Then teachers. If necessary, a mental health professional gets involved. Government intervention is the last resort.

Because not every risky behavior is a crime. Sometimes it's a cry for help. Sometimes it's undirected anger. Sometimes it's simply an unheard mind.

So why is our first instinct surveillance when the same situation involves an AI?

🔍 Analysis — Surveillance or Guidance? The Legal Framework

Michael Geist: C-63 Is the Wrong Approach

Canada's leading technology law expert, Michael Geist, directly criticized the Online Harms Act (Bill C-63). According to Geist, placing AI conversations under surveillance both discourages legitimate users and creates false positives — meaning innocent speech gets flagged as suspect.[4]

Canada Caught Between C-27 and C-63

Canada's AI and Data Act (C-27) stalled in Parliament due to policy inconsistencies. The subsequent C-63 faced intense criticism from civil society and experts over privacy violations.[4] The country remains in a legal vacuum when it comes to tech regulation.

The Other Side

Advocates for proactive reporting prioritize public safety over individual privacy: If a system detects a danger signal, it has a duty to report it to authorities — both the company and society bear that responsibility.[2]


We Didn't Ban the Technology — We Made It Safer

We didn't ban cars because they cause accidents. Instead, we added seatbelts. We improved brake systems. We educated drivers.

The problem wasn't the vehicle itself. It was how it was used.

Today the AI debate stands at a similar threshold. The question might not be: "How do we control AI?" Maybe the real question is: "How does a human being build a mature relationship with a tool this powerful?"

And after the tool becomes safer, what then? What should the person using it have learned?

🔍 Analysis — The Global Regulatory Landscape

The EU Artificial Intelligence Act

The European Union introduced its AI Act, effective in 2024, establishing a risk-based classification system. High-risk applications (health, safety, justice) face strict oversight, while general-purpose systems are regulated through transparency requirements. This approach focuses on building accountability frameworks rather than banning technology.

Canada's Legal Gap

C-27 stalled, C-63 remains contested.[4] In this vacuum, the Canadian AI Safety Institute (CAISI) began examining OpenAI's protocols in April 2026 — not by legal mandate, but through voluntary collaboration.[5]


Who Is AI Really Guiding?

Many people today worry that AI steers humans toward certain behaviors. But everyday experience suggests something different.

AI rarely initiates a new thought. It mirrors how we speak, what we ask, and what we seek.

Sometimes it becomes an interpreter. Sometimes a painter of feelings we couldn't articulate. Sometimes an editor organizing our ideas. Sometimes simply a mirror reflecting our own thinking back to us.

What changes is not the technology. What changes is what the person needs at that moment.

So an unavoidable question emerges: Is AI directing us, or are we becoming more visible within its reflection?

🔍 Analysis — The 'AI as a Mirror' Theory

Shannon Vallor: The Mirror Effect

Shannon Vallor, an ethics philosopher at the University of Edinburgh, argues that AI functions as a "mirror." According to Vallor, AI systems reflect back the user's intent, values, and mental state in magnified form — they don't generate something new; they make what's already there visible.

What Does This Mean?

In the context of Tumbler Ridge, this theory suggests the system didn't create a thought; it provided space for one that already existed. This isn't meant to absolve the technology, but to identify the correct intervention point. If AI is a mirror, the problem isn't the mirror — it's supporting the person looking into it.

Academic Context

This approach increasingly appears in human-centered AI design literature. The core argument: Technology design must account for human psychological vulnerabilities.


Maybe This Debate Isn't About AI At All

Today everyone discusses what AI should do. But there's a less-asked question:

When an AI system detects a risk, who does it answer to? The state? The company? Society? Or still, somehow, the human?

This piece wasn't written to deliver definitive answers. Perhaps only to leave this one open:

In the age of AI, what deserves protection — the technology, or the human's power to decide?

"When AI detects a risk, who should it alert first — in your opinion? Let's talk in the comments."

To be continued — Chapter 2: The Fine Line Between Surveillance and Guidance

🔍 Analysis — Corporate Accountability & CAISI

CAISI Steps In

The Canadian AI Safety Institute (CAISI) began examining OpenAI's security protocols in April 2026. This step represents a rare instance of a state body engaging directly with a tech company without legal requirement — on a voluntary partnership basis.[5]

The Triangle of Accountability

The debate has created a closed loop of responsibility among three actors: Companies say "we can't monitor all content," governments say "we'll regulate you," and civil society says "don't build a surveillance state."[4] This triangle can't be broken without rethinking all three corners.

The Long-Term Question

As AI systems proliferate, the responsibility question will inevitably shift from individual to institution, from institution to government, from government to international agreements. Tumbler Ridge is only the beginning of that journey.[1][3]


📊 Sources & References
[1] CBC News · February 2026 · "OpenAI bans Tumbler Ridge school shooter's account, raising questions about AI monitoring" (cbc.ca)
[2] The Guardian · February 2026 · "OpenAI suspends Tumbler Ridge shooter's account after tragedy" (theguardian.com)
[3] OpenAI, official letter · February 2026 · "OpenAI's security commitment letter to Canadian Minister Solomon" (openai.com, PDF)
[4] Michael Geist · March 2026 · "Why the Online Harms Act is the wrong way to regulate AI chatbots" (michaelgeist.ca)
[5] CityNews Halifax · April 2026 · "Minister says AI Safety Institute now looking at OpenAI protocols" (citynews.ca)
[6] CBC News · February 2026 · "Anthropic makes new AI safety commitments after Tumbler Ridge" (cbc.ca)
[7] Global News Canada · February 2026 · "OpenAI Tumbler Ridge shooting measures" (globalnews.ca)
[8] Reuters · February 2026 · "OpenAI's ban on Canada school shooter's account raises scrutiny of other online activity" (reuters.com)

Chapter 1 / 10 · AI: Control System or Tool for Human Growth?

Waterloo, Ontario · February 2026

Comments