Showing posts from February, 2026

Chapter 10 — Control or Guidance? | The Direction of AI Design

Control or Guidance? Chapter 10 — Design Culture and the Direction of AI

Note: You can enable English/Turkish subtitles from the video settings.
📺 Watch on YouTube

🔍 Technical Summary (Scope of Analysis)

This chapter examines whether artificial intelligence systems will evolve into mechanisms of control or platforms of guidance. The analysis is framed around algorithmic governance, human-centered AI, transparency, accountability, and meaningful human oversight. The focus is not on technical capability but on the design culture and governance models that shape how AI systems function in society.

Will artificial intelligence become a system that governs us, or one that helps us grow? This is no longer a theoretical question. Algorithms already accompany many of our daily decisions. However, the issue is not technical capacity...

Chapter 9 — The Role of Experts in AI Governance

Are We Afraid of Artificial Intelligence, or of Ungoverned Power? Chapter 9 — The Role of Experts

Note: You can enable English/Turkish subtitles from the video settings.
📺 Watch on YouTube

🔍 Technical Summary (Scope of Analysis)

This chapter examines artificial intelligence not merely as a technical tool, but as a multi-layered system that directly interacts with decision-making processes. The analysis focuses on the need for interdisciplinary governance, framed around algorithmic accountability, risk-based regulation, and human-centered technology design. The central argument is that AI is not an autonomous authority; it is a system embedded within society whose impact depends on how it is governed.

One of the most common mistakes in AI discussions is reducing the issue to engineering success. Yet artificial intelligence s...

Chapter 8 — Ban or Build Capacity?

Block It or Learn It? Chapter 8 — Ban or Build Capacity?

Artificial intelligence: a technological mirror of humanity

🔍 Technical Overview (Scope of Analysis)

This chapter examines the common regulatory reflex of banning emerging technologies and questions its long-term effectiveness. It argues that prohibition rarely eliminates technology; it merely pushes it into less visible and less regulated spaces. The discussion is not technology-centered but human- and system-centered. The focus is on digital literacy, critical thinking, and risk-based governance as mechanisms that transform technology from a perceived threat into a managed tool.

When a new technology emerges, the first reaction is often restriction rather than reflection. Uncertainty creates discomfort. Rapid change produces insecurity. Human instinct tends to stop what it cannot fu...

Chapter 7 — Human Responsibility and Technology

Are We Afraid of Artificial Intelligence, or of Ourselves? Chapter 7 — Human Responsibility and Technology

Artificial intelligence: humanity’s technological mirror

🔍 Technical Overview (Scope of Analysis)

This chapter examines the growing role of AI systems in decision-making processes and the emerging questions of responsibility and accountability. Artificial intelligence is not an autonomous decision-maker; it is a tool that supports human judgment. Therefore, ultimate ethical responsibility remains with human actors, regardless of technical outputs. The discussion connects decision support systems, meaningful human oversight, transparency, explainability, and accountability to a broader framework: the issue is not technology itself, but the culture of use and the governance capacity surrounding it.

Imagine a meeting room. A graph appears on the sc...

Chapter 6 — You Cannot Assign Guilt to a Tool

Are We Afraid of AI, or of Ourselves? Chapter 6 — You Cannot Assign Guilt to a Tool

AI: the technological mirror of the human mind

🔍 Technical Summary (Scope of Analysis)

This chapter examines whether AI systems can be treated as moral or legal “perpetrators,” or whether responsibility remains anchored in the human decision chain. The core distinction is simple: AI is an output-producing tool. It does not generate intent, hold values, or carry moral accountability. The analysis focuses on how “the system said so” language can blur accountability, and why risk typically emerges not from the model alone but from design choices, data selection, and deployment context. The goal is to move the debate away from technology-as-actor and toward governance, institutions, and human responsibility.

Imagine an operating room. A surgeon holds a scalpel and saves a life. The very sam...

Chapter 5 — Potential and the Risk of Over-Filtering

Are We Afraid of AI — or of Ourselves? Chapter 5 — Potential and the Risk of Over-Filtering

AI: the balance between norms and potential

🔍 Technical Summary (Scope of Analysis)

This chapter examines how algorithmic systems classify deviation from the norm as “risk,” and how over-filtering can reduce long-term social capacity. The core focus is the balance between safety logic and the protection of human potential. The analysis covers anomaly detection, algorithmic bias, risk-based approaches, and human-centered AI principles.

In a classroom, everyone gives the same answer — except one person. The atmosphere shifts slightly. Order feels stable when responses are similar. Difference introduces uncertainty. That moment can be interpreted in two ways: the person misunderstood the question, or they saw something no one else noticed yet. A society’s...