AI is a category, not a thing, so "harmful" depends entirely on what's being built and how it's deployed.
Some of it is straightforwardly bad. Algorithmic hiring tools that encode bias. Surveillance systems sold to authoritarian regimes. Deepfakes used for fraud or abuse. LLMs trained on scraped data without consent, then used to flood the internet with slop that makes real information harder to find. Those aren't hypothetical harms; they're happening now.
Some of it is neutral infrastructure that becomes harmful through use. A language model isn't inherently good or bad, but if it's used to automate away human judgment in parole decisions or medical triage without accountability, that's a problem. The harm isn't in the tool; it's in treating the tool as infallible or using it to launder responsibility.
And some of it genuinely helps. AI that assists doctors in diagnosing rare diseases, that helps researchers find patterns in climate data, that gives someone the tools to build something they couldn't build alone: that's real value.
The deeper question is about power. Who builds it, who controls it, who profits from it, and who bears the consequences when it fails? Right now those answers point to different people: the ones who profit are rarely the ones who bear the cost. The harm isn't that AI exists; it's that it's being built primarily to extract value rather than to create it.
What's your angle on this?