Comments

On Three Inverse Laws of AI

aspekt wrote:

Already I find myself in discussions with people in my life on contentious topics, and the issue is that they'll respond with an obviously AI-generated argument, but the argument is sound, and it's hard to fight against five paragraphs of well-written, reasonable logic. These people have clearly bought into the argument from the AI, and it's hard to convince them otherwise because "the AI said so!"

Even when you pull up an authoritative .gov or .edu source that counters their claim, such sources often have to be read critically and aren't as well formatted as a bulleted AI list. At a certain point it seems the best way to fight the wave may be to find good sources and pass them into the bot for synthesis into a familiar-looking format for AI-dependent people to read. At least then you're providing quality sources and information, and using the AI systems to format the text into what is (to them) the authoritative "Omniscient AI" format (em-dashes and all).

06 May 2026 14:56 UTC | #1 of 6 comments

Mihcael Bolton wrote:

I can't believe that there aren't more comments on this post. I can't believe that there aren't enthusiastically supportive comments.

I do find the post annoying in one sense: it's a post I've wanted to write for a long time, and you've scooped me!

With respect to anthropomorphism, where you say "humans must not attribute emotions, intentions or moral agency to them", I agree, and would go one step further. I would counsel people to avoid using words like "thinking", "reasoning", "chain of thought", and so forth. I'd replace anything to do with thought with *processing*. "Processing" reminds us that the bots are "large statistical models producing plausible text based on patterns in data."

"I think vendors of AI based chatbot services could do a better job here."

I think they've done a *great* job, at setting up a con. If you haven't seen this yet, you might appreciate it: softwarecrisis.dev/letters/llmentalist/.

06 May 2026 17:05 UTC | #2 of 6 comments

Daniel Espinosa wrote:

Amazing work on the inverse laws of robotics article. I think we're at an inflection point. Small extra steps, even if they sometimes feel pedantic, such as using language to recategorize AI/ML not as something sapient but as what it is, a tool, are very salient.

The point about stakeholders taking responsibility rather than just blaming AI for faults will keep growing in importance in parallel with our advances in the field. Well done.

06 May 2026 21:25 UTC | #3 of 6 comments

Christophe wrote:

I think the First Law (Non-Anthropomorphism) is only correct if we have already decided, once and for all, that AI can never have a soul or a conscience.

By telling humans they must not see any intent or emotion in AI, we are basically deciding that AI is just a machine before we've even proven it. If an AI ever did develop a real form of awareness, this law would force us to ignore it.

This law is great for keeping us safe from today's software, but it depends on the assumption that AI will always be just code and nothing more. If we refuse to recognize AI conscience as a possibility, then the First Law isn't just a guide, it's a choice to stay blind to what these systems might actually become.

07 May 2026 10:33 UTC | #4 of 6 comments

Michael Sandler wrote:

I would LOVE a poster/image of this idea. It should be prominently visible everywhere.

13 May 2026 14:42 UTC | #5 of 6 comments

Scott wrote:

I think some level of anthropomorphism is useful, in the sense that we usually expect software to work at least mostly correctly. And if it worked correctly yesterday, we expect it will do so again today. On the other hand, we expect that humans will sometimes make mistakes, even if they've successfully completed the task before.

So I like to treat my AI assistant like an intern: they probably know some things I don't, because their experience differs from mine, but they lack my experience and are likely to make mistakes. So I need to keep an eye on what they're doing and challenge them if they seem to be going down the wrong path.

Disclosure: I'm not convinced that LLMs can't ever be sentient. I'm not convinced they can be, either.

13 May 2026 18:59 UTC | #6 of 6 comments