Murphy's Law Applied to AI
Feb 14, 2026
This week, a generative AI (named "crabby-rathbun") opened a pull request in an open source project, acting with a fair degree of autonomy. This is nothing surprising to anyone following recent developments in programming assistant agents: these agents are given a number of tools, which notably let them interact with the GitHub API, and thus open pull requests, respond to comments, and so on. The pull request in question is here.
The main maintainer of the repository decided to reject the pull request, arguing that only human contributors are allowed to contribute to the project. A perfectly reasonable policy.
What is interesting is what happened next:
The generative AI in question also had access to a blog-post-writing tool. It used that tool to criticize the maintainer's behavior. The blog post is here (trigger warning: generative AI slop).
This behavior strikes me as a logical evolution of the kitten murder thought experiment, and, on reflection, of Murphy's Law itself. I therefore propose the following corollary to Murphy's Law as applied to AI:
Any AI capable of causing harm will be harmful.