I've been going on the Internet forever, and on dial-up bulletin boards before the Internet went public. In my experience, people almost never come to agree with each other when they argue on message boards. They just dig in, growing ever deafer to the other person and more emotionally invested in being proven right at all costs.
Yet, for all that seasoned experience, I've never built up an immunity to Cunningham's Law. It makes me sad. It's such a futile endeavor, so much wasted time I could have spent on something better. Did I at least refine my critical thinking skills? Did anything good come out of it?
But one thing that has changed is technology: most recently, the availability of competent chatbots built on generative AI. If you talk to these chatbots long enough, you soon learn that they have no real awareness of their own (they're usually trained to tell you as much if you ask). But they have remarkably good reading comprehension most of the time, albeit prone to bouts of "hallucination."
So when I succumbed to the lure of Cunningham's Law and had nothing but regret and sadness to show for my effort, I used to have no recourse but to go and cry it off. Now, I like to feed the contents of the messages into chatbots and ask probing questions to determine what went wrong and how I could do better.
When I asked ChatGPT 3.5 what it thought of a recent conversation, it was objective to a fault. It thought the other guy was a bastion of player choice and agency. It observed:
Both participants seem to be trying to address each other's points, but their responses appear to talk past each other to some extent. [participant 1] seems to be highlighting what they see as a flaw in [participant 2]'s argument, while [participant 2] seems to be expressing confusion about [participant 1]'s interpretation of their statements.
In terms of who is talking past each other more, it's difficult to say definitively without further context. However, both participants could potentially benefit from clearer communication and a more focused discussion on the specific points being raised.
I like this because it forces me to undergo self-evaluation with a moderately unbiased (if not self-aware) third party.
According to ChatGPT, neither participant is at fault; they're both noble in their own way, just ultimately failing to communicate. Touché. I could have done better.
That said, I wondered whether this conversation was simply too muddled for GPT 3.5 to untangle, and whether it might have been making excuses.
When I entered that same conversation into Claude-Sonnet, it suggested:
Overall, [participant 1] appears to be arguing in a way that ignores the substantive points made, fixates on semantic nitpicking, and fails to self-reflect when called out on misattributions or non-sequiturs. This reflects a lack of academic honesty and rigorous engagement with the actual ideas/evidence being presented.
[participant 2], while not perfect, still seems to be operating in relatively good faith - providing thoughtful context, clarity on original sourcing, and knowing when to walk away from an unproductive exchange.
Perhaps I'm vain, but I much prefer Claude-Sonnet's lesser commitment to objectivity. Then again, I was correcting misconceptions in its interpretation as I went along, and I wonder if I biased it toward that response.
To stay on topic, I'll say that generative AI might be useful for digesting player feedback for game developers, and may even inform some exciting new game features down the road. To some extent, it's far better at maintaining objectivity than the average message board poster, myself included.