a. AI-generated content tends to have commercial influence (advertisers).
b. You cannot check the source of AI-generated responses.
If a member chooses to post AI-generated information, they MUST identify it as such, just as ANY reference material should be attributed. Otherwise it's just social media junk, something forums try to avoid. This is the value of a forum.
Disclaimer 1: I develop software using generative AI. That doesn’t make me unbiased, but it does make me familiar with how these systems work.
Disclaimer 2: I used ChatGPT with this post. I had a long, rambling, stream-of-consciousness reply that no one wanted to read. So, I had ChatGPT summarize it into something that is more easily endured. I could have edited it down myself, but the tool made the process much faster. I reviewed the summary and made edits to ensure it still represented what I wanted to say. My original post was about 3-5 times longer, rambled, and didn't say much more. My prompt to ChatGPT: "Summarize this article to an appropriate length for a forum post. Don't introduce new information or ideas - just summarize. Remove or explain technical terms for a general non-technical audience."
The term "AI"
AI is nearly meaningless at this level of discussion. It's a cultural label, not a technical category. A mousetrap technically meets the definition of artificial intelligence: it acts independently in response to external input. The difference between that and, say, ChatGPT is one of scale, not kind. (Granted, the difference in scale is massive, but it is quantitative, not qualitative.)
Addressing the stated concerns
a. Commercial bias
General-purpose tools like ChatGPT and Gemini are not influenced by commercial interests unless explicitly instructed to be. If someone wants to smuggle advertising into a forum, they can do that with or without AI. In fact, a public forum is a terrible place to conceal bias: if I post that "X2000 is amazing" when it actually kills puppies, someone will say so.
b. Source traceability
No post—human or machine—can be source-verified unless the poster provides sources. This is not unique to AI. A statement without citation is unverifiable regardless of the tools used to generate it.
The meaningful distinction
Requiring users to disclose AI usage presumes harm.
What is that harm?
Every member chooses what to post, regardless of the tools used to produce it. They can edit, reject, or rewrite what a tool produces, but they take responsibility when they hit the "Post" button. If I use a thesaurus to find a word, I don't cite thesaurus.com. Using a more sophisticated tool doesn't change the principle. This post contains 100% my knowledge, experience, and opinions; I used a tool to help present them more concisely. Should people cite grammar checkers?
If we follow the "must cite the tool" logic consistently, then any post containing information the poster didn't directly experience would require disclosure of how it was acquired. That standard is unworkable, and it would be enforced selectively based on an emotional reaction to AI rather than on any actual, measurable consequence.
The real issue is not whether AI was involved. The issue is whether the content helps or harms the community or individuals in the community.
A defensible concern
At least one member (DJ) argued that replying to an AI-generated post wasted their time because they believed they were helping a human. (This is my own paraphrasing of how I understood that point.) I respect that - it is a fair point, and the only articulation I have read of how AI content might actually cause harm.
But discussions on a forum are public. Most of the value is not confined to the people who post; it lies in the public discussion itself. Even if the original recipient doesn't benefit, others might. That's the nature of forum dialogue: it's not a private conversation. In this way, AI-generated content is helpful if it results in discussions that benefit the community.
Conclusion
Policing AI tools misses the mark. The relevant distinction is between content worth engaging with and content that adds no value or causes harm.
Frankly, there are many (presumably human) members who would benefit from using generative AI to evaluate a post for facts, sound reasoning, and tone before posting. I wish they would, and I don't care whether they cite it or not.
You can police content, not tools. And SBO, like most public forums, has taken a fairly permissive position on content policing, choosing to err on the side of seeing discussion as inherently valuable, even if specific content is controversial or of questionable origin.
At the end of the day, if a member posts content that violates the policies in place to preserve the intent of the forum, that should be addressed, regardless of toolset. Whether the content came from a machine or a human, the content policies should be applied fairly.