AI and the SBO Forums--Part 1

Dave

Forum Admin, Gen II
Staff member
Feb 1, 2023
103
So we can assume AI is not autonomously jumping in and posting answers and the problem then is that actual human users are just copying and pasting or paraphrasing AI answers without vetting them or saying that is what it is. Is that correct?
That is correct. To the best of my knowledge, all of our members are genuine, living, breathing humans.

So I assume that means you have no way of really controlling the posting of AI answers except by asking users to identify them as AI generated and then admonishing them if they don't and they are found out. Is that right?
SBO like most forums and social media does not attempt to monitor posts for accuracy. We have not decided upon any course of action at this time. This thread was started to get a sense of what the SBO community thinks should happen and how big of an issue it is. Once we have determined what if anything we will do it will be posted and made clear to members.
 
Jan 7, 2011
5,835
Oday 322 East Chicago, IN
A bit off topic, but for fun I posted The NYT “Connections” game words to ChatGPT and Google Gemini.

If you are not familiar with the game, you get 16 words, and have to determine 4 sets of 4 words that are somehow “connected”.
The categories run from easy to very tricky.

I was impressed that both AIs took the screenshot of the 16 words and correctly converted them into text - they could read the image.

Unfortunately, neither ChatGPT nor Gemini correctly connected the various word choices. They did offer suggested matchings, but they were not what the creators at the NYT were looking for. Interestingly, both AI responses “explained” or justified their answers, and even admitted that some of them were a “stretch”.

This game takes some thought, some gut instinct, and a wide knowledge of useless facts. And it requires making some “leaps” in logic.

AI isn’t up to the task...at least not yet.

Greg
Uh oh, I tried this again today (just with ChatGPT).

I did the puzzle first, and it was pretty easy…finished in a minute or so.

but then so did ChatGPT….

it is learning :yikes:
 
May 2, 2020
41
Westerly Conway MKII 36 Indian Rocks Beach
Personally I'm pretty sick and tired of dealing with AI generated content. I find it especially irritating when a question is posed that comes from an AI generated source.

I have no issue with folks searching using AI and then saying something to the effect of "this is what I found through AI" and copying the info there. Essentially, all of us humans identify ourselves - it's the impersonation of humans by AI that I find disturbing and deceitful.

My preference would be that AI should not be allowed to post. If a human is using AI to help find the answer to a question, that's fine - but a strictly AI-generated question should be banned, or at a minimum identified as such. If AI is answering a posted question, I question whether it should be allowed at all, but at a minimum it should be identified as such.

In summary: I would prefer AI not be allowed to post, but at a minimum it should be identified as being AI. I have no issue with people using AI to help find an answer, but it should be identified as such.

dj
100% agree. No AI, or if unavoidable, identified as AI.
 
Sep 30, 2013
3,641
1988 Catalina 22 North Florida
... if someone came into the thread with a generic AI pasted explanation of the difference between epoxy and polyester, with no personal value added ...
Great example. I would be very annoyed.

It would imply that I was so dimwitted that I was incapable of using a search engine.
 
Nov 20, 2025
21
Alden 60' Schooner Killybegs
I’ll say this much: I’ve met plenty of humans who speak in boilerplate nonsense and offer bad advice with great confidence, and I’ve met a few machines that manage the opposite. The tool isn’t the danger. The false certainty is - and that's the fault of the recipient, not the sender.

If you can spot nonsense from a stranger on the dock, you can spot it in a forum post. If you miss the signs and get led astray, it’s the getting-astray part that stings - not whether the culprit had a pulse or a power cord.

But there is a fair point buried in the unease. Sailing is one of those crafts where experience leaves fingerprints on everything you say - the way you talk about rigging failures, weather hums, the particular dread of a pot warp wrapping the prop at sunset. That’s hard to fake. Maybe impossible. And people value that because it feels ... lived-in.

Still, I’m not convinced the presence of a synthetic voice makes the place any less real. Every pub I’ve ever been in had at least one lad giving advice he learned from a book, a cousin, or a very optimistic YouTube video. Forums are no different.

I'm just about as old-school as they still come. But, if a post gives you something useful, well - use it. Sailing’s always been a blend of old tricks and new tools. Charts went from charcoal on vellum to glowing screens, and people grumbled every step of the way. Yet here we are, still managing to find our way into trouble and back out again.

In the end, the sea doesn’t care where you got the advice. It only cares whether it was right.
 

dLj

Mar 23, 2017
4,743
Belliure 41 Back in the Chesapeake
I’ll say this much: I’ve met plenty of humans who speak in boilerplate nonsense and offer bad advice with great confidence, and I’ve met a few machines that manage the opposite. The tool isn’t the danger. The false certainty is - and that's the fault of the recipient, not the sender.

If you can spot nonsense from a stranger on the dock, you can spot it in a forum post. If you miss the signs and get led astray, it’s the getting-astray part that stings - not whether the culprit had a pulse or a power cord.

But there is a fair point buried in the unease. Sailing is one of those crafts where experience leaves fingerprints on everything you say - the way you talk about rigging failures, weather hums, the particular dread of a pot warp wrapping the prop at sunset. That’s hard to fake. Maybe impossible. And people value that because it feels ... lived-in.

Still, I’m not convinced the presence of a synthetic voice makes the place any less real. Every pub I’ve ever been in had at least one lad giving advice he learned from a book, a cousin, or a very optimistic YouTube video. Forums are no different.

I'm just about as old-school as they still come. But, if a post gives you something useful, well - use it. Sailing’s always been a blend of old tricks and new tools. Charts went from charcoal on vellum to glowing screens, and people grumbled every step of the way. Yet here we are, still managing to find our way into trouble and back out again.

In the end, the sea doesn’t care where you got the advice. It only cares whether it was right.
While I agree with your fundamental premise - that it's up to the user of the information to discern what's value added or not - my main concern is not that. If I'm talking to a stranger on the dock, or a lad giving advice in a pub - in all those cases, I know I'm talking to a human. In the outside world if I'm interfacing with a human or a machine, I know I'm interfacing with either a human or a machine.

However, in a forum interface, the machine impersonates the human. Since there is no contact other than through a login name, there is no identifying information - you have to sit back and guess whether a post is of human or machine origin. For me, that adds a layer of filtering that I would really prefer not to have to do. I would much rather have at least one place in the universe of online conversations where I don't have to concern myself with determining whether I'm talking to a machine or a human. I strongly feel the impersonation of humans by machines is deceitful. It would be like me going to a job interview and claiming to be an expert in something I know nothing about, on the grounds that I can think really well - or some other attribute I consider myself to have.

In my opinion, AI has no place here posting questions - full stop.

So far here on SBO, I haven't seen threads started by AI with some question, but in other forums I have. In those cases, not only do I have to think about if the question is worth my time answering, I also have to determine if it's written by a human who actually wants to know, or if it's written by AI for reasons I can't even imagine.

Now, when humans are looking to answer questions that other humans have asked, using AI to help formulate the answer can often be quite useful. So I don't object to that per se; however, I do feel it is good decorum to identify what information comes from an AI search vs. personal experience.

dj
 
Oct 31, 2024
14
Precision 23………. St. Petersburg, FL
Hmm... How do we know "HAL 9000" isn't moderating the forum now and using the name "Dave" to throw us off?
lol. That’s funny, but not funny.
I vote that we ban all AI content on here, if possible. I will go to the Google machine if I want to read AI content.
 
Mar 26, 2011
3,833
Corsair F-24 MK I Deale, MD
a. AI-generated content tends to have commercial influence. Advertisers.
b. You cannot check the sources of AI-generated responses.

If a member chooses to post AI generated information, they MUST identify it as such, just as ANY reference material should be attributed. Otherwise it's just social media junk, something forums try to avoid. This is the value of a forum.
 
May 17, 2004
5,882
Beneteau Oceanis 37 Havre de Grace
a. AI-generated content tends to have commercial influence. Advertisers.
b. You cannot check the sources of AI-generated responses.

If a member chooses to post AI generated information, they MUST identify it as such, just as ANY reference material should be attributed. Otherwise it's just social media junk, something forums try to avoid. This is the value of a forum.
I agree with you in general that any reference material should be attributed. I’m not so sure on your first two points. My understanding of LLM output is that it’s a “prediction engine”, guessing each subsequent word based on the words that came before, relative to its training data. I haven’t seen anything about that being commercially biased, at least not any more than the internet as a whole that the LLMs were trained on. There’s early talk about mixing sponsored ads into LLM output, but I didn’t think anything like that was done yet. Am I missing anything there?
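For anyone curious what “guessing each subsequent word based on the words that came before” means in practice, here is a deliberately tiny toy sketch. Real LLMs use neural networks trained on enormous corpora; this bigram counter only illustrates the basic next-word-prediction idea, and the sample "corpus" is made up:

```python
from collections import Counter, defaultdict

# Made-up training text, just for illustration.
corpus = "the boat sails the bay and the boat docks at the pier".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "boat" follows "the" most often in this corpus
```

The point of the toy: the model has no notion of truth or advertising, only of which continuations were frequent in its training text.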

As for checking sources, there are some AI applications that make that easier than others. Perplexity.ai, for example, does work better for this. Rather than answering a question based just on its training data, Perplexity basically does an internet search for the question and then uses an LLM to summarize the pages that come up in the search. Each of the links to the search results are presented so you can go back and check their reliability. This is just one example, and it certainly isn’t foolproof (for example it might just summarize a bunch of unreviewed forum posts), but at least it does avoid the problem of un-cited sources.
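The search-then-summarize pattern described above (often called retrieval-augmented generation) can be sketched roughly like this. The `web_search` stub and URLs here are stand-ins I made up, not Perplexity's actual implementation; the point is only that source links survive into the final answer:

```python
def web_search(query):
    # Stand-in for a real search API: returns (url, snippet) pairs.
    # These example URLs are placeholders, not real pages.
    return [
        ("https://example.com/epoxy-basics", "Epoxy cures via a two-part reaction."),
        ("https://example.com/polyester-faq", "Polyester resin uses a catalyst."),
    ]

def summarize_with_sources(query):
    results = web_search(query)
    # A real system would feed the snippets to an LLM for summarization;
    # here we just join them, keeping each source URL so a reader can
    # go back and judge the pages' reliability for themselves.
    answer = " ".join(snippet for _, snippet in results)
    sources = [url for url, _ in results]
    return {"answer": answer, "sources": sources}

out = summarize_with_sources("epoxy vs polyester resin")
print(out["sources"])  # the links are preserved for fact-checking
```

The design choice worth noticing: because the answer is built from fetched pages rather than training data alone, every claim can in principle be traced to a link - which is exactly the traceability concern raised above.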
 

jssailem

SBO Weather and Forecasting Forum Jim & John
Oct 22, 2014
23,769
CAL 35 Cruiser #21 moored EVERETT WA
Perplexity basically does an internet search for the question and then uses an LLM to summarize the pages that come up in the search.
How is this AI different than any sailor asking a question and getting a response on the forum to search the forum/Defender/Catalina direct for the answer, or just “Google that”? It seems that the AI is functioning like a powerful search engine.
 
Nov 22, 2011
1,272
Ericson 26-2 San Pedro, CA
As for checking sources, there are some AI applications that make that easier than others. Perplexity.ai, for example, does work better for this. Rather than answering a question based just on its training data, Perplexity basically does an internet search for the question and then uses an LLM to summarize the pages that come up in the search. Each of the links to the search results are presented so you can go back and check their reliability.
I use Perplexity a fair amount, with mixed results. Yes, it does present the links so you can check on their reliability, but please do check! I have had it come up with answers that were wildly wrong and linked to pages that bore absolutely zero relationship to the conclusions drawn.
 

RussC

Sep 11, 2015
1,680
Merit 22- Oregon lakes
How is this AI different than any sailor asking a question and getting a response on the forum to search the forum/Defender/Catalina direct for the answer, or just “Google that”? It seems that the AI is functioning like a powerful search engine.
That's always been my observation also. I fail to see any "intelligence" in AI at all - rather, just a bot going out, gathering whatever the prevalent belief online happens to be, and presenting it in a somewhat cohesive reply as fact.
 
Jan 11, 2014
13,405
Sabre 362 113 Fair Haven, NY
How is this AI different than any sailor asking a question and getting a response on the forum to search the forum/Defender/Catalina direct for the answer, or just “Google that”? It seems that the AI is functioning like a powerful search engine.
The issue is not using AI; the issue is presenting information generated by AI without citing that it is not your own work but someone else's. That is plagiarism, which is generally considered unacceptable, as it is tantamount to lying and/or stealing intellectual property.

A Google search only returns sources; it does not create a new document with any kind of synthesis of information.
 
Apr 25, 2024
705
Fuji 32 Bellingham
a. AI-generated content tends to have commercial influence. Advertisers.
b. You cannot check the sources of AI-generated responses.

If a member chooses to post AI generated information, they MUST identify it as such, just as ANY reference material should be attributed. Otherwise it's just social media junk, something forums try to avoid. This is the value of a forum.
Disclaimer 1: I develop software using generative AI. That doesn’t make me unbiased, but it does make me familiar with how these systems work.
Disclaimer 2: I used ChatGPT with this post. I had a long, rambling, stream-of-consciousness reply that no one wanted to read. So, I had ChatGPT summarize that into something more easily endured. I could have edited it down myself, but the tool made the process much faster. I still reviewed the summary and made edits to ensure it represented what I wanted to say. My original post was about 3-5 times longer, rambled, and didn't say much more. My prompt to ChatGPT: "Summarize this article to an appropriate length for a forum post. Don't introduce new information or ideas - just summarize. Remove or explain technical terms for a general non-technical audience."


The term AI is nearly meaningless at this level of discussion. It’s a cultural label, not a technical category. A mousetrap technically meets the definition of artificial intelligence - it acts independently in response to external input. The difference between that and, say, ChatGPT is scale, not kind. (Granted, the difference in scale is massive - but it is quantitative, not qualitative.)

Addressing the stated concerns

a. Commercial bias
General-purpose tools like ChatGPT and Gemini are not influenced by commercial interests unless explicitly instructed to be. If someone wants to smuggle advertising into a forum, they can do that with or without AI. In fact, a public forum is a terrible place to conceal bias—if I post that “X2000 is amazing” and, in fact, it kills puppies, someone will say so.

b. Source traceability
No post—human or machine—can be source-verified unless the poster provides sources. This is not unique to AI. A statement without citation is unverifiable regardless of the tools used to generate it.

The meaningful distinction
Requiring users to disclose AI usage presumes harm.

What is that harm?

Every member chooses what to post, regardless of the tools used to produce it. They can edit, reject, or rewrite what a tool produces, but take responsibility when they hit the "Post" button. If I use a thesaurus to find a word, I don’t cite thesaurus.com. Using a more sophisticated tool doesn’t change the principle. This post contains 100% my knowledge, experience, and opinions. I used a tool to help present them more concisely. Should people cite grammar checkers?

If we follow the “must cite the tool” logic consistently, then any post containing information the poster didn’t directly experience would require disclosure of how it was acquired. That standard is unworkable and selectively enforced depending on an emotional reaction to AI rather than to any actual measurable consequences.

The real issue is not whether AI was involved. The issue is whether the content helps or harms the community or individuals in the community.

A defensible concern
At least one member (DJ) argued that replying to an AI-generated post wasted their time because they believed they were helping a human. (This is my own paraphrasing of how I understood that point.) I respect that - it is actually a fair point and the only articulation I have read about how AI content might actually cause harm.

But discussions on a forum are public. The value is not confined to the people who post; most of it is in the public discussion itself. Even if the original recipient doesn’t benefit, others might. That’s the nature of forum dialogue - it’s not a private conversation. In this way, AI-generated content is helpful if it results in discussions that benefit the community.

Conclusion
Policing AI tools misses the mark. The relevant distinction is between content worth engaging with and content that adds no value or harms.

Frankly, there are many (presumably human) members that would benefit from using generative AI to evaluate a post for facts, sound reasoning, and tone before posting. I wish they would and I don't care if they cite or not.

You can police content, not tools. And, SBO, like most public forums, has taken the position of being fairly permissive with regards to content-policing, choosing to err on the side of seeing the discussion as inherently valuable, even if specific content is controversial or of questionable origin.

At the end of the day, if a member posts content that violates policies that are in place to preserve the intent of the forum then, that should be addressed, regardless of toolset. If this is done by a machine or a human - either way it should be addressed in a way that fairly applies the content policies.
 
May 10, 2004
114
Hunter 340 Bremerton, WA up from Woodland
My vote as a long time forum member - I come here to interact with fellow sailors. I am not interested in reading anything that is not based on personal experience. No so-called AI, no blowhards looking up answers on search engines, etc. Since AI is just a new branded packaging of the same old predictive text / image / speech pattern engines, has no actual intelligence, judgement, or soul, it has no place in a person-to-person community. Dave, if moderators can identify it - delete it. Either that, or create separate forum categories so the bots and blowhards can have at it with each other and those who don't care.

Thanks for all you do to keep the ship righted.....