AI and the SBO Forums -- Part 2: Community Expectations

Jan 11, 2014
13,594
Sabre 362 113 Fair Haven, NY
Is this an acceptable model for AI on the SBO forum? What is a model that will work for this forum?
I think this has been made abundantly clear. If a member posts content that is in whole or in part AI-generated, it should be acknowledged as such. Even when those formulaic 11th-grade opening paragraphs are posted, the work should be noted as AI-generated.
 

dLj

Mar 23, 2017
4,841
Belliure 41 Back in the Chesapeake
While this article is "food for thought," I am unclear concerning its relevance to AI use on this forum.
My intention was simply to offer "food for thought".

How this forum will handle AI use has already been decided. I happen to fully agree with that decision.

dj
 
Nov 21, 2012
800
Momentarily Boatless Port Ludlow, WA
I have been using AI on occasion. It is sometimes lazy and gives half-considered answers. When corrected, it cheerfully acknowledges the mistake and moves on. It is incapable of being embarrassed by its own lack of attention or accuracy.

AI provided a detailed financial analysis of a problem ahead of a meeting with my financial planner. Though it took several hours of refining the prompts, I was better prepared for the meeting as a result of the AI analysis.

On the other hand, I asked for a spreadsheet of all the programming parameters of an Arco Zeus voltage regulator, and it delivered about a quarter of them. Of those, about a third were hallucinations. I should check whether it has filled in the gaps in the couple of months since I made the inquiry. I told it that it had its head up its wazoo, which it also cheerfully acknowledged. I wasn't expecting much, and it met that expectation.
 
Apr 25, 2024
738
Fuji 32 Bellingham
My wife is a school teacher - high school English. So, it is no surprise that a high percentage of assigned essays are obviously AI-generated, and a larger chunk probably are. The problem, of course, is that it is difficult to prove and problematic to try.

So, I created a tool that uses generative AI, not to detect the problem but to outsmart it. She uploads the assignment's rubric, then uploads student essays for that assignment. The system analyzes each essay, considers the rubric and age group, then generates a short in-class quiz that tests the student's knowledge and understanding of the essay they claim to have written. The quiz is designed to focus on questions that would identify academic dishonesty - such as asking the student to define an advanced word that they probably did not come up with on their own. (Of course, she needs to grade these quizzes, but that is a drop in the bucket compared to the effort of reading and grading an essay the student didn't write.)
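For anyone curious about the mechanics, here is a rough sketch of the core step in Python. It assumes the Anthropic API; the function name, prompt wording, and model choice are illustrative stand-ins, not my actual implementation.

```python
# Illustrative sketch only - assumes the Anthropic Python SDK
# (pip install anthropic) with ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

def generate_comprehension_quiz(rubric: str, essay: str, grade_level: str) -> str:
    """Generate a short in-class quiz that tests whether the submitting
    student actually understands the essay they turned in."""
    prompt = (
        f"Assignment rubric:\n{rubric}\n\n"
        f"Student essay ({grade_level}):\n{essay}\n\n"
        "Write a five-question in-class quiz that tests the writer's "
        "understanding of this specific essay. Favor questions that would "
        "expose unfamiliarity with the text, such as asking the student to "
        "define an advanced word the essay uses or to restate one of its "
        "arguments in plain language."
    )
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # any capable model works here
        max_tokens=1000,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text
```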

She doesn't have to go through the drama of accusing students who might or might not have cheated. Instead, the quiz simply ensures that everyone who turns in an essay at least understands the essay they claim to have written. Regardless of what tools a student did or didn't use, the standard is the same - understand what you turned in. This is the minimal threshold for having your essay looked at and graded - AI or not.

My point is that generative AI changes many things. You can either shout at the wind about it or you can figure out how to adjust and thrive. Most of the "problems" that people complain about are only problems if you insist on doing and viewing things the same way you always have. People will be using AI to write, and skills in that area will atrophy. That seems to be a given.

But, literacy and the printing press killed the oral tradition. The ability to memorize and recite long passages is all but gone, despite thousands of years of use. But what have we gained? We don't bemoan the loss of the oral tradition much, because we grew up in a time when it was already gone. In 100 years, this controversy, too, will be settled, and this conversation will be humorous to our descendants. I'm not predicting the future. Things could go any number of ways. But I can say with some certainty that each new generation will evolve and rise to the challenges of its time, and that the older generation will be sure that they won't.

I just read an article in the Washington Post about an unexpected side effect of generative AI. It observed that companies are suddenly starting to value experience over youth - reversing a trend that has been a real problem for a long time. Generative AI is most effective at replacing entry-level workers, so companies are finding they are better off hiring a few experienced people who are comfortable using AI tools. That is great news for people with experience who don't reject AI tools. It is bad news for Gen Z and younger - which was kind of the point of the article.
 
Jun 17, 2022
424
Hunter 380 Comox BC
A

I understand the concern. But like calculators, electronic typewriters, and computers before it, AI is still a tool — the human remains part of the equation.

If I ask my AI agent a question and you ask yours the same question, we will not get identical answers. The output reflects the user: how the question is framed, what sources are emphasized, how the response is challenged, refined, and verified. Over time, an agent learns the style, depth, and standards its user expects. That influence is real.

This isn’t fundamentally different from many accepted practices. Academic papers routinely list a single author, even though graduate students or collaborators may have done much of the underlying research, drafting, or data preparation. Authorship implies responsibility, not solitary labor.

Ultimately, I am accountable for what I post under my name. Whether the words come directly from me, from a dockside conversation yesterday, or from information I extracted and refined through AI using my own instructions, the responsibility is the same.

If someone simply copies and pastes generic AI output, it will be obvious and deserves criticism. But if a person carefully frames the question, challenges the response, validates the information, and integrates it thoughtfully, that is work — and it reflects judgment, not automation.

===============
B

I hear the concern. But like calculators, electronic typewriters, computers and now AI, the human is still part of the equation. If I ask my AI agent a question and you ask your AI agent the same question, we will 100% get different responses. Through my interactions and my directions, my agent knows what I expect in terms of response style, research sources, etc...

It's no different than someone publishing a paper to a scholarly journal, for which 5 or 6 grad students made the majority of the actual work.

At the end of the day, I have to own up to the posts under my name. Does it really matter if they are my exact words, repeating what a friend told me at the dock yesterday or what I extracted out of AI through my particular instructions?

If I simply cut and paste AI gibberish, it will be obvious. If I've framed the query carefully, challenged the response with further refinements and questioned and verified the accuracy, is that not work I've done ?
===============
1. Which of the above is in my original words?
2. Which is AI-derived?
3. Which is easier to read?
 

jssailem

SBO Weather and Forecasting Forum Jim & John
Oct 22, 2014
23,931
CAL 35 Cruiser #21 moored EVERETT WA
Hi Kappy. You voted for the first one. Is that a vote for:
  1. Original thought
  2. AI derived
  3. Easier to read
 

jssailem

SBO Weather and Forecasting Forum Jim & John
Oct 22, 2014
23,931
CAL 35 Cruiser #21 moored EVERETT WA
I found the first passage easier to read. It struck me as logical in context, like higher-level cognitive processing of connected ideas.

The second passage came across as a forced, folksy version of the same content.

I suspect the original ideas and concepts were yours, but that AI may have had a hand in polishing the two formats.
 
Apr 25, 2024
738
Fuji 32 Bellingham
My guess is that you wrote B and had generative AI (ChatGPT, if I had to guess) revise it for clarity, flow, etc. There are some telltales that suggest ChatGPT specifically. I think the point is made, though. The thoughts are yours, but you used a tool to polish how they are presented. There might also have been some previous back-and-forth to help you refine your point.

That refinement is important and a really valid use of AI. Not all models are equally useful at this. ChatGPT, for example, tends to agree and be sycophantic, and is easily made to hallucinate in order to create minimal friction with the user's position. Hands down, Claude.ai is the best performer in this regard. It is not perfect, but it is trained specifically to admit when it doesn't know something and to adhere more closely to supported knowledge. I frequently use it for: "My idea is ____. Tell me what is wrong with that idea." It helps me consider other angles more objectively, without the egos that come into play when people take a position and try to be "right".
 
Sep 17, 2022
161
Catalina 22 Oolagah
My guess is that you wrote B and had generative AI (ChatGPT, if I had to guess) revise it for clarity, flow, etc. There are some telltales that suggest ChatGPT specifically. I think the point is made, though. The thoughts are yours, but you used a tool to polish how they are presented. There might also have been some previous back-and-forth to help you refine your point.

That refinement is important and a really valid use of AI. Not all models are equally useful at this. ChatGPT, for example, tends to agree and be sycophantic, and is easily made to hallucinate in order to create minimal friction with the user's position. Hands down, Claude.ai is the best performer in this regard. It is not perfect, but it is trained specifically to admit when it doesn't know something and to adhere more closely to supported knowledge. I frequently use it for: "My idea is ____. Tell me what is wrong with that idea." It helps me consider other angles more objectively, without the egos that come into play when people take a position and try to be "right".
I'm into this a bit late, but I don't believe that AI should be used at all. AI is a trap. If I understand what you are saying, you are asking AI to help you consider other angles more objectively. Respectfully, I believe that you, and not AI, should consider all the angles and come objectively to your own conclusions. I have a lot more faith in you and your opinion than in AI.

Critical thinking is an important part of the "human equation". Before I joined this forum, I spent several months reading past posts by members and verifying that the folks here were not only knowledgeable but helpful as well. During that research period, I gained a lot of knowledge, and it helped me greatly in formulating my questions and in reaching out privately to members with specific questions based on their published experience. I hope this makes sense. AI is being pushed on us, both as a society and as individuals. That in itself should raise every hackle on our collective bodies.

George
 
Apr 25, 2024
738
Fuji 32 Bellingham
If I understand what you are saying, you are asking AI to help you consider other angles more objectively. Respectfully, I believe that you, and not AI, should consider all the angles and come objectively to your own conclusions.
Unfortunately, I don't always think of every angle. I have biases and blind spots. Generative AI only "knows" what it has read - which is WAY more than what I have read. So, it has been exposed to more views and ideas than I have on many subjects. In the end, I need to evaluate whether what it says is true or worth considering. But, just as I read other authors' views and evaluate what weight to give them, I do the same with generative AI. Sometimes it makes good points worth considering. Sometimes it doesn't. Same as humans.

I frequently read something someone wrote and I think, "Hmm ... they might be onto something" or "Hmm ... they have no idea what they're talking about." The danger isn't in using AI. The danger is in blindly accepting what it says. But, the same could be said for humans.

It's way too long to paste here, but I've attached a conversation I had with Claude.ai on the subject of HMPE for running rigging. (My text is blue - Claude's is black.) I have a certain view, and I was not convinced that I had thoroughly considered the subject. (I'm still not certain I have.) If you labor through the whole conversation, you will see how I used the tool to "think out loud". In the end, the summary I asked it to produce includes just my thoughts and conclusions, but it is much more cohesive by the time we have talked it through.

I am not taking the position that the summary is right or wrong. I would say that it is still incomplete and has some logical laziness. But, it is a product of my views and experience. I don't think there is anything in that summary that didn't come from me or at least represent a position that I explicitly agreed to.

That kind of back-and-forth conversation isn't easily had with another person. Egos get in the way. People try to be "right" or have other agendas. Claude's agenda was to help me refine my thinking on this and, at times, to challenge it. (I think its challenges were weak. There are actually better arguments that I am wrong. But, that's fine - it still helped me think it through a bit.) If I cared, I would criticize parts of the summary and continue to refine it. But, as it is, that is about as far along in my thinking as I am, and I don't think Claude has any real data to offer. So, that conversation has pretty much run its course.
 

Attachments