AI and sailing decision-making

May 29, 2018
599
Canel 25 foot, Shiogama, Japan
Greetings from Japan.
I went out a couple of weeks ago on the tail of a typhoon.
Not too far and a fun ride.
However, on the way back in, a (small) Coast Guard cutter met us and hailed us.
Told us it was a no-no to be out in those conditions.
The port was full of (large) fishing vessels.

We had planned to go out yesterday, but there was an earthquake in Kamchatka.
So I bailed. I didn't want another run in with the Coast Guard!
One of my crew consulted ChatGPT about the safety of going out.
This was the reply.

Interesting strategy I tried today: I asked ChatGPT for a summary and got a beautifully concise account of the whole thing.
Well, apparently, Kamchatka just had the 6th-biggest quake ever recorded.

Aftershocks expected for the next 5 weeks or so.
Mixed news from the coast: TV says big damage to aquaculture (shellfish?), & no reason not to believe that. Apparently damage in the Sendai Bay area, too. The waves might not be much to look at but that's deceptive for what goes on under water.
Asked about risks, ChatGPT advises that:

  • tsunami advisories have been lifted by the Japan Met. Agency
  • sailing should be possible
  • there may be residual surges & strong currents, but these should be mostly subdued by tomorrow
  • aftershocks should not cause problems around here
  • proceed with caution if you go out, & keep in touch with the latest news
. . . if you trust ChatGPT (I'm using a freebie version) . . .
Cheers
Ian

 
Foswick

Apr 25, 2024
566
Fuji 32 Bellingham
Looks like a fun day on the water.

(The rest of this response has nothing to do with sailing and goes way into the weeds ... sorry ... )

Short Version: ChatGPT is probably right, in this case, but it is not capable of offering reliable advice. It is not that it doesn't work well. It is that it was not designed to do that job.

Really Long Version ...

As for ChatGPT (and similar generative AI systems), it is important to keep in mind that the system does not understand the information it conveys. It simply produces language that is meant to sound like a satisfactory response to the prompt. A side effect of that approach is that it will tend to agree with the user and to confirm the biases, explicit or implicit, in the user's prompt(s).

ChatGPT isn’t a truth or advice machine. It’s a language-pattern machine. That is really easy to forget, because it is very convincing and often correct/accurate/helpful. I have a PhD in Linguistics and work with LLMs, and I still forget it from time to time. It is a good illustration of how easily humans are fooled by anyone who simply sounds like they know what they are talking about. ChatGPT is designed to sound like it knows what it's talking about. The bet behind the design was that if it was really good at sounding correct, it would tend to actually be correct, and that is generally true. Unlike humans, though, it does not attempt to sound correct while knowing it is not.

But, when you ask it a question, it doesn’t check facts or reason things out like a person would. Instead, it predicts what words are likely to come next, based on patterns it learned from huge amounts of text. That means:
  • It can sound very confident and reasonable while being completely wrong.
  • It tends to agree with how you frame the question, because that’s the most frictionless continuation of the pattern of the conversation.
  • It doesn’t “know” anything or care about accuracy - its goal is to produce a response that looks like something a helpful expert would say.
Now, as it happens, since it is trained on a truly massive amount of text that is mostly reliable, when it generates a reasonable and convincing response, it tends to also be correct. That works well for questions like, "What is the world's largest manmade structure?" Even though it doesn't actually understand the question, it can generate a very reasonable response and, based on what it has read, is likely to generate a pattern that results in a true statement.
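
To make that concrete, here is a toy sketch of what "predicting the next word" means. This is not how ChatGPT is actually built (a real model scores a vocabulary of roughly 100,000 tokens with a huge neural network); the prompt and the scores below are invented for illustration.

```python
import math
import random

def softmax(scores):
    """Turn raw model scores into probabilities that sum to 1."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Invented scores a model might assign to candidate next words after the
# prompt "Sailing after the tsunami advisory was lifted should be ..."
scores = {"possible": 2.1, "safe": 1.4, "avoided": 0.3, "fun": -0.5}

probs = softmax(scores)
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok!r}: {p:.2f}")

# Sample the continuation in proportion to those probabilities. Note that
# nothing here checks whether the resulting sentence is TRUE; plausibility
# is the only criterion.
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print("continuation:", next_word)
```

The only criterion at every step is plausibility. Truth enters only indirectly, through the training text.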

But, it does not work well for questions that require some novel judgement based on unique conditions. All it can do is, in effect, tell you what you want to hear.

So, one word of caution when reading something ChatGPT said: also look at the entire discussion and the prompts. The user establishes the first part of a linguistic pattern with the prompt/question, and ChatGPT simply continues that pattern.

Its answers don’t exist on their own—they’re shaped by the prompt you gave it.

When ChatGPT responds, it’s not pulling from a fixed source or a decision-making algorithm. It’s just continuing your text with something that statistically fits.
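
Here is an equally toy illustration of "continuing your text." The two-word "model" below knows only which word followed which in a few invented training sentences, but it already shows the effect: what it produces is driven entirely by how you start the sentence. (Everything here is made up for illustration; a real LLM conditions on far more context and data.)

```python
import random
from collections import defaultdict

# Invented training text for a tiny bigram "language model".
training_text = (
    "sailing today should be fine . "
    "sailing today should be avoided . "
    "conditions look fine and calm . "
    "conditions look rough and dangerous . "
)

# Record which word followed which in the training text.
follows = defaultdict(list)
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def continue_text(prompt, n=4):
    """Extend the prompt by repeatedly picking a word that
    statistically followed the previous word in training."""
    out = prompt.split()
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# The continuation is shaped by the framing the prompt establishes:
print(continue_text("sailing today should"))
print(continue_text("conditions look"))
```

The model has no opinion about sailing. Whichever pattern your prompt starts, it finishes.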

It gets interesting when you ask about a recent event. In this case, ChatGPT will not rely solely on text it was previously trained on. It will do some active web searches to do some of the legwork that a prudent user might do. As a result, it bases its response largely on just a few sources of information from a handful of web searches and, in a way, just reports what it read.
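
If you like code, that flow looks roughly like the sketch below (the general pattern is called retrieval-augmented generation). web_search() and llm_complete() are made-up stand-ins so the example runs; the real system's internals are not public. The point is step 2: a handful of snippets becomes the model's entire "knowledge" of the event.

```python
def web_search(query, max_results=5):
    """Made-up stand-in for a real search call; returns canned snippets."""
    return [
        {"snippet": "JMA lifts tsunami advisory along the Tohoku coast."},
        {"snippet": "Aquaculture damage reported in Sendai Bay."},
    ][:max_results]

def llm_complete(prompt):
    """Made-up stand-in for a real model call; returns a canned reply."""
    return "Advisories have been lifted; sailing should be possible."

def answer_current_event(question):
    # 1. A few web searches stand in for the legwork a prudent user would do.
    results = web_search(question)

    # 2. These few snippets become the model's entire "knowledge" of the
    #    event. If no article mentions debris or currents, neither will it.
    context = "\n".join(r["snippet"] for r in results)

    # 3. The model then just continues a prompt built from those snippets.
    prompt = (
        "Using only the sources below, answer the question.\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)

print(answer_current_event("Is it safe to sail out of Shiogama tomorrow?"))
```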

With this event, for example, it is going to look at news sources as a likely source of current event information. So, these sources will confirm that, for example, advisories/watches/warnings are lifted from various areas. Since these sources are not talking about conditions relevant to sailors - mostly just coastal inhabitants - there is no current event information that suggests that sailing isn't a good idea.

As it happens, that seems to be a good assessment, in most areas, but not because it is based on complete information or sound judgement. It is based on early and incomplete information that lacks anything to contraindicate sailing.

I just tried asking similar questions and it gave me similar advice. It even said that areas that were heavily impacted were probably already safe for sailing. So, I mentioned that I was concerned about floating debris in those areas, and it confirmed that was an elevated risk, but only after I brought it up. It was not something it came up with on its own, for the reasons I mentioned.
 

colemj

Jul 13, 2004
642
Dolphin Catamaran Dolphin 460 Mystic, CT
If the part about the advisories being lifted was factually true, then the answer provided seems reasonable and benign. It told you it should be possible to sail, and that you should proceed with caution and stay updated with current news on the subject (I assume it didn't mean all news).

Doesn't sound dangerous, and I wouldn't expect it to go into the weeds of floating debris, contaminated water, or all the other possibilities. Those would be contained in current news, and any mariner should be aware of those possibilities.

People plan passages based on computer weather models, but they don't expect them to be correct in all ways or up to date on extreme local effects and conditions, nor do they reasonably blame the models when they aren't exact.

Mark
 

jssailem

SBO Weather and Forecasting Forum Jim & John
Oct 22, 2014
23,232
CAL 35 Cruiser #21 moored EVERETT WA
“We’ve only just begun”. At least that is the way the song goes.
 
Nov 22, 2011
1,253
Ericson 26-2 San Pedro, CA
Thank you, Foswick, for this really excellent and concise summary. While what you have written matches my general understanding of how these LLMs work, your explanation is really clear and to the point. It's quite good. I've bookmarked it so I can share it with others who seem to be confused about the limitations as well as the legitimate uses of AI.

Much appreciated.
 