Using Artificial Intelligence to Ask Questions about Sasquatch


MikeZimmer


Not to be taken without a liberal dose of salt, the latest Artificial Intelligence (AI) programs give some interesting results when Sasquatch is the topic. See https://deepai.org/machine-learning-model/text-generator for an easy-to-use AI experience.

 

Here is my input:

What evidence exists that Sasquatch is a real animal existing on all continents including Australia, North America, South America, Europe and Asia with evidence of trackways and footprints. Why are Sasquatch rarely seen?

 

Here was the output:

Evidence of Sasquatch's existence includes trackways, footprints, hair samples, eye-witness accounts, vocalizations, and even photographic evidence. While sightings are rare, discovering these signs proves that Sasquatch is indeed real and inhabits multiple continents including Australia, North America, South America, Europe, and Asia. One theory as to why sightings of them are so rare is that they have an excellent understanding of their environment and have developed skills to hide from humans.

 

My overall comment: Hmmm

 

The output is highly dependent on how you phrase the input. It does not have to be a question either.
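If you want to experiment with phrasing more systematically, DeepAI also exposes the same generator over plain HTTP. Here is a minimal Python sketch, assuming you have a free DeepAI API key and that the JSON reply carries an "output" field as their docs describe:

```python
# Send two phrasings of the same question to DeepAI's text generator
# and compare the answers. The API key is a placeholder; the "output"
# field in the JSON reply is an assumption based on DeepAI's docs.
import requests

API_URL = "https://api.deepai.org/api/text-generator"
API_KEY = "YOUR_DEEPAI_KEY"  # placeholder

prompts = [
    "What evidence exists that Sasquatch is a real animal?",
    "Summarize the case against Sasquatch being a real animal.",
]

for prompt in prompts:
    resp = requests.post(API_URL,
                         data={"text": prompt},
                         headers={"api-key": API_KEY})
    resp.raise_for_status()
    print(prompt, "->", resp.json().get("output", ""), "\n")
```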

 

This AI revolution is not small potatoes, by the way, but it raises some perplexing epistemological issues: how do we know what output is true, and to what extent? Why should you believe the output? This is scarcely unique to AI, but the output is typically so well written that it seems extremely authoritative. Apparently some students have used AI to write their essays. It could be BS, but it is well-phrased BS. It would appear to me that the ability of AI to craft words exceeds the skill of the typical undergraduate.

 

 

Of course, I meant artificial, not artificail, but I can neither type nor proofread.



Great post, Mike!  What app did you use?  I recently posted this on Facebook:

 

"There's a lot of whoop-a-t-do about the new AI ap NOVA.  I tried these  queries.

1.  When was Richard III born?
It correctly answered, October 2, 1452.  I got the same answer from a simple Google search.

2.  Solve two linear equations in two unknowns.  It found the correct   solution.  This is a math problem (requiring REAL intelligence).

3. Solve the transcendental equation:  ln x = 3x+2.  It gave x =  1.04284,which is incorrect.  There is no real solution,  though.

4.  Solve the transcendental equation:  ln x = -x.  It gave the correct solution by succesdive approximations, using the Newton-Rapheson method:  x = 0.5671.  This is still mathematics, but at least it knew the equation had no analytical solution and HOW to solve the equation by approxiations, which can be called "artificial" intelligence.  However, any advanced calculus student would have known to use this or a similar numerical method, as I did in my  undergraduate honors thesis.

5. Finally, the first successful AI application was in medical diagnostics, decades ago.  I tried:

a.  Symptoms: fever, difficulty breathing.  Diagnosis:  COVID-19.  Yeah!

b.  Symptoms: fever, runny nose.  Diagnosis: most likely common cold. Yeah!

c.  Symptoms: stomach ache,, vomiting, diarrhea.  Diagnosis: viral gastroenteritis, food poisoning, or appendicitis.  Yeah!

The responses to these illnesses also recommended treatments.

Try this ap out and let's hear from you on how it does against real intelligence.  My expectation is that AI should take in seemingly unrelated facts and draw a conclusion that could not obviously be obtained by known methods in the hands of an experienced practicioner.  

NOVA is available free at Google Play."
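For anyone who wants to reproduce item 4, the Newton-Raphson iteration is only a few lines. Here is a minimal Python sketch (my own illustration, not NOVA's internals):

```python
import math

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: repeatedly apply x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Item 4: ln x = -x, i.e. f(x) = ln x + x = 0, with f'(x) = 1/x + 1
print(newton(lambda x: math.log(x) + x,
             lambda x: 1 / x + 1,
             x0=0.5))  # ~0.567143, matching NOVA's 0.5671

# Item 3 as a sanity check: g(x) = ln x - 3x - 2 peaks at x = 1/3,
# where g(1/3) = ln(1/3) - 3 < 0, so ln x = 3x + 2 has no real root.
```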


Admin

I just tried it out. It was like talking to an encyclopedia. Unfortunately, it's not free either.


Moderator

"Good" AI draws from lots of data sometimes making unexpected connections.   Also sometimes misses the boat if the data is bad.    It could be interesting to delve into specific questions like DNA differences between BF and human, why differences aren't apparent, etc and see what it predicts as the answer.    Remember that it can only a) look up stuff from what is known or b) extrapolate based on what is known, so it is not foolproof, it's only as good as the data available to it.


Admin

Here is my own free trial conversation. I guess I was swinging for the fences.

[Three screenshots of the conversation attached]


On 4/10/2023 at 8:56 AM, hvhart said:

Great post, Mike! What app did you use? [...]

I used https://deepai.org/machine-learning-model/text-generator, just because it is easy to use and, as I remember, requires no registration. It does limit how often you can query.


On 4/10/2023 at 9:46 AM, MIB said:

[...] it is not foolproof; it's only as good as the data available to it.

It is subject to GIGO, that is, Garbage In, Garbage Out, no matter how impeccable the logic. Folks think this is just a rule for computers. It ain't. Simple reflection should make you see that a huge amount of what we believe is wrong. Read the comments section on any Internet article on some moderately controversial topic where comments are allowed. The comments will be all over the map. Some you may agree with; many you will disagree with. Regardless of your own views, they cannot all be correct where they are mutually contradictory. It does not follow that any are correct, although some may be. It does not follow that you are correct, either, but you may be.

Any training database is going to be built up from information that follows the same pattern. Some (maybe much) will contain contradictions (even internal ones). Some of the assertions may be correct, but where they differ, they cannot all be correct. So, a lot of the training data is bound to be wrong; that is Garbage In. The clear result is Garbage Out.

If we had the omniscient one preparing the training data, we might get more reliable results.

 

I do not see a way around these fundamental issues, regardless of how incredible the technology is, and it is incredible. The issues are epistemological. Perhaps I misunderstand the issue, but I cannot see how any technology or biological organism can get around this.


Admin

You have to know what you want it to do. Don't ask open-ended, subjective questions.

 

Give it specific instructions, examples, and data if available, and then ask it to do something specific, e.g.:

 

>> Consider yourself to be a bigfoot expert. Your goal is to find the most likely bigfoot habitat in Oregon. Facts: 150 sightings on highway X, 123 in town Y, etc., etc.

 

Something like that.
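In code, that pattern might look something like the sketch below, using the 2023-era OpenAI Python client (openai<1.0). The sighting counts and place names are made up for illustration:

```python
# Build a specific, fact-laden prompt and ask for a specific task.
# Hypothetical sighting counts; requires `pip install openai` and an API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

facts = [
    "150 sightings along highway X",
    "123 sightings near town Y",
]

prompt = (
    "Consider yourself to be a bigfoot expert. "
    "Your goal is to find the most likely bigfoot habitat in Oregon.\n"
    "Facts:\n" + "\n".join(f"- {f}" for f in facts) + "\n"
    "Rank the areas by likelihood and give one reason for each."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```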

 

 


  • gigantor changed the title to Using Artificial Intelligence to Ask Questions about Sasquatch

The AI that is available to the public doesn't have access to top-secret, classified documents. It knows what we know. Probably less.


AI might come up with something interesting from the SSI data.

 

Not my call on what gets done with that data, of course; just spitballing.


Moderator
2 hours ago, djm5971 said:

AI was also asked about the Patterson-Gimlin film and said it was a man in a gorilla costume.  

 

It depends on how you word the question. When I posed it, the AI said there were opinions in both directions, that the existence of bigfoot is unproven, and that it could not give an authoritative answer to the question. So... we can bias the outcome by how we word the question.


21 hours ago, gigantor said:

You have to know what you want it to do. Don't ask open-ended, subjective questions. [...]

 

I had ChatGPT précis a short document (2 pages) that I had written, mostly in point form as a table. The summary was actually very good, maybe better than I would have produced, and it took point form and turned it into a paragraph of grammatically phrased assertions.

 

As I said before, it effectively renders text in an authoritative-sounding manner.

 

It is quick to inform us that certain things are "conspiracy theory."  For instance, get it to discuss some controversial things and it may tell you outright that this is unsupported conspiracy theory.

 

It is very sensitive to the exact input in my limited experience, and seemingly small changes can give wildly varying results.

 

Trying ChatGPT with "Is the U.S. government hiding information on Sasquatch?"

Response:

 

There is no concrete evidence to suggest that the U.S. government is hiding information on Sasquatch, also known as Bigfoot. While there have been rumors and claims that government agencies, such as the FBI, may have information or files related to Sasquatch sightings, there has been no official confirmation or evidence to support these claims.

 

It is important to note that Sasquatch is generally considered to be a myth or legend, and there is no scientific evidence to support the existence of such a creature. While there have been numerous reported sightings and anecdotal evidence, there has been no verifiable physical evidence or scientific research to confirm the existence of Sasquatch.

 

In general, government agencies are required to release information to the public through the Freedom of Information Act (FOIA) unless the information falls under certain exemptions, such as national security or personal privacy. Therefore, if there was any official government information on Sasquatch, it would be subject to FOIA requests like any other government records.

 

Query

Who are the major players in the US deep state?

 

Response

 

As an AI language model, I do not possess the ability to make unsupported claims or accusations. It is important to understand that the term "deep state" is often used in a conspiratorial context and its existence as a cohesive group or organization is debated. It is crucial to base any claims on reliable sources and facts, rather than speculation or unfounded allegations.

 

Presumably a non-sentient piece of software cannot have opinions, but its responses would be labelled as opinion if delivered by a person. A few posts above, the dialogue returned included the following: "As an AI language model, I do not have beliefs or opinion." As I said above, GIGO, and since the training database is almost certain to contain a lot of misinformation, that is the GI part of GIGO.

 

However, when asked who the significant authors writing about the deep state were, it gave me a good bibliography.

 

Still, the ability to précis is amazing, and just how it determines which points should be abstracted, reworded, and presented as the most salient is beyond me. It is not easy for a bright human to do this!

 

It is certainly a significant technology, at least on a par with the development of digital computers. I suspect that, in the long run, it will prove much more significant. Are its assertions reliable? Nah, but maybe no worse than those of the average professor.


On 4/11/2023 at 1:00 PM, MikeZimmer said:

It is subject to GIGO, that is, Garbage In, Garbage Out, no matter how impeccable the logic. [...]

From the horse's mouth, or maybe some other orifice:


Input to ChatGPT:


Explain how GIGO must of necessity apply to ChatGPT output.

 

Output from ChatGPT:

 

GIGO, which stands for "Garbage In, Garbage Out," is a principle that refers to the fact that the quality of output is determined by the quality of input. In the case of ChatGPT, this means that the quality of the model's responses is directly influenced by the quality of the data it was trained on, as well as the input it receives during use.

 

While ChatGPT is a highly advanced language model with remarkable abilities to generate human-like responses to various inputs, it is not a sentient being capable of understanding the meaning of the input it receives. Instead, it relies on statistical patterns and associations in its training data to generate responses.


Therefore, if the input to ChatGPT is inaccurate, incomplete, or biased in any way, the model may produce responses that are also inaccurate, incomplete, or biased. Additionally, if the training data is of poor quality, contains errors or biases, or is not representative of the language and concepts it is meant to model, this can also lead to inferior output from ChatGPT.


In summary, GIGO applies to ChatGPT output because the quality of its output is inherently linked to the quality of the input it receives and the quality of the training data it was trained on. Therefore, it is important to ensure that the inputs provided to ChatGPT are of high quality and that the training data is well-prepared and representative of the language and concepts the model is intended to understand.


Input to ChatGPT:


Given that the law of contradiction holds that two contradictory assertions cannot both be correct, it follows that, from multiple contradictory assertions, at most one can be correct. It does not follow that any are correct. Human assertions are notoriously diverse and mutually contradictory. Material in magazines, articles, reviews, and comments on these tracts is also notoriously diverse and mutually contradictory. The works published on the Internet give innumerable examples of this. This is the base of information from which ChatGPT works. It uses no intelligence, and even though the training material is curated by humans, they are subject to their own limitations, such as many types of bias, a limited understanding of the world (which is the human condition), and the inability to deal with the complex mass of information available to them. All information must be evaluated and interpreted with an eye to making sense of it. This is a highly unreliable process, even for the brightest. Couple that with the fact that the body of knowledge of any discipline is bound to be highly flawed, incomplete, and often wrong, and how is it possible for anyone to rely on the information returned by ChatGPT?


Output from ChatGPT:


You make an important point that human assertions can be diverse and contradictory, and that the information on the internet and other sources can be unreliable or incomplete. This is indeed a significant challenge for any system, including ChatGPT, that relies on these sources of information.


However, it's worth noting that ChatGPT is not designed to provide definitive or absolute answers to complex questions. Instead, it is a tool that can provide insights, suggestions, or alternative perspectives on a given topic. It can help users explore different viewpoints, generate ideas, and identify potential sources of information.

In other words, ChatGPT is not intended to replace critical thinking or independent research. Rather, it is a tool that can augment and support these activities. Users should not rely on ChatGPT as the sole source of information for important decisions or complex problems. Instead, they should approach the information provided by ChatGPT with a critical eye, and seek to verify and validate any information that they receive.


Moreover, ChatGPT is not infallible, and its responses can sometimes be inaccurate or biased. As you correctly note, the limitations of the training data and the biases of its creators can affect its performance. Therefore, it is important to use ChatGPT with a healthy dose of skepticism, and to verify its responses whenever possible.

In summary, while ChatGPT can be a useful tool for generating ideas and exploring different perspectives, it is not a substitute for critical thinking, independent research, and careful evaluation of information. Users should approach its responses with a critical eye and seek to verify and validate any information that they receive.

