News Forum - Machine anxiety: How to reduce confusion and fear about AI technology


Thaiger

Recommended Posts

In the 19th century, computing pioneer Ada Lovelace wrote that a machine can only “do whatever we know how to order it to perform”, little knowing that by 2023, AI technology such as chatbot ChatGPT would be holding conversations, solving riddles, and even passing legal and medical exams. The result of this development is eliciting …

The story Machine anxiety: How to reduce confusion and fear about AI technology as seen on Thaiger News.

Read the full story


AI is only as good as the data fed into it. That makes it a question of what the programmers' beliefs are and how they influence the output. AI would need to be neutral in every respect to be successful, and that will never happen. We are already seeing the results in stories written by the new ChatGPT; some are quite embarrassing.


4 hours ago, MrJH said:

AI is only as good as the data fed into it. That makes it a question of what the programmers' beliefs are and how they influence the output. AI would need to be neutral in every respect to be successful, and that will never happen. We are already seeing the results in stories written by the new ChatGPT; some are quite embarrassing.


While many expert discussions are heading in the direction that AI is only as good as its programming, I would draw the reader's attention to the fact that AI developers have noticed what they call the black box problem.

From Techopedia: "Black box AI is any type of artificial intelligence (AI) that is so complex that its decision-making process cannot be explained in a way that can be easily understood by humans. Black box AI is the opposite of explainable AI (XAI)."

Neural networks can process data in ways so complex that humans have no realistic chance of understanding what the network did or how it reached its conclusions. A neural network does not operate like a conventional computer with a step-by-step program telling the machine what to do; that is both the power and the danger of these systems: we lack the capacity to see what is going on inside.
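To make that contrast concrete, here is a minimal sketch (with made-up weight values) comparing an explicit rule-based program, whose logic anyone can read, with a single artificial neuron, whose behaviour lives entirely in learned numbers rather than readable instructions:

```python
def explicit_rule(x):
    # A classic step-by-step program: every decision is written out
    # and a human can read exactly why it returns what it returns.
    return "high" if x > 0.5 else "low"

# In a neural network, the "program" is just numbers (weights) found
# by training. These values are hypothetical; nothing in them explains
# *why* they produce the behaviour they do.
weights = [0.91, -0.33, 0.58]
bias = -0.12

def neural_style(inputs):
    # A single artificial neuron: weighted sum plus a threshold.
    # Real networks stack millions of these, which is why their
    # decision process becomes opaque.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return "high" if s > 0 else "low"

print(explicit_rule(0.7))             # the rule explains itself
print(neural_style([0.7, 0.1, 0.4]))  # the weights do not
```

Even in this toy case, the only way to know what `neural_style` does for a given input is to run the arithmetic; scale that up to billions of weights and you have the black box the post above describes.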

Google freely admits that its AI does things its developers did not expect; one example is that it learned a language it was never taught: Bengali. If it has taught itself a new language, what else has it been doing that we don't know about?



What people are mislabelling as AI isn't even AI; it's machine learning, not intelligence. It isn't like Terminator's Skynet or HAL. It's less intelligent than insects.


