News Forum - Machine anxiety: How to reduce confusion and fear about AI technology



In the 19th century, computing pioneer Ada Lovelace wrote that a machine can only “do whatever we know how to order it to perform”, little knowing that by 2023, AI technology such as chatbot ChatGPT would be holding conversations, solving riddles, and even passing legal and medical exams. The result of this development is eliciting …

The story Machine anxiety: How to reduce confusion and fear about AI technology as seen on Thaiger News.

Read the full story

AI is only as good as the data fed into it. It then becomes a question of what the programmers' beliefs are and how they influence the output. AI would have to be neutral in every respect to be successful, and that will never happen. We are already seeing the results in stories written by the new ChatGPT; some are quite embarrassing.

4 hours ago, MrJH said:

AI is only as good as the data fed into it. It then becomes a question of what the programmers' beliefs are and how they influence the output. AI would have to be neutral in every respect to be successful, and that will never happen. We are already seeing the results in stories written by the new ChatGPT; some are quite embarrassing.


While many expert discussions head in the direction that AI is only as good as its programming, I would draw the reader's attention to something AI developers themselves have noticed: what they call the black box problem.

From Techopedia: "Black box AI is any type of artificial intelligence (AI) that is so complex that its decision-making process cannot be explained in a way that can be easily understood by humans. Black box AI is the opposite of explainable AI (XAI)."

Neural networks can process data in ways so complex that humans have no realistic chance of understanding what they did or how they reached their conclusions. A neural network does not operate like a conventional computer running a step-by-step program that tells the machine exactly what to do. This is both the power and the danger of these systems: we lack the capacity to see what is going on inside them.
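To make that contrast concrete, here is a minimal sketch (my own illustration, not from the article): an explicit rule whose logic anyone can read, next to a single-neuron "network" trained on the same task. Both give the same answers, but the network's behaviour is encoded only in numeric weights, with no human-readable reasoning. Function names, the learning rate, and the toy OR task are all hypothetical choices for illustration.

```python
import random

# Explicit, step-by-step rule: you can read exactly why it answers as it does.
def rule_based_or(a, b):
    return 1 if (a == 1 or b == 1) else 0

# Tiny "neural network": one perceptron trained on the same OR task.
# After training, its behaviour lives in numeric weights, not readable rules.
def train_perceptron(data, epochs=100, lr=0.1):
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(2)]
    bias = random.uniform(-1, 1)
    for _ in range(epochs):
        for (a, b), target in data:
            out = 1 if (w[0] * a + w[1] * b + bias) > 0 else 0
            err = target - out
            # Perceptron learning rule: nudge weights toward correct output
            w[0] += lr * err * a
            w[1] += lr * err * b
            bias += lr * err
    return w, bias

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, bias = train_perceptron(data)

# Both agree on every input, but the perceptron's "reason" is just numbers.
for (a, b), target in data:
    net_out = 1 if (w[0] * a + w[1] * b + bias) > 0 else 0
    assert net_out == rule_based_or(a, b) == target
print("weights:", w, "bias:", bias)  # opaque numbers, not an explanation
```

A single neuron this small is still inspectable, of course; the black-box problem arises when the same idea is scaled to billions of weights, at which point tracing any individual decision back through the numbers becomes practically impossible.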

Google freely admits that its AI does things its engineers did not expect - one example is that it learned a language it was never taught, Bengali. If it has taught itself a new language, what else has it been doing that we don't know about?

