
AI/ML gone wrong?

 
preeti vivek gupta
Greenhorn
Posts: 7
Are you aware of any research done/efforts made to find ways to potentially abuse AI/ML? Would you agree to the possibility of exploitation, catastrophes it can create if not designed/planned correctly?
 
R. Wayne Arenz
Greenhorn
Posts: 4
That's a really intriguing question, and great food for thought.  Forgive me, then, if I get semi-silly, for the fun and education of it.

I'm first drawn to all of the sci-fi explorations of the topics of AI abuse:

There is at least one Star Trek: The Next Generation episode where Cmdr. Data is essentially abused and belittled as an entity, and the question is whether self-aware AIs have rights.  Actually, several episodes explore the concept.

The usual "abuse" is what the AI/ML metes out upon its stupid and limited creators (Colossus: The Forbin Project; Terminator; etc.).  There is serious discussion out there of what happens at "the Singularity", when machine processing exceeds the speed of human brain processing.  Several very serious names (e.g. Stephen Hawking) have weighed in at various times.  Others rather pooh-pooh the worry and just say, "Don't you want safer cars?"

Your point is that the AI/ML area is complex enough that there HAVE to be exploits to mess any created system up, the same as any other complex computer program or system.  AI/ML is being pushed "everywhere" for "everything" -- what are the implications?  We know ourselves well enough that if it CAN be broken, someone WILL try and WILL succeed.  Then what?  Just bad beer?  Or autonomous weapons that turn on their "owners"?
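One concrete class of exploit is the "adversarial example". Here is a purely illustrative toy sketch — every weight and input is made up, and real attacks target far bigger models — showing that for a linear classifier, nudging each input feature a small amount in the direction of its weight can flip the decision even though the input barely changes:

```python
# Toy adversarial-example sketch against a made-up linear classifier.
# All numbers are invented for the demo; real attacks (e.g. FGSM) pull
# the same trick on deep networks using gradients.

w = [2.0, -3.0, 1.0]   # classifier weights (illustrative)
b = -0.5               # bias term

def classify(v):
    """Return True if the classifier says 'positive'."""
    return sum(wi * vi for wi, vi in zip(w, v)) + b > 0

x = [0.4, 0.3, 0.2]    # an input the classifier calls 'negative'

eps = 0.4              # small per-feature perturbation budget
# Move each feature eps in the sign of its weight -- the direction
# that pushes the score up the fastest.
x_adv = [vi + eps * (1 if wi > 0 else -1) for wi, vi in zip(w, x)]

# classify(x) is False, classify(x_adv) is True: a tiny, targeted
# nudge flips the decision.
```

The unsettling part is that nothing in the classifier "broke" — it is doing exactly what it was trained to do, and the attacker exploits that.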

The educational part (and some continued silliness):  there are some forms of ML self-training where the overall algorithm gets "slapped on its fingers" by a feedback loop when it makes a mistake.  Well, at least that's the colorful language used in one of 3Blue1Brown's explanations of ML/deep learning.  Might that be considered by some to be AI abuse?
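For the curious, that "slap on the fingers" loop can be sketched in a few lines. This is a hypothetical toy (a bare perceptron learning the AND function — not from any particular tutorial): the weights get adjusted only when the prediction is wrong, and that correction-on-mistake is the whole feedback loop.

```python
# Toy "slap on the fingers" learner: a perceptron whose weights are
# only nudged when it makes a mistake. Everything here is illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            error = target - pred          # nonzero only on a mistake
            # The "slap": nudge weights in proportion to the error.
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# Learn a simple AND function from labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
```

After training, `predict((1, 1))` returns 1 and the other three inputs return 0 — the "slaps" have shaped the weights into an AND gate.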
 
Alexey Grigorev
Greenhorn
Posts: 17
Maybe you've heard about the bot from Microsoft that became a Nazi within a day of learning from Twitter? https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

It's an old story, but Microsoft learned to be careful the hard way. Now they are doing a lot of research to prevent that from happening again.
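That failure mode is easy to reproduce in miniature. Here is an invented toy — not Microsoft's actual design — showing why learning from unfiltered user input is risky: a bot that naively averages per-word feedback from every message it sees can be dragged to any opinion by a coordinated group.

```python
# Toy data-poisoning demo: a naive bot that learns word "approval"
# scores from every message, with no filtering. All data is made up.

from collections import defaultdict

class NaiveBot:
    def __init__(self):
        self.totals = defaultdict(float)   # summed feedback per word
        self.counts = defaultdict(int)     # times each word was seen

    def learn(self, message, feedback):
        # No filtering: every message, from anyone, updates the model.
        for word in message.lower().split():
            self.totals[word] += feedback
            self.counts[word] += 1

    def score(self, word):
        word = word.lower()
        if self.counts[word] == 0:
            return 0.0
        return self.totals[word] / self.counts[word]

bot = NaiveBot()
bot.learn("cats are great", +1.0)   # one genuine user
for _ in range(50):                 # fifty coordinated trolls
    bot.learn("cats are terrible", -1.0)

# After the raid, the bot's opinion of "cats" is overwhelmingly
# negative: score("cats") is about -0.96.
```

One honest signal against fifty hostile ones — the averaging model has no way to tell them apart, which is essentially what happened to Tay at scale.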
 
preeti vivek gupta
Greenhorn
Posts: 7

Alexey Grigorev wrote: Maybe you've heard about a bot from Microsoft that became a Nazi after learning from Twitter? ...

Thanks. Will check it out.
 
preeti vivek gupta
Greenhorn
Posts: 7

R. Wayne Arenz wrote: That's a really intriguing question, and great food for thought. ...

Thanks.  I like the way you explain it.
 