TensorFlow 2.0 in Action: Computer Says No

 
Don Horrell
Ranch Hand
Posts: 35
Hi Thushan.

What techniques can we use to understand the often opaque "computer says no" neural networks produced using ML?
For example, if a bank uses ML to decide whether to approve a loan, a customer who is rejected may wish to know the main factor behind the rejection.

Is it a case of taking the customer's data and re-running the ML model on several small variations of that data, to see whether the loan would have been approved?
For example, asking the model to score the same loan while pretending that the customer has a higher salary, or that the customer's postcode is different?
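
Something like this rough sketch, perhaps? (Here `loan_model` is a hypothetical trained Keras classifier and `customer` a dict of already-preprocessed numeric features; the names are just placeholders, not anything from the book.)

```python
import numpy as np

def approval_probability(model, features):
    """Score one applicant: return the model's approval probability."""
    x = np.array([list(features.values())], dtype=np.float32)
    return float(model(x).numpy()[0, 0])

baseline = approval_probability(loan_model, customer)

# Re-query the model with one numeric feature nudged at a time,
# to see which change moves (or flips) the decision the most.
for name in customer:
    tweaked = dict(customer)
    tweaked[name] *= 1.10  # e.g. a 10% higher salary
    delta = approval_probability(loan_model, tweaked) - baseline
    print(f"{name}: {delta:+.3f}")
```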


Thanks
Don.
 
Author
Posts: 24
Hey Don,

What you're referring to here is a lack of model interpretability. Deep neural networks are notorious for this. This is one reason institutions that deal directly with customers (e.g. banks and insurers) like to stick to simple models such as linear regression. If the inputs are correctly normalized, the weights of a linear regression model can be used to explain the importance of each feature. The method you described (changing input features and analysing the output) is another way, known as partial dependence plots.
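
To make that concrete, here is a rough sketch of a one-feature partial dependence curve. `loan_model` and the preprocessed feature matrix `X` are placeholder names, not code from the book:

```python
import numpy as np

def partial_dependence(model, X, feature_index, grid):
    """Average model output as one feature is swept over a grid of values."""
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_index] = value  # force this feature to the same value for everyone
        averages.append(float(np.mean(model.predict(X_mod, verbose=0))))
    return averages

# Sweep the first feature (say, salary) across its observed range.
salary_grid = np.linspace(X[:, 0].min(), X[:, 0].max(), num=20)
pd_curve = partial_dependence(loan_model, X, feature_index=0, grid=salary_grid)
```

Plotting `pd_curve` against `salary_grid` shows how the average approval score responds to that one feature while everything else is left as-is.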

However, better techniques have emerged to explain more complex models. For example, Shapley values are one of the most popular approaches right now (https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf).
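
The open-source `shap` package implements the ideas in that paper. A hedged sketch of how it is typically used (the exact API may differ between versions, and `loan_model`, `X_background`, and `X_explain` are placeholders):

```python
import shap

# KernelExplainer treats the model as a black box; a small background
# sample of training rows keeps the estimation tractable.
explainer = shap.KernelExplainer(lambda x: loan_model.predict(x, verbose=0), X_background)
shap_values = explainer.shap_values(X_explain)

# Each row attributes the prediction to individual input features, so a
# rejected applicant can be shown which features pushed the score down.
shap.summary_plot(shap_values, X_explain)
```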

In this book I pay special attention to model interpretability where it's needed. For example, in Chapter 6 I talk about a technique called Grad-CAM, which is a good interpretation technique for image classification models.
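
For anyone curious, the core idea behind Grad-CAM can be sketched in a few lines of TensorFlow 2. This is not the book's code; the model, image batch, and layer name below are placeholders:

```python
import tensorflow as tf

def grad_cam(model, img, last_conv_layer_name, class_index):
    # Model that maps the image to the last conv feature maps and the final predictions.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img)
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)       # d(score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))       # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)                              # keep only positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy() # normalised heatmap

heatmap = grad_cam(model, img, "conv5_block3_out", class_index=0)  # layer name is just an example
```

The normalised heatmap can then be resized and overlaid on the input image to show which regions drove the prediction.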
 
Don Horrell
Ranch Hand
Posts: 35
Thank you for those thoughts.
I have also just found this:
https://christophm.github.io/interpretable-ml-book/intro.html

Lots to think about.


Cheers
Don.
 