There has been some discussion that ML models are generally black boxes, offering little or no insight into why certain inputs produce certain labels or outputs.
Does reinforcement learning produce models that are any more understandable than the often opaque "computer says no" neural networks produced using other ML techniques?
Thanks
Don.