Making authoritative statements about areas of which one has little knowledge is commonplace, particularly on the internet. (Pope’s “A little knowledge is a dangerous thing” comes to mind – though the original quote is “learning” rather than “knowledge”.) Can AI help here – or does AI make matters worse?
It is now exceptionally easy to apply AI to many problems: Google (and others) have made AI tools extremely easy to use, and Google’s search engine now provides an AI summary right at the top of most searches. But using AI to interpret data without a detailed knowledge of the area the data comes from – or where one lacks an understanding of how the data was produced and of the implicit biases in it – makes the user liable to accept AI’s misinterpretation of that data.
Accepting such misinterpretations will only strengthen whatever (inappropriate) views the naive user already had. Dunning-Kruger raises its head again.
Another old version of this is a proverb (possibly Arabian, certainly a few hundred years old at least):
He who knows not, and knows not he knows not, is a fool; shun him.
He who knows not, and knows he knows not, is simple; teach him.
He who knows, and knows not he knows, is asleep; awaken him.
He who knows, and knows he knows, is wise; follow him.
Of course, it may be that this post is itself an example of the Dunning-Kruger effect! (Obviously, I don’t think so, or I wouldn’t be posting it, but still…)