I think we can safely say: the topic of A.I., especially the large language models behind systems like ChatGPT, has struck a chord with mankind.
It seems to be everywhere, and everywhere such systems promise that more work can be done using them.
I am not just an M.D. (with a particular and deep understanding of neurophysiology); I am also a trained IT professional (I completed my training more than 20 years ago) who, despite his medical work, follows developments in the IT field very closely.
From the standpoint of neurophysiology AND information technology, I can safely say that the quality of the results provided by the A.I. systems currently in use is not great.
In some areas, where it is purely about pattern recognition, a clear benefit can be shown, but only in SOME.
What I do not understand, however, is how people can be so trusting.
- A.I. systems find correlations or causalities where people do not
- whether those findings are accurate, however, we do not know, because we do not know HOW the A.I. system found them
- even though we do not understand how A.I. systems reach their conclusions, in most cases we do not try to validate those findings; we simply trust them, without asking
So we use a system we do not understand, which provides us with data we do not understand, and we trust it without trying to validate it.
Is it only me who thinks that this is a really bad idea?
1E Player since 1995. 2E Player since 2007.
Dominion Article Series Thread | Delta Quadrant Article Series Thread | Trade Thread