This post was written after reading an article from Bill Gates' blog - link.
Dear Bill
The potential dangers of AI are an interesting topic, and many people are trying to diagnose the real scale of the threats posed by new AI-based technologies. My concerns in this regard are quite justified. Why? It seems to me that some developers of AI-based technologies are not able to fully predict the consequences of their work. To explain this better, I will use an example.
The creators of the atomic bomb, while constructing this technology, tried to anticipate its potential effects. Certainly, many people wrote many studies and reports on the potential effects of using this new technology, which were supposed to explain what dangers the use of the atomic bomb would bring. Certainly, there were also many studies and reports that tried to answer the question of whether the unwanted dangers of using the atomic bomb could be controlled.
Unfortunately, none of those studies and reports reflected the real consequences and dangers of using the atomic bomb. All of this we could only learn after Hiroshima and Nagasaki. Only then did we find out how much the prepared reports on unwanted threats were worth. It seems to me that it is the same with AI-based technology - we will not know the consequences of its use until after we implement the technology and make it widely available. Only then will we find out what our AI threat reports failed to include, and what the danger looks like in the real world rather than on paper.
My concern is not only about the risks themselves and our ability to manage them. If we use AI in an unreflective way, we may lose some of our humanity. Our sensitivity will be destroyed. Societies will begin to produce individuals who could confidently be classified as a "cyber-idiots" generation. This means that our civilization will slowly lose certain attributes of humanity. We will no longer develop our own tastes, our own opinions on what attracts us in art. These activities will slowly be taken over by AI. It will be AI that introduces trends and generates suggestions about what is in vogue and what is going out of fashion. We will lose the ability to decide our own tastes.
Taking this a step further, it could mean that by relying on the "choices and suggestions" of artificial intelligence, humans may lose the desire for the emotional development that undoubtedly accompanies the process of creating art. Such dangers are already noticeable, especially in education, where some students prefer to use ChatGPT to write an essay rather than develop their own writing skills. In a word, perhaps such solutions, which are supposed to create new opportunities, will instead have a dysfunctional impact on societies' educational processes. This means that society will focus on "producing" consumers rather than developing leaders and creators. And art is only one area of influence in our lives - what about the other areas?
So, do we need to change our attitude towards new challenges - AI-based technologies? It seems that the process of education, which can now be carried out in virtual reality, must undergo some changes to make it more effective and to teach people to use such solutions consciously. It should be like conscious, purposeful driving - I drive in order to get to place A, not driving for the sake of driving. Bill, in your article you refer to this: with the development of the automobile, man gained new skills. The same should apply to AI solutions, so that they grow and evolve with us. This seems quite difficult, since the developers of these solutions themselves do not know exactly what to expect from their inventions.
Marek Ożarowski