An intellectual revolution: what ChatGPT heralds for humanity

Generative artificial intelligence is a philosophical and practical challenge on a scale not seen since the Enlightenment

The new technology promises to transform human cognition more profoundly than anything since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought widely available and quick to transmit. But the new technology reverses that process. Whereas the printing press caused modern human thought to flower and proliferate, the new technology achieves its distillation and refinement. In the process, it opens a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and of interaction with machines will need to be developed. That is the essential challenge of the age of artificial intelligence.

The new technology is known as generative artificial intelligence; GPT stands for Generative Pre-Trained Transformer. ChatGPT, developed at the OpenAI research lab, is now able to converse with humans. As its capabilities expand, they will redefine human knowledge, accelerate changes in the fabric of our reality, and reorganize politics and society.

Generative artificial intelligence is a philosophical and practical challenge on a scale not seen since the Enlightenment. The printing press allowed scholars to replicate one another's findings quickly and share them. The unprecedented consolidation and dissemination of information gave rise to the scientific method. What had previously been impenetrable became the starting point for ever-faster inquiry. The medieval interpretation of the world, grounded in religious faith, gradually eroded. The depths of the universe could be probed until new limits of human understanding were reached.

Possibilities of artificial intelligence
Generative artificial intelligence likewise opens revolutionary avenues for human reason and new horizons for consolidated knowledge. But there are categorical differences. Enlightenment knowledge was achieved progressively, step by step, and each step was testable and teachable. AI systems start at the other end. They can store and process enormous amounts of information: in ChatGPT's case, most of the text available on the Internet and a large number of books, billions of items in all. Holding and processing that volume of information is beyond human capability.

Sophisticated AI methods produce results without explaining why or how the process that produced them works. A GPT computer receives a prompt from a human, and the trained machine responds with a competent text within seconds. It can do so because it has pregenerated representations of the vast body of data on which it was trained. Because the process by which it created those representations was developed by machine learning, which maps patterns and connections across enormous volumes of text, the precise sources and reasons for any one representation's particular features remain unknown. Nor is it known how the trained machine stores, processes, and retrieves its knowledge. Whether that process will ever be revealed is itself unknown, so the mystery of machine learning will challenge human cognition for the foreseeable future.

The capabilities of artificial intelligence are not static; they expand exponentially as the technology advances. Recently, the complexity of AI models has been doubling every few months. Generative AI systems therefore have capabilities that remain undiscovered even by their inventors. With each new system, its builders create new capabilities without understanding their origin or destination. As a result, our future now holds an entirely new element of mystery, risk, and surprise.

Enlightenment science accumulated certainties; the new artificial intelligence generates cumulative uncertainty. Enlightenment science advanced by making mysteries explicable, delineating the ever-shifting boundaries of human knowledge and understanding. The two faculties moved in tandem: a hypothesis was understanding ready to become knowledge; induction was knowledge turned into understanding. In the age of artificial intelligence, riddles are solved by processes that remain unknown. This disorienting paradox renders mysteries unmysterious yet also unexplainable. In essence, highly sophisticated AI furthers human knowledge but not human understanding, a phenomenon at odds with almost all of post-Enlightenment modernity. At the same time, artificial intelligence coupled with human reason is proving to be a more powerful means of discovery than human reason alone.

Thus the essential difference between the Age of Enlightenment and the Age of Artificial Intelligence is not technological but cognitive. After the Enlightenment, science was accompanied by philosophy. Doubts and insecurities provoked by confusing new data and counterintuitive conclusions were allayed by comprehensive explanations of human experience. Generative AI is similarly poised to create a new form of human consciousness. As yet, however, this possibility exists in colors for which we have no spectrum and in directions for which we have no compass. No political or philosophical framework has emerged to explain and guide this new relationship between human and machine, leaving society relatively rudderless.

ChatGPT is an example of what is known as a large language model, which can be used to generate human-like text. GPT is a type of model that can learn automatically from large volumes of text without the need for human supervision. ChatGPT's developers fed it a massive amount of textual content from the digital world, and computing power allows the model to capture the patterns and relationships within it.

The ability of large language models to generate human-like text was an almost accidental discovery. These models are trained to predict the next word in a sentence, which is useful for tasks such as autocompletion in text messages or Internet searches. But it turns out that the models also have an unexpected ability to produce highly articulate paragraphs, articles, and, in time, perhaps books.
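The training objective described above, predicting the word that follows, can be sketched in a few lines. The following is a deliberately tiny illustration (a bigram frequency table over a made-up corpus), not how a transformer actually works; real models learn dense representations over billions of examples, but the idea of generating text by repeatedly predicting the next word is the same.

```python
from collections import defaultdict

# Toy corpus standing in for the vast text a real model is trained on.
corpus = ("the printing press spread knowledge and the press spread ideas "
          "and the press changed the world")

# Count, for each word, which words follow it and how often (a bigram table,
# a drastically simplified stand-in for a model's learned representations).
follows = defaultdict(lambda: defaultdict(int))
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    candidates = follows[word]
    return max(candidates, key=candidates.get) if candidates else None

def generate(start, length=5):
    """Generate text by repeatedly predicting the next word."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(predict_next("press"))  # "spread" follows "press" more often than "changed"
print(generate("the"))
```

Even this trivial table produces locally plausible word sequences; scaling the same objective up to transformer models trained on Internet-scale text is what yields fluent paragraphs.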

ChatGPT goes beyond the basic large language model by using human feedback to tune the model so that it generates more natural conversational text, and to curb its tendency to give inappropriate responses (a major problem for large language models). ChatGPT instantly converts its representations into unique responses. As a result, the interlocutor gets the impression that the AI is relating stationary collections of facts to dynamic concepts.
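The role of human feedback can be caricatured in code. The sketch below (hypothetical responses and rating values, not anything from OpenAI's actual pipeline) merely reranks fixed candidates by human scores; real systems go further, training a reward model from human rankings and then fine-tuning the language model against it with reinforcement learning.

```python
# Hypothetical human ratings of candidate responses (assumed values,
# for illustration only; real pipelines collect comparative rankings).
human_ratings = {
    "You figure it out.": 0.1,
    "Here is a step-by-step explanation of the topic.": 0.9,
    "An off-topic, inappropriate reply.": 0.0,
}

def pick_response(candidates, ratings):
    """Prefer the candidate that humans rated highest."""
    return max(candidates, key=lambda c: ratings.get(c, 0.0))

best = pick_response(list(human_ratings), human_ratings)
print(best)  # the helpful, well-mannered candidate wins
```

The point of the sketch is the direction of the pressure: human judgments, however encoded, steer the model toward responses people find natural and appropriate.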

ChatGPT’s responses, statements and observations appear without any explanation of where they came from and without an identifiable author. On its face, ChatGPT has no discernible motives or biases. Its outputs are complex, and it works astonishingly fast: within seconds it can produce answers that coherently explain a high-level topic. They are not simply copied from text in the computer’s memory; they are generated anew by a process humans cannot replicate. It can incorporate hypotheticals and non-obvious psychological inferences. It can prioritize among billions of data points to select the single set of two hundred words that is most relevant (or will appear most relevant to a human reader). In its own words, it makes probabilistic judgments about future outcomes, blending information from separate domains into an integrated answer. It appears to accept contrary positions while presenting them as compatible. It imitates other writing without copying it. Although the model is incapable of understanding in the human sense, its outputs reflect the underlying essence of human language.

We asked ChatGPT to explain its capabilities. Can it predict the future?

“No,” it answered. “I can only provide information and answer questions based on the data I have been trained on. I do not have personal experiences or consciousness, so I cannot make predictions or forecasts about future events.”
