The internet is full of exciting posts about how language assistants like ChatGPT will change everything. These range from the claim that developers will no longer be needed to the idea that artificial intelligence (AI) may soon wipe us out. In this article, we want to clarify how these language assistants work and what we can realistically expect or fear from them.
AI systems like ChatGPT are in the spotlight in early 2023 and are all over the media. Even the US satirical magazine The Onion considers the topic relevant enough to joke about, and mothers recommend such assistance systems to their software-developing daughters. But first things first: what does such a system actually do?
ChatGPT is a language assistant that responds to requests written in natural language. A good example can be seen in Figure 1: we ask about the Lendbreen glacier in Norway, and ChatGPT answers with a short text.
Fig. 1: ChatGPT answers the question “What is the significance of the Lendbreen glacier in Norway?”
As in a good conversation, ChatGPT keeps its answers short, and follow-up questions are possible. We can refer back both to what has already been asked and to the answers given. In Figure 2, we continue by asking about the attraction for tourists and refer to the glacier simply as “glacier.” As in a conversation with a human partner, ChatGPT understands that we are likely referring to this specific glacier, which we were just talking about.
Fig. 2: We can continue the conversation and ask more questions. ChatGPT still has the context of what has been discussed.
ChatGPT, the Google Killer
ChatGPT is sometimes referred to as an alternative to Google search, but ChatGPT is not really a search engine. It does not look up possible sources on the internet but instead generates all answers itself.
Naturally, ChatGPT is also capable of explaining the difference between a search engine and a language assistant, as seen in Figure 3. Such a language assistant actually feels more natural than a search engine. This tweet gives a nice example of an older lady who uses a search engine more like a language assistant.
Fig. 3: ChatGPT can also provide information about itself and compare itself with other approaches
Language assistants like ChatGPT are sometimes derogatorily referred to as stochastic parrots because of the assumption that they merely repeat things. Their answers are not based on a thinking process in the human sense. Instead, a complex neural network calculates probabilities for the most suitable next word, which is then emitted as part of the answer. It is almost like a game with the cell phone keyboard: you start a sentence and always take the middle, most likely word from the automatic suggestion list. Rarely does something meaningful come out, but usually something absurdly funny does.
That’s not the case with ChatGPT. The generated results are usually of impressive quality, and previously asked questions and answers are included in the calculation. This quality is due to the complexity and sheer size of the system: earlier, smaller systems of similar design produced much less impressive results. It seems that size actually does matter in this case.
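To make the next-word principle more concrete, here is a minimal sketch in Python using the small, openly available GPT-2 model from the Hugging Face transformers library. It only illustrates the mechanism described above; ChatGPT works on the same principle but is vastly larger and additionally fine-tuned with human feedback. The prompt text is just an example.

# A minimal sketch of next-word prediction with the small, open GPT-2 model.
# ChatGPT follows the same principle, but is vastly larger and further trained
# with human feedback.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = "The Lendbreen glacier in Norway is"
for _ in range(10):
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits
    # always pick the most probable next token, like always taking the
    # middle suggestion on a phone keyboard
    next_id = int(logits[0, -1].argmax())
    text += tokenizer.decode(next_id)

print(text)

Real systems usually do not always take the single most probable word but sample from the probability distribution, which makes the generated text more varied.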
What can be done with ChatGPT?
ChatGPT is capable of general language tasks; what exactly is to be done is described in the request itself. This distinguishes ChatGPT from systems like DeepL, which only handle one specific task, in DeepL’s case translation.
It is also possible to refer to texts on the internet through links. This can be used to summarize texts or ask specific questions about them. A request like
Can you please translate and summarize this text for me based on the most important facts: https://de.wikipedia.org/wiki/Angela_Merkel
yields good results as can be seen in Figure 4. Further questions about the text or the summary are also possible.
Fig. 4: ChatGPT can perform complex tasks on links given
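For completeness, such a request can also be sent programmatically. The following is a rough sketch using the openai Python package as it was available in early 2023; the model name gpt-3.5-turbo and the placeholder API key are assumptions for illustration and are not necessarily what powers ChatGPT itself.

# A sketch of sending a similar request through OpenAI's API (early 2023).
# The model name and the placeholder API key are assumptions for illustration.
import openai

openai.api_key = "sk-..."  # replace with your own API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Can you please translate and summarize this text for me "
                   "based on the most important facts: "
                   "https://de.wikipedia.org/wiki/Angela_Merkel"
    }],
)
print(response.choices[0].message.content)

Follow-up questions can be asked by appending the previous messages and the new question to the messages list.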
ChatGPT can also help with studying a topic. For a given link, you can have it generate the ten most important questions along with matching answers. This makes not only the student’s heart but also the teacher’s heart beat faster.
Conversations on many topics are also possible with ChatGPT. Helena Sarin, one of the most well-known artists in the AI world, describes a conversation with ChatGPT as intelligent and refreshing. You can ask any silly question and not have to worry about your own reputation or fear being laughed at.
However, there are currently limitations when operating such systems from within the EU: for the foreseeable future, systems like these will not run on-site but only in data centers, and those are currently located in the USA, where the technology originates. This means that you cannot simply upload or link confidential data to such systems from outside the USA.
In addition, the amount of context that can be taken into account is limited. Long texts or many documents cannot be meaningfully summarized, let alone queried. Speculations about an imminent new dimension of capacity for these models are circulating, but they are unfortunately not credible.
In combination, these two limitations mean that many useful and exciting use cases must be postponed for the time being. This includes deployments in a legal context (both limitations apply) or the evaluation, querying, and summarization of scientific articles (where at least the limited capacity is a problem).
So, have we reached Artificial General Intelligence (AGI)?
So far, the Turing test has been considered the criterion for an intelligent system. Simply put: do you notice in a chat that a machine is your conversation partner, or not? This is illustrated in Figure 5.
If this definition is taken as a basis, it can indeed be argued that a system like ChatGPT fulfills it in many cases. Experiments even suggest that its IQ is only slightly below the human average. To what extent this speaks for the intelligence of ChatGPT or against the IQ test is left open.
Fig. 5: Can a machine fool us into believing it to be a human?
However, there are increasing doubts about the relevance of the Turing test. A typical criticism reads: “wake me up when all these AGI systems demonstrate critical thinking and curiosity.” These qualities are missing, as is, more generally, any motivation to do or to question anything. Where such qualities should come from remains an unresolved question.
But often such a question aims in a completely different direction: will such systems wipe out humanity, or at least take over our jobs? In fact, it is not foreseeable that such a system could endanger our existence as humanity. However, using such a system does make it clearer which abilities are genuinely human and which ones machines could also cover: an AI system can make suggestions, but the decision lies in human hands.
Even the work of artists and writers may often consist of choosing from suggestions. Whether these suggestions come from humans or machines is secondary. William S. Burroughs describes his work mainly as a selection: “Out of hundreds of possible sentences that I might have used, I chose one.”
This also applies to software development. It is possible to have the next line of a program suggested or even to generate entire program parts from a description, as the sketch below illustrates. But what should be programmed and why a particular suggestion was accepted remains the responsibility of the programmers. An AI can support humans, but not replace them.
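As an illustration of what such a suggestion can look like (a made-up example, not the output of any specific tool): given a comment and a function signature, an assistant in the style of GitHub Copilot might propose a body like the one below. Accepting or rejecting it remains the developer’s decision.

# Illustration only: the developer writes the comment and the signature,
# an AI assistant might suggest the body. Accepting it is a human decision.

# Convert a temperature from Celsius to Fahrenheit.
def celsius_to_fahrenheit(celsius: float) -> float:
    # suggested completion:
    return celsius * 9 / 5 + 32

print(celsius_to_fahrenheit(100))  # 212.0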
Is it all just hype?
This leads us to the central question of the article: are we dealing with a change that is transforming the internet or is it all just hype?
Yann LeCun, a star of the AI scene, is unfazed and says that it is all nothing new, just well executed. And that is exactly the point: companies like Facebook, Amazon, Apple, Microsoft, and Google have so far failed to turn their existing research into a product available to a wider audience. Doing so is a significant accomplishment, but it is both financially and technically feasible for these companies. However, they are only now getting started.
Since more powerful systems are waiting in the labs of these large companies, we can expect innovations and advancements in 2023 that will also be available to most internet users. Google is bringing its founders back from retirement to counter the perceived threat from ChatGPT and has announced a competitor. The giant Microsoft is entering into a strategic partnership with the dwarf OpenAI to offer its systems on Microsoft’s Azure cloud platform; this seems to be worth a double-digit billion dollar amount to Microsoft. Perhaps we will soon see such systems in European data centers as well.
So, is it all just exciting and great?
The creation of language assistants like ChatGPT requires a massive amount of text, which currently only the internet can provide. Thus, such a system also inherits the weaknesses of the internet: it reproduces what is on the internet. But not everything on the internet is accurate or presentable, and so not every answer from the system is correct. Worse yet, it has no idea when it is talking nonsense. Even with simple arithmetic and logical reasoning, the system often fails.
And precisely because a system can fool us about whether we are dealing with a human or a machine, we would like to be told who we are communicating with. Such checks are offered, each with its own limitations. OpenAI itself offers software that, for a given text, estimates whether it was likely written by a human or generated by an AI system. Such a tool can also be used to check the origin of a text, for example when students submit a term paper.
Conclusion
2023 is the year of large language models (LLMs) like ChatGPT. New forms of communication with computers and the internet are becoming apparent. This is not limited to particularly tech-savvy people but is available to any internet user. There are already instructions on how to use such systems efficiently, similar to the “how to google” tutorials of the 2000s.
In addition to OpenAI – the maker of ChatGPT – Microsoft, Google, Amazon, and Apple will launch similar systems or at least enter partnerships in 2023, driving competition and development. It is therefore foreseeable that existing limitations such as the lack of data protection and the limited context will improve over the course of the year. What exactly to expect in this area by the end of 2023 is anyone’s guess.
Oliver Zeigermann is the head of artificial intelligence at the German consulting company OPEN KNOWLEDGE (https://www.openknowledge.de/). He has been developing software with different approaches and programming languages for more than three decades. In the past decade, he has focused on machine learning and its interaction with humans.