An interview by Nadine with Peter

What is ChatGPT?

A so-called artificial intelligence that you can converse with via chat. Think of the chat as similar to Signal, Threema, or WhatsApp, except that with ChatGPT you are chatting with an artificial intelligence (AI). ChatGPT was developed by the company OpenAI, which had already caused a stir with its painting program DALL-E2, an AI that can generate images from keywords. OpenAI is actually quite open about its technology.

ChatGPT, on the other hand, works a bit differently. You can ask it questions and it will provide answers. If you are friendly to the program, you can ask: "ChatGPT, please write me a nice letter to my mother telling her about my great vacation in Spain." It will then produce a pretty well-written letter in response. You can also have a personal conversation with ChatGPT: if you ask it nicely how it is doing, you will get a nice answer.

So similar to Alexa?

Sort of, although Alexa is nowhere near as powerful as ChatGPT. It's quite impressive how accurate its results are.

Why do you say you ask ChatGPT "nicely"?

Good question; that's just how I do it, which perhaps shows that a kind of humanization is already happening at this point. I treat my chat partner in a friendly manner, as if I were chatting with a human being. This effect isn't limited to computers; people also do it with kitchen appliances or vacuum cleaners (ours is called Marvin). I don't think it's a bad thing, but you're right, it's noticeable. With the painting program DALL-E2, I have to admit I wasn't so polite in my requests. There, something was simply generated and I couldn't enter into a conversation.

What does the GPT actually stand for?

It stands for Generative Pre-trained Transformer. Asked directly, ChatGPT itself glosses this as a "generatively pre-trained transformer".

Do we still need search engines?

The biggest use case for ChatGPT will probably be search. You no longer have to sift through a list of results, as with conventional search engines, and pick the right one; instead you get a direct answer from ChatGPT. If that answer doesn't fit yet, I can refine it by asking ChatGPT a follow-up question. Hopefully, the results will be correct.

What do you mean by hopefully?

Well, an artificial intelligence can only be as smart as the material it learns from. And as we all know, not everything on the Internet is correct.

So ChatGPT pulls its answers straight from the Internet?

No, it draws its answers from the data it was trained on. Behind this is a multi-stage, complex training scheme that also includes a form of quality control by humans. Put simply, the system learns on its own, humans evaluate its results, and those evaluations feed back into the training.
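That feedback loop, where a system proposes answers and human ratings steer its future behavior, can be sketched in miniature. This is a deliberately crude toy (the candidate answers, the scoring function, and the tally-based update are all invented for illustration), not OpenAI's actual training method:

```python
import random

# Toy sketch of learning from human feedback: the "model" picks a
# candidate answer at random, a stand-in human rates it, and good
# ratings accumulate as a score that determines the preferred answer.
candidates = {"answer A": 0.0, "answer B": 0.0, "answer C": 0.0}

def human_rating(answer):
    # Stand-in for a human reviewer; here "answer B" is always preferred.
    return 1.0 if answer == "answer B" else 0.0

for _ in range(100):
    answer = random.choice(list(candidates))
    candidates[answer] += human_rating(answer)  # feedback updates the scores

best = max(candidates, key=candidates.get)
print(best)  # with overwhelming probability: answer B
```

Roughly speaking, in the real system human reviewers rank model outputs, and those rankings are used to adjust the model itself rather than a simple tally.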

ChatGPT learned its knowledge from about 45 TB of text, roughly ten times the holdings of the German National Library. The sources included social networks, online forums, news articles, books, and transcriptions of spoken language. So there may well be some misinformation in those texts.

How will ChatGPT change the way we work?

I think the good part is that we have a new way of searching: a simpler, more natural one. You get answers relatively quickly, without reading through five pages that don't fit and without clicking through a thousand cookie consents and ads. That is genuinely pleasant.

And what ChatGPT is also very good at is writing letters, or texts in general. You get very good results when you ask, for example: "Please write me a friendly cancellation for the appointment at ….." It is really perfect for that. I think many people will start entrusting important things to this system, for example answering business correspondence. If you take that further, it may well end up with AI responding to AI. Where are the boundaries here? I don't think that's bad per se, but humans tend to take the easy way out, and that's not always the best or smartest option.

The danger, however, is that ChatGPT will be seen as omniscient. People like to assume that machines are right, or that the Internet in general is right. The necessary skepticism is often lacking, and probably also the competence needed to put the results into context.

Is this the biggest danger that ChatGPT or AI poses?

The biggest danger I see right now is that Microsoft wants to incorporate ChatGPT into its Office suite.

We already have the problem that students have their papers and schoolwork written by ChatGPT. I fear that if it is built into the Office suite, business correspondence will also run through ChatGPT. The danger is that eventually only the AIs will be talking to each other.

The second major danger is that many texts on the Internet are already generated by AI. If you consider that artificial intelligences are trained on information from the Internet, the logical consequence is that at some point an AI will be trained on information it generated itself.

You can look this up: it is already the case today that stock market news, for example, is written automatically by so-called artificial intelligence, and funny effects arise in the process. A prominent example is Tesla's stock split a while back. One share was turned into two, which initially reduces the unit price; that is completely normal for such a split. Everyone who held one share got two shares in return. Each share is then only worth half as much as before, but you hold twice as many, so everything balances out. Anyone can read up on why companies split their shares, or simply ask ChatGPT. (laughs)
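The arithmetic of such a split is easy to check. A tiny illustration with made-up numbers (the pre-split price is invented) shows that a 2-for-1 split doubles the share count, halves the per-share price, and leaves the holder's total value unchanged:

```python
# Toy illustration of a 2-for-1 stock split with invented numbers.
shares_before = 1
price_before = 300.0                 # hypothetical pre-split price per share
value_before = shares_before * price_before

shares_after = shares_before * 2     # one share becomes two
price_after = price_before / 2       # each share is worth half as much

value_after = shares_after * price_after
print(value_before, value_after)     # 300.0 300.0: nothing gained or lost
```

Which is exactly why a headline treating the halved price as a crash misreads the data.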

Now, we know that when stocks fall, news of crashing stocks appears in the press. Although Tesla's split was a perfectly normal process, the headlines read: "Tesla stock plummets!" An AI wrote this quasi-news from the data it had ingested; there was no editor behind it anymore. The AI read out the figures and wrote an article about them that was as lurid as possible. Now imagine another AI looking at exactly that article and drawing conclusions or learning from it. That will not end well for us. The question is where and how AI is used.

Can I recognize messages and essays from ChatGPT?

I believe there is now a vendor claiming to be able to recognize texts written by AI, though I haven't looked into it in detail. In my experience, if you ask an AI fairly similar questions, you will see a pattern in the answers, and this should hold across different users. If I ask ChatGPT something and it generates an answer, say a letter, and you enter the same or a very similar question, the result will look quite similar. I think certain schemas can be recognized from that. Of course, the question is to what extent OpenAI will improve this, and how the artificial intelligence behind it changes. Does it learn continuously, over and over, or does it learn from the feedback users can give on results in ChatGPT? It appears that improvements are being made and the AI is learning; OpenAI's website about ChatGPT says as much. In general, though, I think it is relatively difficult to detect ChatGPT from the text alone.


Thank you for the interview