GPT-3 is widely described as the largest artificial neural network created to date.
This breakthrough from the artificial intelligence (AI) research and deployment company OpenAI, co-founded by Elon Musk and Sam Altman, is estimated to have cost over 4 million dollars. Nevertheless, the investment has proven worthwhile, as it opens up a world of possibilities for AI beyond our current imagination.
What is GPT-3?
GPT-3 stands for Generative Pre-trained Transformer 3. It’s a deep learning model whose algorithms recognise patterns in data and learn from examples. On this account, it’s considered to be an artificial neural network with long-term memory.
GPT-3 uses these algorithms to generate text. They have been trained beforehand on a huge database, and the model assesses and processes all the input it receives in order to fill in the information gaps.
GPT-3 has been described as the most important and useful breakthrough in artificial intelligence in years. Despite still being in its beta version, it appears to be the most powerful AI model currently available.
It’s capable of generating whole texts by starting with just a single sentence as input and then completing the rest of the writing. To achieve this, it handles more than 175 billion parameters. This is a very relevant fact, since its previous version GPT-2, which was launched in 2019, handled only around 1.5 billion parameters. The progress attained in just a year has been amazing.
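The core idea of completing text from a prompt can be sketched with a toy next-word predictor. The bigram model below is an illustration only, nothing like GPT-3’s 175-billion-parameter transformer, but it shows the same loop: given the text so far, predict the most likely next word and append it.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words tend to follow it."""
    words = corpus.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def complete(prompt: str, model: dict, max_words: int = 8) -> str:
    """Extend the prompt one word at a time with the most likely successor."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = model.get(words[-1])
        if not candidates:
            break
        # GPT-3 samples from a learned probability distribution over tens of
        # thousands of tokens; this toy model just takes the most frequent follower.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

model = train_bigrams("the model reads the prompt and the model writes the answer")
print(complete("the model", model, max_words=4))  # → "the model reads the model reads"
```

The toy model only ever looks at the single previous word; GPT-3’s advantage comes from conditioning on thousands of preceding tokens at once.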
GPT-3 can translate texts into other languages and adapt them to different writing styles, such as that of a newspaper article, a fiction novel, etc. It can also write poetry or give us the best answer to any question we pose to it.
In a nutshell, GPT-3 can cope with anything that’s structured like a language: it can answer questions, write essays, summarise long texts, translate, take notes, and even write computer code.
Yes, you have read that correctly: GPT-3 can also write code. To many people’s astonishment, it has proven capable of driving a plug-in for Figma, a software tool that is commonly used in app and website design. This feature could have momentous implications for the way in which software is developed in the future.
The sheer amount of things it’s capable of doing may seem quite incredible, but its potential skills are even more astounding.
⭐ Keep reading | Augmented Intelligence: humans and AI joining forces
How does GPT-3 work?
To train it and achieve operational capacity, GPT-3 has been fed information ranging from Wikipedia texts selected by OpenAI to around 750 GB of the CommonCrawl corpus, a publicly available dataset collected by crawling the Internet. A huge amount of computing resources and approximately 4.6 million dollars have been invested just to put GPT-3 through this training.
Its algorithmic structure is designed to take a linguistic input and return as output its best prediction of the most useful response for the user. GPT-3 can make these predictions thanks to its exhaustive training on such a large database. This is the key aspect differentiating it from other algorithms, which cannot make such predictions.
To produce texts and sentences, it employs a semantic analytics approach that goes beyond the meaning of individual words, also taking into account how combining them with other words shifts their meaning depending on the global context in which they appear.
The way GPT-3 learns is known as unsupervised learning. This means that it has not been given feedback on whether its answers are correct or incorrect during its training. GPT-3 obtains all the information it needs from analysing the texts that make up its database.
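This is why no hand-made labels are needed: every position in raw text supplies its own “correct answer”, namely the word that actually comes next. A minimal sketch of how training pairs can be carved out of unlabelled text (the window size of 3 is an arbitrary choice for illustration):

```python
def make_training_pairs(text: str, context_size: int = 3):
    """Slide a window over raw text, yielding (context, next_word) pairs.

    No human labelling is involved: the text itself provides the targets.
    """
    words = text.split()
    pairs = []
    for i in range(len(words) - context_size):
        context = tuple(words[i:i + context_size])
        target = words[i + context_size]
        pairs.append((context, target))
    return pairs

pairs = make_training_pairs("to be or not to be that is the question")
print(pairs[0])  # (('to', 'be', 'or'), 'not')
```

Every sentence in the 750 GB corpus can be turned into such examples automatically, which is what makes training at this scale feasible at all.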
When getting started with a new linguistic task, it will get it wrong millions of times at first but will eventually arrive at the right word. GPT-3 finds out that its choice is the “correct” one by checking it against its original input data. When it’s confident it has found the proper output, it increases the “weight” of the algorithmic process that produced the successful result. This way, it gradually learns which processes are most likely to provide correct answers.
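The weighting idea above can be caricatured in a few lines. This is emphatically not OpenAI’s training code (GPT-3 adjusts billions of real-valued weights by gradient descent); it is a toy where two hypothetical “prediction rules” compete, and whichever rule keeps matching the word that actually came next sees its weight grow:

```python
def rule_repeat_last(context):
    """Guess that the last word simply repeats."""
    return context[-1]

def rule_alternate(context):
    """Guess that the text alternates between two words."""
    return context[-2] if len(context) >= 2 else context[-1]

# Both candidate processes start with equal weight.
rules = {rule_repeat_last: 1.0, rule_alternate: 1.0}

sentence = ["tick", "tock", "tick", "tock", "tick"]

# For every prefix, compare each rule's prediction with the word that
# actually followed, and reward the rules that got it right.
for i in range(2, len(sentence)):
    context, target = sentence[:i], sentence[i]
    for rule in rules:
        if rule(context) == target:
            rules[rule] *= 1.5  # strengthen the process behind the correct output

best = max(rules, key=rules.get)
print(best.__name__)  # rule_alternate wins on this alternating sentence
```

After training, the model’s answers are dominated by the processes that earned the most weight, which is the intuition the paragraph above describes.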
📚 You may also be interested in | 7 ways that data culture will affect businesses in the future
Some of the problems associated with GPT-3
Some of the problems that artificial intelligence specialists have warned about concern its dreadful capability to produce fake news en masse. These algorithms could flood networks with such content, causing widespread misinformation without us even realising what is happening.
You may think that you can distinguish texts written by machines from their human-written counterparts, but a study carried out by Adrian Yijie Xu has produced a surprising result:
“Only 52% of readers detect which texts have been created by GPT-3.”
Therefore, a significant part of the population would be vulnerable to this artificial fake news, believing it to be true and contributing to general misinformation.
Another problem with this technology is that it’s currently a very expensive tool, as it requires a huge amount of computing power to run. Its use is therefore restricted to the very small number of companies that can afford it.
✏️ Recommended article | 5 Ways Edge Computing Will Impact the Future of IoT
The future of GPT-3
OpenAI has not disclosed all the details of how its algorithms work, so anyone relying on GPT-3 for answers or for developing products is working somewhat blindfolded, not knowing exactly how the retrieved information was obtained or whether it can truly be relied on.
The system is promising but still far from perfect: it can produce short texts or basic applications, but on more complex tasks its results are closer to gibberish than to a properly useful answer.
All the same, GPT-3, with all its limitations, has obtained very promising results in quite a short period of time, and we hope it will soon be applied in practical ways to our everyday lives, for instance by improving chatbots or serving as an aid to programmers.