What is ChatGPT and Where Did It Come From?

Written by Abhishek Malhotra
February 19, 2023
Generative AI - ChatGPT


Generative Pre-trained Transformer (GPT) is a state-of-the-art language processing model developed by OpenAI (which recently entered a partnership with Microsoft). GPT models are trained on massive amounts of text data, allowing them to generate human-like text with a high degree of coherence and fluency.

The Evolution of GPT, Now Widely Known as ChatGPT

GPT (or GPT-1) was first introduced in 2018. It used unsupervised learning as a pre-training objective for supervised fine-tuned models, which is how it derived the name Generative Pre-Training. Prior to this, most Natural Language Processing (NLP) models were trained for specific tasks, needed large amounts of annotated data for learning, and could not be generalised to other tasks.

However, GPT demonstrated that a generative model of language can acquire world knowledge and process long-range dependencies by pre-training on a diverse corpus with long stretches of contiguous text. GPT-1 used around 7,000 unpublished books as part of its training data set and had 117 million trainable parameters.

GPT-2 was first announced in February 2019. The full successor to GPT-1 was released in November 2019 and was capable of translating text, answering questions, summarising passages and generating text output indistinguishable from that written by humans. GPT-2 was trained on a dataset of 8 million web pages, which included upvoted articles scraped from the Reddit platform. In addition, GPT-2 used 1.5 billion trainable parameters, more than 10x the number used in GPT-1.

GPT-3, the third-generation language prediction model in the GPT-n series, was introduced in May 2020 and uses deep learning to produce human-like text. GPT-3 is capable of writing articles and can also undertake tasks it was never explicitly trained on, including writing code, database queries and arithmetic calculations from natural language descriptions of the tasks. GPT-3 has 175 billion parameters (100x more than GPT-2), making it one of the largest models in the field of natural language processing. Its training set includes Common Crawl (410 billion tokens), 19 billion tokens from WebText2, 12 billion tokens from Books1, 55 billion tokens from Books2 and 3 billion tokens from Wikipedia. An updated GPT-3.5 was made available in March 2022 as a newer version of GPT-3; its models were described as more capable than previous versions.

GPT-4 is expected to launch in 2023 and to be larger and significantly more powerful than GPT-3.

ChatGPT, likely the platform as you know it, launched in November 2022. ChatGPT is a fine-tuned large language model built on top of GPT-3.5 that uses a conversational, chat-driven approach to provide responses. Its capabilities are not limited to chat: it is far more versatile, with the ability to write and debug computer programs, compose music, write essays and more.

ChatGPT Plus, a premium ChatGPT service priced at USD $20 per month, was launched for US customers in February 2023. There is a plan to launch a professional service subscription at some point in the future, and the free ChatGPT tier will only be available when demand is low.

In summary, GPT models are trained using a technique called unsupervised learning, which means they are not given any specific task to perform. Instead, they learn patterns in the data they are trained on, and can then be fine-tuned to perform specific tasks, such as language translation or text summarisation.
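To make the idea of unsupervised pre-training concrete, here is a deliberately tiny sketch: a bigram model that learns which word tends to follow another, purely from raw text with no labels. Real GPT models use neural networks over billions of tokens, so this toy example (all names and data are illustrative) only captures the core objective of predicting the next token from what came before.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count, for each word, which word follows it in the raw text.

    A toy stand-in for GPT-style pre-training: no task labels are
    given; the model simply learns patterns (here, word-to-next-word
    frequencies) from the unlabelled corpus itself.
    """
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the continuation seen most often during training."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Illustrative miniature "corpus" of unlabelled text.
corpus = [
    "the model predicts the next word",
    "the model learns patterns",
]
counts = train_bigram(corpus)
print(predict_next(counts, "the"))  # → model
```

The same learned statistics could then be "fine-tuned" for a downstream task; in a real GPT model, that step adjusts the network's weights on task-specific examples rather than retraining from scratch.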

GPT models are also capable of performing a variety of language tasks, including text generation, text completion, text classification, and question answering. They can also generate a wide range of text styles, from news articles to poetry.
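One reason a single generative model can cover so many tasks is that each task can be framed as plain text for the model to continue. The sketch below shows this prompting idea; the template wording is purely illustrative and not taken from any specific API.

```python
def build_prompt(task, text):
    """Frame different NLP tasks as plain-text prompts.

    Illustrative templates only: the point is that one generative
    model can be steered toward translation, summarisation,
    classification or question answering purely by how the input
    text is phrased, with no task-specific retraining.
    """
    templates = {
        "translate": "Translate the following English text to French:\n{0}\nFrench:",
        "summarise": "Summarise the following passage in one sentence:\n{0}\nSummary:",
        "classify": "Label the sentiment of this review as positive or negative:\n{0}\nSentiment:",
        "qa": "Answer the question below.\nQuestion: {0}\nAnswer:",
    }
    return templates[task].format(text)

print(build_prompt("summarise", "GPT models are trained on large text corpora."))
```

Whatever the model generates after "Summary:" (or "French:", "Sentiment:", "Answer:") is taken as its answer for that task.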

In addition to their ability to generate human-like text, GPT models have been used in a variety of applications, such as chatbots, automated writing assistants, and content creation. They have also proven useful in the field of information and content retrieval.

However, GPT models have raised concerns about their ability to perpetuate harmful biases and misinformation, which highlights the importance of developing responsible usage guidelines and ethical considerations.

Overall, GPT models have shown remarkable capabilities in the field of natural language processing, but it is important for researchers and practitioners to consider the ethical implications of their use.
