OpenAI, the creator of the world-famous conversational chatbot ChatGPT, has recently unveiled fine-tuning for its GPT-3.5 Turbo model. The feature is meant to let developers build customized models that excel at specific tasks. OpenAI claims that a fine-tuned version of GPT-3.5 Turbo can match and, in some narrow cases, even outperform one of its most advanced natural language processing models, GPT-4.
According to the ChatGPT creator, the GPT-3.5 Turbo model can now be fine-tuned on customers' own data. In the context of artificial intelligence, fine-tuning refers to further training a pre-trained model like GPT-3.5 Turbo on additional examples to produce a specialized model that handles specific tasks well.
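As a concrete illustration, OpenAI's fine-tuning workflow for GPT-3.5 Turbo accepts training data as a JSONL file of example conversations in the chat-message format. The sketch below builds one such example; the system prompt, question, and answer are invented for illustration, and the filename is an assumption.

```python
import json

# One training example in the chat format documented for GPT-3.5 Turbo
# fine-tuning: a list of system/user/assistant messages. The content here
# is purely illustrative.
example = {
    "messages": [
        {"role": "system", "content": "You are a support bot that answers concisely."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Account > Reset password."},
    ]
}

# A training file is simply one JSON object like this per line (JSONL).
with open("training_data.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```

A real job would collect many such lines, upload the file through OpenAI's files endpoint, and then start a fine-tuning job referencing it.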
This announcement means that customers will be able to create a bot based on GPT-3.5 Turbo that is trained to give more reliable, consistent responses or to use concise wording. With fine-tuning, developers can craft AI assistants, documentation engines, or other custom applications that respond in a desired tone.
It is also pertinent to mention that, according to OpenAI, data sent through the fine-tuning API will not be used to train its models. Another key point is that fine-tuned GPT-3.5 Turbo currently supports a context of 4,000 tokens at once; the ability to fine-tune the 16,000-token variant is expected in a later release.
As for pricing, users can fine-tune GPT-3.5 Turbo at a rate of $0.0080 per 1,000 tokens for training. Using the resulting model then costs $0.0120 per 1,000 input tokens and $0.0160 per 1,000 output tokens. In simpler terms, training on a 100,000-token file (roughly 75,000 words) for three epochs costs about $2.40.
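The training cost works out as tokens ÷ 1,000 × epochs × rate. A minimal sketch, assuming the $0.008-per-1,000-token training rate quoted above and the three-epoch default used in OpenAI's own published example:

```python
def training_cost(tokens: int, epochs: int = 3, rate_per_1k: float = 0.008) -> float:
    """Estimate the dollar cost of a fine-tuning job: the training rate is
    charged on every token in every epoch."""
    return tokens / 1000 * epochs * rate_per_1k

# OpenAI's example: a 100,000-token training file trained for 3 epochs.
print(round(training_cost(100_000), 2))  # → 2.4
```

Input and output usage of the finished model is billed separately at the per-token rates above.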
Even though GPT-3.5 Turbo is not the most advanced model OpenAI has developed (GPT-4 holds that title), the former's ability to be fine-tuned is set to make it a lot more popular.