OpenAI is upgrading its text-generating models and lowering prices in response to growing competition in the generative AI market.
OpenAI today announced updated versions of GPT-3.5-turbo and GPT-4, its most recent text-generating AI models, with a new function-calling capability. In a blog post, OpenAI explains that developers can use function calling to describe programming functions to GPT-3.5-turbo and GPT-4 and have the models output the structured arguments needed to invoke them.
Chatbots that answer questions by calling external tools, apps that convert natural language into database queries, and systems that extract structured data from text can all benefit from function calling. OpenAI writes, “These models have been fine-tuned to both detect when a function must be called… and respond with JSON that follows the function signature.” Function calling lets developers obtain structured data from the model more reliably.
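The flow described above can be sketched in Python. This is an illustrative mock, not OpenAI's client library: the `get_current_weather` function, its schema fields, and the mocked reply are hypothetical stand-ins showing the shape of the exchange — the developer describes a function, and the model responds with JSON matching that signature, which the developer then parses and executes.

```python
import json

# A function "described" to the model: a name, a description, and a JSON
# Schema for its parameters. The function and fields here are illustrative.
weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name, e.g. Paris"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

def handle_model_reply(reply: dict) -> str:
    # When the model detects that a function must be called, it replies with
    # JSON following the function signature; the developer parses the
    # arguments and invokes the real function. Otherwise it replies with text.
    if "function_call" in reply:
        args = json.loads(reply["function_call"]["arguments"])
        return f"call {reply['function_call']['name']} with {args['location']}"
    return reply.get("content", "")

# A mocked model reply of the shape the blog post describes.
mock_reply = {
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Paris", "unit": "celsius"}',
    }
}

print(handle_model_reply(mock_reply))  # call get_current_weather with Paris
```

The key point is that the model never runs anything itself; it only emits structured JSON, which keeps the developer in control of execution.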
In addition to function calling, OpenAI is introducing a variant of GPT-3.5-turbo with a significantly expanded context window. The context window, measured in tokens (raw chunks of text), refers to the text the model considers before generating additional text. Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic, often in problematic ways.
The new GPT-3.5-turbo-16k offers four times the context length (16,000 tokens) of the vanilla GPT-3.5-turbo at twice the price: $0.003 per 1,000 input tokens (i.e., tokens fed into the model) and $0.004 per 1,000 output tokens (tokens the model generates). OpenAI says the model can process around 20 pages of text in a single request, well short of the hundreds of pages that the flagship model of AI startup Anthropic can handle. (OpenAI is testing a limited-release version of GPT-4 with a 32,000-token context window.)
On the plus side, OpenAI says the original GPT-3.5-turbo, not the expanded-context variant, is getting a 25% price reduction. Developers can now use the model for $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens, which works out to roughly 700 pages per dollar.
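The per-token pricing above is easy to turn into a cost estimate. A minimal sketch, using only the rates quoted in the announcement (the `request_cost` helper and the example token counts are illustrative, not part of OpenAI's tooling):

```python
# Prices from the announcement, in USD per 1,000 tokens.
PRICES = {
    "gpt-3.5-turbo":     {"input": 0.0015, "output": 0.002},
    "gpt-3.5-turbo-16k": {"input": 0.003,  "output": 0.004},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request: each side is billed per 1,000 tokens."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Example: 1,000 input tokens plus 500 output tokens on the discounted model.
print(round(request_cost("gpt-3.5-turbo", 1000, 500), 6))  # 0.0025
```

The same request on GPT-3.5-turbo-16k costs exactly twice as much, matching the two-to-one pricing the post describes.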
One of OpenAI’s most popular text embedding models, text-embedding-ada-002, is also getting cheaper. Text embeddings measure the relatedness of text strings; they are commonly used for search (where results are ranked by relevance to a query string) and recommendations (where items with related text strings are suggested).
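The search use case works by comparing embedding vectors, typically with cosine similarity. A minimal sketch of that ranking step — the tiny three-dimensional vectors below are made-up stand-ins for real text-embedding-ada-002 output, which has far more dimensions:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product normalized by vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: one for the query string, one per document.
query_vec = [0.1, 0.9, 0.2]
doc_vecs = {"doc_a": [0.1, 0.8, 0.3], "doc_b": [0.9, 0.1, 0.0]}

# Rank documents by relevance to the query, highest similarity first.
ranked = sorted(doc_vecs,
                key=lambda d: cosine_similarity(query_vec, doc_vecs[d]),
                reverse=True)
print(ranked)  # ['doc_a', 'doc_b']
```

Recommendations work the same way, except the "query" vector is the embedding of an item the user already has, and nearby vectors become the suggestions.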
The price of text-embedding-ada-002 has dropped 75% to $0.0001 per 1,000 tokens. OpenAI says the reduction was made possible by increased efficiency in its systems, a key focus for the startup, which spends hundreds of millions of dollars on R&D and infrastructure.
Since the release of GPT-4 in early March, OpenAI has indicated that its focus is on incremental updates to existing models rather than massive new models built from scratch. At a recent conference hosted by the Economic Times, CEO Sam Altman reaffirmed that OpenAI has not begun training GPT-4’s successor, saying the company “has a lot of work to do” before starting that model.