OpenAI Launches GPT-4, Successor To GPT-3.5, The Engine Behind ChatGPT

OpenAI has announced the launch of GPT-4, the successor to GPT-3.5, the model that powers the artificial intelligence chatbot ChatGPT.

GPT-4 is the latest milestone in OpenAI's effort to scale up deep learning. It is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.

OpenAI spent six months aligning GPT-4 using lessons from its testing program as well as from ChatGPT, resulting in its best-ever results (though far from perfect) on factuality, steerability, and refusing to go outside of guardrails.

According to OpenAI, in a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle. The difference comes out when the complexity of the task reaches a sufficient threshold—GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5.

GPT-4 can accept a prompt of text and images, which—parallel to the text-only setting—lets the user specify any vision or language task. Specifically, it generates text outputs (natural language, code, etc.) given inputs consisting of interspersed text and images. Over a range of domains—including documents with text and photographs, diagrams, or screenshots—GPT-4 exhibits similar capabilities as it does on text-only inputs. Furthermore, it can be augmented with test-time techniques that were developed for text-only language models, including few-shot and chain-of-thought prompting. Image inputs are still a research preview and not publicly available.
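To give a sense of what these test-time techniques look like in practice, here is a minimal sketch of few-shot, chain-of-thought prompting against the text-only gpt-4 model via OpenAI's Python client. The worked example, the question, and the API key placeholder are illustrative, not from OpenAI's documentation:

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

# Few-shot, chain-of-thought style prompt: the model is shown one worked
# example that reasons step by step, then asked a new question in the
# same format.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Reason step by step before giving a final answer."},
        {"role": "user",
         "content": "Q: A train travels 120 km in 2 hours. What is its average speed?"},
        {"role": "assistant",
         "content": "Distance is 120 km and time is 2 hours, so speed = 120 / 2 = 60 km/h. Answer: 60 km/h."},
        {"role": "user",
         "content": "Q: A cyclist rides 45 km in 3 hours. What is their average speed?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```

The prior worked example nudges the model to show its reasoning before the answer, which is the chain-of-thought pattern the article refers to.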

Limitations

GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its data cuts off (September 2021), and does not learn from its experience. It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obviously false statements from a user. And sometimes it can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces.

GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it's likely to make a mistake. The model can also have various biases in its outputs.

OpenAI states that GPT-4 and successor models have the potential to significantly influence society in both beneficial and harmful ways. 

Availability

ChatGPT Plus subscribers will get GPT-4 access on chat.openai.com with a usage cap. OpenAI will adjust the exact usage cap depending on demand and system performance in practice, but expects to be severely capacity constrained, though it plans to scale up and optimize over the coming months.

Depending on the traffic patterns it sees, OpenAI may introduce a new subscription level for higher-volume GPT-4 usage; it also hopes at some point to offer some amount of free GPT-4 queries so those without a subscription can try the model too.

Developers can get access to the GPT-4 API by signing up for the waitlist. OpenAI will start inviting some developers at launch and scale up gradually to balance capacity with demand. Researchers studying the societal impact of AI or AI alignment issues can also apply for subsidized access via OpenAI's Researcher Access Program.

Once they have access, developers can make text-only requests to the gpt-4 model (image inputs are still in limited alpha), which OpenAI will automatically update to its recommended stable model as new versions are released over time. Pricing is $0.03 per 1K prompt tokens and $0.06 per 1K completion tokens. Default rate limits are 40K tokens per minute and 200 requests per minute.
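To make that pricing concrete, here is a rough sketch of the cost arithmetic at the quoted rates; the token counts in the example are invented for illustration:

```python
# gpt-4 pricing as quoted above:
# $0.03 per 1K prompt tokens, $0.06 per 1K completion tokens.
PROMPT_RATE = 0.03 / 1000      # dollars per prompt token
COMPLETION_RATE = 0.06 / 1000  # dollars per completion token

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the dollar cost of a single gpt-4 request."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# Example: a 1,500-token prompt with a 500-token reply costs
# 1500 * $0.00003 + 500 * $0.00006 = $0.075.
print(f"${request_cost(1500, 500):.3f}")
```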

gpt-4 has a context length of 8,192 tokens. OpenAI is also providing limited access to a 32,768-token context version, gpt-4-32k, which will also be updated automatically over time (the current version is gpt-4-32k-0314, supported until June 14). Pricing is $0.06 per 1K prompt tokens and $0.12 per 1K completion tokens.
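Since the two variants differ only in context window and price, a caller might choose between them based on prompt length. Below is a hedged sketch of that decision using the tiktoken tokenizer; the helper name, reply budget, and threshold logic are illustrative, not part of OpenAI's API:

```python
import tiktoken  # pip install tiktoken

# cl100k_base is the tokenizer used by the GPT-4 family of models.
enc = tiktoken.get_encoding("cl100k_base")

def pick_model(prompt: str, reply_budget: int = 1000) -> str:
    """Illustrative helper: pick gpt-4 or gpt-4-32k by token count.

    Reserves room for the completion, since prompt and completion
    share the same context window.
    """
    needed = len(enc.encode(prompt)) + reply_budget
    if needed <= 8192:
        return "gpt-4"       # 8,192-token context
    if needed <= 32768:
        return "gpt-4-32k"   # 32,768-token context, double the price
    raise ValueError("Prompt too long even for gpt-4-32k")

print(pick_model("Summarize the following report: ..."))
```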
