According to a blog post from the AI development platform HumanLoop, OpenAI CEO Sam Altman said in a closed-door seminar that OpenAI is currently severely GPU-constrained, which has forced it to postpone many short-term plans. Most of ChatGPT's reliability and speed problems are caused by the shortage of GPU resources.

Altman also shared OpenAI's near-term roadmap: GPT-4 API costs will come down over the course of 2023; ChatGPT will get a longer context window (up to 1 million tokens), and a future API version will remember conversation history; GPT-4's multimodal capabilities will not be publicly available until 2024, and the vision version of GPT-4 cannot be rolled out to everyone until more GPU capacity is available.

OpenAI is also considering open-sourcing GPT-3. Part of the reason it has not done so already is the belief that few people and companies are capable of properly managing such a large language model.

Finally, Altman said the claim in many recent articles that "the era of giant AI models is over" is incorrect. OpenAI's internal data show that the scaling law relating model size to performance still holds, and OpenAI's models may double or triple in size every year (multiple reports put GPT-4's parameter count at around 1 trillion) rather than growing by many orders of magnitude.
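The "scaling law" Altman invokes is presumably the empirical power-law relationship reported by Kaplan et al. (2020, "Scaling Laws for Neural Language Models"); the formula and constants below come from that paper, not from the seminar, and are shown only to make the claim concrete. For the non-embedding parameter count \(N\), test loss falls roughly as

\[
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076, \quad N_c \approx 8.8 \times 10^{13}.
\]

Under such a power law, every doubling of \(N\) multiplies the loss by roughly \(2^{-0.076} \approx 0.95\), a steady improvement of about 5% per doubling, which is the pattern Altman says has not yet broken down.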