OpenAI's ChatGPT amassed users faster than Google did in its early days, prompting speculation about a looming rivalry between the two. For now, though, the service faces a more immediate problem: heavy GPU demands and the daily query limits they force.
To address these challenges, OpenAI is contemplating developing its own chips, an initiative that could substantially lower ChatGPT's operational costs. Apple Police highlighted that running a single ChatGPT query currently costs about 4 cents, largely because of the service's dependence on GPUs. These GPUs power OpenAI's models, operating on vast cloud clusters to meet customer demand, and they are far from cheap.
Impressively, the service amassed 100 million users in its first two months, which translates into millions of queries a day. Stacy Rasgon, an analyst at Bernstein, estimated that if ChatGPT's query volume climbed to even a tenth of Google's, OpenAI would initially need a staggering $48.1 billion worth of GPUs, plus around $16 billion in chips annually to keep the service running.
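For a sense of scale, here is a minimal back-of-envelope sketch of that math in Python. The 4-cents-per-query figure comes from the article; the assumption that Google handles roughly 8.5 billion searches per day is a commonly cited public estimate, not something stated here, so treat the output as illustrative rather than exact.

```python
# Back-of-envelope estimate of ChatGPT's serving costs at a tenth of
# Google's query volume, the scenario in Bernstein's analysis.

GOOGLE_QUERIES_PER_DAY = 8.5e9   # assumption: commonly cited public estimate
COST_PER_QUERY_USD = 0.04        # ~4 cents per query, per the article

chatgpt_queries_per_day = GOOGLE_QUERIES_PER_DAY / 10

daily_cost = chatgpt_queries_per_day * COST_PER_QUERY_USD
annual_cost = daily_cost * 365

print(f"Queries per day: {chatgpt_queries_per_day:,.0f}")    # 850,000,000
print(f"Daily GPU cost:  ${daily_cost / 1e6:,.1f} million")  # $34.0 million
print(f"Annual cost:     ${annual_cost / 1e9:,.1f} billion") # $12.4 billion
```

The rough result, around $12 billion a year, lands in the same order of magnitude as the $16 billion annual chip bill Rasgon projects, which is exactly why custom silicon starts to look attractive.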
According to Reuters, OpenAI hasn't made a final decision on building its own chips. Even if it proceeds, it could be years before those chips see action.
Creating a cost-effective custom chip would be no easy feat. Projects of this kind typically take years and demand heavy annual investment, and it isn't clear whether OpenAI's backers, such as Microsoft, are prepared to shoulder a challenge of that size.