OpenAI Launches Fine-Tuning for GPT-4o Mini, Free for 2 Months
Jul 23, 2024, 08:47 PM
OpenAI has announced the availability of fine-tuning for its GPT-4o mini model, offering developers a more capable and cost-effective alternative to earlier models such as GPT-3.5 Turbo. GPT-4o mini provides four times the training context (64k tokens) and eight times the inference context (128k tokens) of GPT-3.5 Turbo. For the next two months, through September 23, developers can use up to 2 million training tokens per day for free. Access is rolling out gradually to all user tiers, starting with tier 4 and tier 5 users. The initiative is expected to unlock new use cases and make fine-tuning more effective.
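As a rough illustration of what fine-tuning GPT-4o mini involves, here is a minimal sketch using the OpenAI Python SDK. The training file uses the chat-format JSONL that OpenAI's fine-tuning endpoint expects; the example conversation, file name, and model snapshot string are illustrative assumptions, and the API calls (which require a valid `OPENAI_API_KEY`) are shown commented out.

```python
import json

# Build a tiny chat-format training file (JSONL) — the format the
# fine-tuning endpoint expects for chat models like GPT-4o mini.
# The single example below is purely illustrative.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a terse support bot."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Settings > Security > Reset password."},
    ]},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the file and start a fine-tuning job (needs OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# upload = client.files.create(file=open("train.jsonl", "rb"),
#                              purpose="fine-tune")
# job = client.fine_tuning.jobs.create(
#     training_file=upload.id,
#     model="gpt-4o-mini-2024-07-18",  # snapshot name; check current docs
# )
# print(job.id, job.status)
```

Each JSONL line is one complete training conversation; in practice you would supply many such examples, staying within the daily free allowance of 2 million training tokens during the promotional period.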