OpenAI Launches Fine-Tuning for GPT-4o Mini, Free for 2 Months
Jul 23, 2024, 08:47 PM
OpenAI has announced fine-tuning for its GPT-4o mini model, giving developers a more capable and cost-effective alternative to earlier models such as GPT-3.5 Turbo. GPT-4o mini offers four times the training context (64k tokens) and eight times the inference context (128k tokens) of GPT-3.5 Turbo. Through September 23, developers can use up to 2 million training tokens per day for free; access begins with tier 4 and 5 users and will gradually expand to all user tiers. OpenAI expects the initiative to unlock new use cases and make fine-tuning more effective.
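For developers who want to try this, the workflow follows OpenAI's standard fine-tuning API. The sketch below is a minimal example, assuming the openai Python SDK (v1.x), the gpt-4o-mini-2024-07-18 snapshot identifier, and an illustrative local file name; it uploads a chat-formatted JSONL training file and starts a job.

# Minimal sketch: start a GPT-4o mini fine-tuning job with the OpenAI Python SDK.
# Assumptions: OPENAI_API_KEY is set, "training_data.jsonl" is a local example file,
# and "gpt-4o-mini-2024-07-18" is the fine-tunable snapshot ID.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a chat-formatted JSONL training file (each line: {"messages": [...]}).
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Create the fine-tuning job against the GPT-4o mini snapshot.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)

print(job.id, job.status)

Once the job finishes, the resulting fine-tuned model ID can be used in chat completion requests like any other model.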
Related prediction markets (question text not captured): one over percentage ranges (less than 10%, 10% to 25%, 25% to 50%, more than 50%) and one over likely fine-tuning use cases (customer service chatbots, content generation, personalized recommendations, other), each option at 25%.