OpenAI Introduces Free Fine-Tuning for GPT-4o Mini Model with 2M Tokens/Day
Jul 23, 2024, 08:05 PM
OpenAI has announced the availability of fine-tuning for its GPT-4o mini model, giving developers the ability to customize the AI for specific applications. The feature is initially available to tier 4 and 5 users, with plans to expand access to all tiers. Developers can use up to 2 million training tokens per day for free through September 23. GPT-4o mini offers 4 times the training context (64k tokens) and 8 times the inference context (128k tokens) of GPT-3.5 Turbo, making it a more capable and cost-effective option, and fine-tuning it is also cheaper than fine-tuning GPT-3.5 Turbo.
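For readers who want to try the feature, the sketch below shows one way a fine-tuning job for GPT-4o mini might be launched with the official OpenAI Python SDK (v1.x). The file name "train.jsonl" and the model snapshot id "gpt-4o-mini-2024-07-18" are assumptions for illustration; the exact fine-tunable model name and data format should be confirmed against OpenAI's fine-tuning documentation.

```python
# Minimal sketch, assuming the OpenAI Python SDK v1.x and a JSONL file of
# chat-formatted training examples. Model id and file name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training data for fine-tuning.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job against the (assumed) GPT-4o mini snapshot.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)
```

Once the job completes, the resulting fine-tuned model id can be passed to the normal chat completions endpoint in place of the base model name.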