OpenAI Introduces Free Fine-Tuning for GPT-4o Mini Model with 2M Tokens/Day
Jul 23, 2024, 08:05 PM
OpenAI has announced the availability of fine-tuning for its GPT-4o mini model, giving developers the ability to customize the model for specific applications. The feature is initially available to tier 4 and 5 users, with plans to expand access to all tiers. Developers can use up to 2 million training tokens per day for free through September 23, 2024. GPT-4o mini offers 4 times the training context (64k tokens) and 8 times the inference context (128k tokens) of GPT-3.5 Turbo, making it a more capable and cost-effective option. Fine-tuning GPT-4o mini is also cheaper than fine-tuning GPT-3.5 Turbo, and the free token allowance makes it easy to trial.
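For developers who want to try the free allowance, the workflow is the standard OpenAI fine-tuning API: upload a JSONL file of chat-format examples, then create a fine-tuning job against the GPT-4o mini snapshot. The sketch below uses the OpenAI Python SDK; the file name `support_examples.jsonl` is a placeholder, and the snapshot ID `gpt-4o-mini-2024-07-18` is an assumption based on the release date, so check OpenAI's fine-tuning docs for the exact identifier.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a chat-format training file: JSONL, one {"messages": [...]} object per line.
# "support_examples.jsonl" is a hypothetical file name for illustration.
training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on the GPT-4o mini snapshot (model ID assumed here).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)

# Check job status; once it succeeds, the fine-tuned model name
# (typically "ft:gpt-4o-mini-...") can be used in chat completions.
print(client.fine_tuning.jobs.retrieve(job.id).status)
```

Training tokens consumed by such jobs count toward the 2 million free tokens per day during the promotional period; usage beyond that is billed at the standard fine-tuning rates.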