OpenAI Introduces Free Fine-Tuning for GPT-4o Mini Model with 2M Tokens/Day
Jul 23, 2024, 08:05 PM
OpenAI has announced fine-tuning for its GPT-4o mini model, letting developers customize the model for specific applications. The feature is initially available to tier 4 and 5 users, with plans to expand access to all tiers. Developers can use up to 2 million training tokens per day for free through September 23. Compared with GPT-3.5 Turbo, GPT-4o mini offers 4 times the training context (64k tokens) and 8 times the inference context (128k tokens), making it a more capable and cost-effective option, and fine-tuning it is also cheaper.
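For readers who want to try the feature, the sketch below shows how a fine-tuning job is typically created with OpenAI's Python SDK. The model identifier "gpt-4o-mini-2024-07-18", the file name, and the JSONL training format are assumptions drawn from OpenAI's general fine-tuning API documentation, not details stated in the story above.

```python
# Minimal sketch of launching a GPT-4o mini fine-tuning job.
# Assumptions (not from the story): the model snapshot name
# "gpt-4o-mini-2024-07-18" and a local "training_data.jsonl" file
# containing chat-formatted examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of training examples for fine-tuning.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job against the GPT-4o mini snapshot.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)

print(job.id, job.status)
```

Training token usage within the free daily allowance would still be metered per organization, so the job itself is created the same way whether or not the promotional period applies.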