At OpenAI, we are continually improving our fine-tuning API. Since its launch, the self-serve fine-tuning API for GPT-3.5 has been adopted by developers around the world. Fine-tuning helps models learn content more deeply and lets developers train on far more examples than fit in a single prompt, yielding higher-quality results while reducing cost and latency.
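To make this concrete, here is a minimal sketch of starting a self-serve fine-tuning job with the `openai` Python SDK (v1.x). The file name `train.jsonl` is a placeholder for your own chat-formatted training data.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tuning job on GPT-3.5 Turbo.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```

Once the job completes, the resulting fine-tuned model name can be used anywhere a base model name is accepted in the Chat Completions API.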
A prime example is Indeed, the global job matching platform. By fine-tuning GPT-3.5 Turbo to generate higher-quality, more accurate job recommendations, Indeed scaled its personalized messaging from under one million to 20 million messages per month.
We are expanding the fine-tuning API with new features that give developers more control (a brief usage sketch follows the list):
- Epoch-based Checkpoint Creation: Automatically produce a full model checkpoint at each training epoch, reducing the need to retrain, especially when later epochs overfit.
- Comparative Playground: A side-by-side Playground UI for comparing model quality and performance, making human evaluation across models easier.
- Third-party Integrations: Share detailed fine-tuning data with platforms such as Weights & Biases directly from a job.
- Comprehensive Validation Metrics: Compute metrics such as loss and accuracy over the entire validation dataset, not just a sampled batch, for deeper insight into model quality.
- Hyperparameter Configuration: Configure available hyperparameters directly from the Dashboard, rather than only through the API or SDK.
- Fine-Tuning Dashboard Improvements: Including the ability to configure hyperparameters, view more detailed training metrics, and rerun previous jobs.
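As a rough sketch of how several of these features surface in the API (using the `openai` Python SDK; the file IDs and the Weights & Biases project name below are placeholders, and exact parameter shapes may differ slightly from the current reference docs):

```python
from openai import OpenAI

client = OpenAI()

# Create a job with explicit hyperparameters, a validation file
# (used for full-dataset validation metrics), and a Weights & Biases
# integration for detailed run tracking.
job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file="file-abc123",    # placeholder training file ID
    validation_file="file-def456",  # placeholder validation file ID
    hyperparameters={"n_epochs": 3},
    integrations=[
        {"type": "wandb", "wandb": {"project": "my-finetune-project"}}
    ],
)

# After the job finishes, list the full model checkpoints created at
# each epoch; any checkpoint can be used for inference like a
# regular fine-tuned model.
checkpoints = client.fine_tuning.jobs.checkpoints.list(job.id)
for ckpt in checkpoints.data:
    print(ckpt.step_number, ckpt.fine_tuned_model_checkpoint, ckpt.metrics)
```

The Comparative Playground and the Dashboard hyperparameter controls are UI features, so they do not appear in the snippet above.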
Together, these features give developers greater control over and visibility into their fine-tuning jobs, improving both the developer experience and model quality.
We are also expanding our Custom Models Program with assisted fine-tuning, a collaborative offering in which our teams help organizations apply techniques that go beyond the self-serve fine-tuning API. It is particularly valuable for organizations that need support building efficient training pipelines and maximizing model performance for a specific task.
For instance, SK Telecom partnered with us to fine-tune GPT-4 for telecom-related conversations in Korean. Through assisted fine-tuning, SK Telecom achieved significant improvements in conversation summarization and customer satisfaction.
For organizations with specialized use cases or large amounts of proprietary data, we also offer fully custom-trained models. Harvey, an AI-native legal tool, collaborated with OpenAI to develop a custom-trained language model for case law and saw marked improvements in output quality.
Looking ahead, we expect custom models tailored to specific industries and use cases to become increasingly common. With a range of techniques available, from self-serve fine-tuning to fully custom-trained models, organizations can develop personalized models that deliver more impactful AI implementations.
Whether you're fine-tuning with our API or exploring custom solutions, we're committed to supporting you. Visit our API documentation to get started, or reach out to us to discuss customizing models for your requirements.