Evaluation Overview
The Evaluation section summarizes the performance of fine-tuned AI models on the platform. It helps users visualize and analyze the results of their fine-tuning tasks, supporting more informed decisions during model development.
Key Features:
- Fine-Tuning Evaluation Graph:
- The graph shows the progress of the fine-tuning process over time, with metrics such as:
- Green Line: Represents evaluations or fine-tuning runs that completed successfully.
- Red Line: Tracks errors or failed evaluations during the fine-tuning process.
- X-Axis: Displays time or the number of evaluations.
- Y-Axis: Displays the count of successful or unsuccessful evaluations.
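The two lines can be thought of as cumulative counts of evaluation outcomes per time bucket. The sketch below is illustrative only: the event tuples and status strings are hypothetical stand-ins, not the platform's actual data schema.

```python
from collections import Counter

# Hypothetical evaluation records as (time_bucket, status) pairs.
# The bucket values and status labels are illustrative assumptions.
events = [
    (1, "succeeded"), (1, "failed"),
    (2, "succeeded"), (2, "succeeded"),
    (3, "failed"),    (3, "succeeded"),
]

def series(events, status):
    """Cumulative count of events with the given status per time bucket."""
    counts = Counter(t for t, s in events if s == status)
    buckets = sorted({t for t, _ in events})
    total, out = 0, []
    for t in buckets:
        total += counts.get(t, 0)
        out.append((t, total))
    return out

green = series(events, "succeeded")  # what the green line would plot
red = series(events, "failed")       # what the red line would plot
```

Plotting `green` and `red` against the time buckets reproduces the rising green line and the error line described above.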
- Model Status Table:
- Below the graph, a table displays detailed information about each fine-tuning task, including:
- Model: Name of the fine-tuned model.
- Created At: Timestamp when the fine-tuning process began.
- Finished At: Timestamp when the process completed.
- Fine-Tuned Model: The name of the resulting model.
- Status: Indicates success or failure of the fine-tuning task.
- Error: If applicable, displays the error details for any failed task.
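Treating each table row as a record makes it easy to summarize outcomes at a glance. This is a minimal sketch with made-up field names and values; the platform's real API schema may differ.

```python
from collections import Counter

# Hypothetical rows mirroring the status table columns above.
# Model names, timestamps, and errors are illustrative only.
jobs = [
    {"model": "base-v1", "created_at": "2024-05-01T10:00:00",
     "finished_at": "2024-05-01T11:30:00",
     "fine_tuned_model": "base-v1-ft-001", "status": "succeeded", "error": None},
    {"model": "base-v1", "created_at": "2024-05-02T09:00:00",
     "finished_at": "2024-05-02T09:05:00",
     "fine_tuned_model": "base-v1-ft-002", "status": "succeeded", "error": None},
    {"model": "base-v2", "created_at": "2024-05-03T08:00:00",
     "finished_at": "2024-05-03T08:02:00",
     "fine_tuned_model": None, "status": "failed",
     "error": "Training file validation failed"},
]

def status_summary(jobs):
    """Count fine-tuning tasks by their Status column."""
    return Counter(j["status"] for j in jobs)
```

A quick `status_summary(jobs)` shows the success/failure split before digging into individual rows.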
Tips for Using the Evaluation Section:
- Track Fine-Tuning Progress:
- Use the green line to monitor successful evaluations and ensure your models are fine-tuning as expected.
- Compare it against the red line to detect any error trends that need attention.
- Analyze Key Metrics:
- Visualize success rates and errors over time to gain insight into how well your models are performing during the fine-tuning process.
- Adjust fine-tuning parameters based on these insights to improve future results.
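One simple metric to track across runs is the overall success rate. The helper below is a small illustrative sketch, assuming status strings like "succeeded" and "failed"; the platform's actual labels may differ.

```python
def success_rate(statuses):
    """Fraction of evaluations that succeeded; None when there is no data."""
    if not statuses:
        return None
    return sum(s == "succeeded" for s in statuses) / len(statuses)

# Example: three of four hypothetical runs succeeded.
rate = success_rate(["succeeded", "failed", "succeeded", "succeeded"])
```

Comparing this rate before and after a parameter change gives a concrete signal for whether the adjustment helped.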
- Review Fine-Tuning Details:
- The status table helps you identify the start and end times for each task, while also allowing you to investigate any failed attempts by reviewing error messages.
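The Created At / Finished At columns also let you compute how long each task ran, and the Error column can be collected for failed tasks. A minimal sketch, assuming ISO-8601 timestamps and an illustrative record layout (not the platform's actual schema):

```python
from datetime import datetime

def task_duration(created_at, finished_at):
    """Elapsed fine-tuning time in seconds between two ISO-8601 timestamps."""
    start = datetime.fromisoformat(created_at)
    end = datetime.fromisoformat(finished_at)
    return (end - start).total_seconds()

def failed_errors(jobs):
    """Collect error messages from failed tasks for investigation."""
    return [j["error"] for j in jobs if j["status"] == "failed"]

# Hypothetical records for demonstration.
jobs = [
    {"status": "succeeded", "error": None},
    {"status": "failed", "error": "Training file validation failed"},
]
```

For example, `task_duration("2024-05-01T10:00:00", "2024-05-01T11:30:00")` gives the 90-minute run as 5400 seconds.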
Example Screenshot:
The example screenshot shows the Fine-Tuning Evaluation Graph, where a rising green line indicates successful model evaluations, and the red line indicates errors. Below the graph, the status table displays key details about the models and their fine-tuning process.
By using the Evaluation page, users can track model fine-tuning performance in real-time and adjust their approach based on the trends and data provided. This ensures more efficient model development and optimization for future tasks.