Dynamical simulation of passive fiber ring resonators conventionally relies on numerical integration of the Lugiato-Lefever equation (LLE), which incurs a high computational cost due to its iterative nature. This inefficiency is a critical bottleneck for predicting the optical field states that form inside the resonator. To address this challenge, this study proposes a hybrid machine learning model that combines gradient boosting decision trees (GBDT), support vector machines (SVM), and artificial neural networks (ANN) through a dynamic voting mechanism. The primary objective is rapid and accurate classification of intracavity optical field states, replacing traditional numerical simulation and advancing intelligent modeling in nonlinear photonics.
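For reference, the mean-field LLE commonly used for a passive fiber ring cavity can be written as below; the notation follows the standard fiber-cavity formulation and may differ from the paper's exact normalization:

$$
t_R \frac{\partial E(t,\tau)}{\partial t} = \left[ -\alpha - i\delta_0 - i\frac{\beta_2 L}{2}\frac{\partial^2}{\partial \tau^2} + i\gamma L |E|^2 \right] E + \sqrt{\theta}\, E_{\mathrm{in}},
$$

where $L$ is the fiber length, $\theta$ the power transmission coefficient, $\delta_0$ the detuning, $\alpha$ the total cavity loss, $\beta_2$ the group-velocity dispersion, $\gamma$ the nonlinear coefficient, $t_R$ the round-trip time, and $E_{\mathrm{in}}$ the driving field. The first three of these ($L$, $\theta$, $\delta_0$) are the parameters swept to build the training set described next.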
The training dataset was generated by numerically solving the LLE over a range of parameters: fiber length (10–100 m), power transmission coefficient (0.05–0.15), and detuning (−π to π). The dataset covers five optical field states: noise, Turing, chaotic, single-soliton, and multi-soliton states (Figure 2). Three machine learning models (GBDT, SVM, and ANN) were trained independently on this dataset, and a dynamic weighted voting strategy combined their predictions: when the models agree, the majority class is taken; otherwise, each model's vote is weighted by its historical class-specific accuracy, so the model that has proven most reliable for the predicted class dominates the decision. Hyperparameters were optimized with the Optuna framework; key settings include the GBDT learning rate (0.01) and maximum tree depth (5), the SVM penalty factor (C = 90) with a radial basis function kernel and kernel parameter γ = 35, and the ANN architecture (three hidden layers of 60 neurons each).
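A minimal sketch of the three base models and the class-accuracy-weighted voting rule is shown below, assuming scikit-learn estimators with the hyperparameters quoted above; the paper's feature pipeline, Optuna search space, and exact voting rule are not reproduced here, and all variable names are illustrative.

```python
# Sketch of the hybrid classifier: three base models plus a dynamic,
# class-accuracy-weighted vote. Hyperparameters follow the values quoted
# in the text; everything else (data handling, feature extraction) is assumed.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

models = {
    "GBDT": GradientBoostingClassifier(learning_rate=0.01, max_depth=5),
    "SVM":  SVC(C=90, kernel="rbf", gamma=35),
    "ANN":  MLPClassifier(hidden_layer_sizes=(60, 60, 60), max_iter=2000),
}

def per_class_accuracy(model, X_val, y_val, classes):
    """Recall of each class on a held-out set; used as the voting weight."""
    y_pred = model.predict(X_val)
    return {c: float(np.mean(y_pred[y_val == c] == c)) if np.any(y_val == c) else 0.0
            for c in classes}

def dynamic_vote(models, weights, X):
    """Each model's vote counts with its historical accuracy on the class it
    predicts; the class with the highest total score is returned."""
    preds = {name: m.predict(X) for name, m in models.items()}
    final = []
    for i in range(len(X)):
        scores = {}
        for name, p in preds.items():
            cls = p[i]
            scores[cls] = scores.get(cls, 0.0) + weights[name][cls]
        final.append(max(scores, key=scores.get))
    return np.array(final)

# Usage (X_train, y_train, X_val, y_val are assumed to come from the LLE dataset):
# classes = np.unique(y_train)
# for m in models.values():
#     m.fit(X_train, y_train)
# weights = {name: per_class_accuracy(m, X_val, y_val, classes)
#            for name, m in models.items()}
# y_hat = dynamic_vote(models, weights, X_test)
```

With this weighting, unanimous or majority agreement dominates the score, while disagreements are resolved in favor of the model that has historically been most accurate on the class it proposes.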
The hybrid model achieved classification accuracies of 77% (noise), 95% (Turing), 97% (chaotic), 86% (single soliton), and 85% (multi-soliton) (Figure 6). Compared with a random forest (RF), it improved accuracy by more than 12% for the single- and multi-soliton states. The confusion matrices (Figure 5) show that GBDT was strongest on single-soliton classification, SVM on multi-soliton identification, and ANN on noise recognition; the dynamic voting mechanism balanced these complementary strengths and mitigated individual weaknesses, reducing misclassification and improving robustness. Parameter-space testing (Figure 7) confirmed strong consistency between the predicted and actual state distributions. In the computational efficiency test (Figure 8), the hybrid model processed 100 parameter sets in 0.24 s, more than 10,000 times faster than the traditional LLE simulation (2700 s).
By combining GBDT, SVM, and ANN through dynamic voting, the hybrid machine learning framework overcomes the efficiency bottleneck of LLE-based simulation and the limitations of any single model, accelerating state prediction by more than four orders of magnitude. The study highlights the potential of intelligent algorithms to replace complex numerical simulations in nonlinear photonics.