The rapid evolution of 5G networks necessitates efficient and adaptive resource allocation strategies to enhance network performance, minimize latency, and optimize bandwidth utilization. This study systematically evaluates multiple machine learning (ML) models, including Neural Networks, Support Vector Machines (SVM), Decision Trees, Ensemble Learning, and Regression-based approaches, to
determine the most effective techniques for 5G resource allocation. The classification-based models demonstrated superior performance in predicting network congestion states, with Boosted Trees achieving the highest accuracy (94.1%), outperforming Bagged Trees (92.7%) and RUSBoosted Trees (93.8%). Among SVM classifiers, Gaussian SVM exhibited the highest accuracy (92.3%), highlighting its robustness in handling non-linearly separable data. Levenberg-Marquardt-trained Neural Networks (93.4%) outperformed SVM models in overall accuracy, emphasizing deep learning’s effectiveness in hierarchical feature representation. Meanwhile, regression-based models, particularly Gradient Boosting (R² = 0.96, MSE = 4.92), demonstrated the best predictive performance for continuous resource allocation optimization, surpassing Random Forest (R² = 0.94, MSE = 6.85) and Polynomial Regression (R² = 0.92, MSE = 9.21). The integration of Self-Organizing Maps (SOMs) for unsupervised network clustering further improved resource segmentation. Future research should explore Deep Reinforcement Learning (DRL) for autonomous 5G optimization and Explainable AI (XAI) techniques for improved interpretability in real-world deployments.
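The sketch below illustrates the kind of model comparison the abstract describes: tree-ensemble and Gaussian-kernel SVM classifiers scored by accuracy for congestion-state prediction, and Gradient Boosting, Random Forest, and Polynomial Regression scored by R² and MSE for a continuous allocation target. It is an assumption-laden illustration, not the authors' pipeline: it uses scikit-learn on synthetic data, and all feature semantics, class labels, and hyperparameters are hypothetical placeholders (the study's own toolchain, dataset, and settings are not given here).

```python
# Illustrative sketch only: scikit-learn on synthetic data, not the authors' models or dataset.
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              GradientBoostingRegressor, RandomForestRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.metrics import accuracy_score, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.svm import SVC

# Classification: predicting a discrete congestion state (labels are synthetic placeholders).
Xc, yc = make_classification(n_samples=2000, n_features=10, n_informative=6,
                             n_classes=3, random_state=0)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, test_size=0.2, random_state=0)

classifiers = {
    "Boosted Trees": GradientBoostingClassifier(random_state=0),
    "Bagged Trees": BaggingClassifier(n_estimators=100, random_state=0),
    "Gaussian (RBF) SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale")),
}
for name, clf in classifiers.items():
    clf.fit(Xc_tr, yc_tr)
    acc = accuracy_score(yc_te, clf.predict(Xc_te))
    print(f"{name}: accuracy = {acc:.3f}")

# Regression: predicting a continuous resource-allocation target (synthetic stand-in
# for measurements such as load or throughput demand).
Xr, yr = make_regression(n_samples=2000, n_features=8, noise=10.0, random_state=0)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, test_size=0.2, random_state=0)

regressors = {
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "Polynomial Regression": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
}
for name, reg in regressors.items():
    reg.fit(Xr_tr, yr_tr)
    pred = reg.predict(Xr_te)
    print(f"{name}: R^2 = {r2_score(yr_te, pred):.2f}, MSE = {mean_squared_error(yr_te, pred):.2f}")
```

On real 5G telemetry the same evaluation loop applies once the synthetic generators are replaced with measured features and targets; the reported figures (e.g., R² = 0.96 for Gradient Boosting) come from the study's own data and will not be reproduced by this synthetic example.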