Authors: Yuzbasi, Bahadir; Arashi, Mohammad; Akdeniz, Fikri
Date available: 2024-08-04
Year: 2021
ISSN: 1432-7643; eISSN: 1433-7479
DOI: https://doi.org/10.1007/s00500-021-05763-9
Handle: https://hdl.handle.net/11616/99985

Abstract: This article is concerned with bridge regression, a special family of penalized regression with penalty function $\sum_{j=1}^{p} |\beta_j|^q$, q > 0, in a linear model with linear restrictions. The proposed restricted bridge (RBRIDGE) estimator simultaneously estimates parameters and selects important variables when prior information about the parameters is available, in either the low-dimensional or the high-dimensional case. Using a local quadratic approximation, we approximate the penalty term around a vector of local initial values. The RBRIDGE estimator enjoys a closed-form expression that can be solved for any q > 0. Special cases of our proposal are the restricted LASSO (q = 1), restricted RIDGE (q = 2), and restricted Elastic Net (1 < q < 2) estimators. We provide theoretical properties of the RBRIDGE estimator for the low-dimensional case, whereas computational aspects are given for both the low- and high-dimensional cases. An extensive Monte Carlo simulation study is conducted under different pieces of prior information. The performance of the RBRIDGE estimator is compared with several competing penalty estimators and the ORACLE. We also analyze four real-data examples for the sake of comparison. The numerical results show that the suggested RBRIDGE estimator performs outstandingly well when the prior information is exact or nearly exact.

Language: en
Rights: info:eu-repo/semantics/openAccess
Keywords: Bridge regression; Restricted estimation; Machine learning; Quadratic approximation; Newton-Raphson; Variable selection; Multicollinearity
Title: Penalized regression via the restricted bridge estimator
Type: Article
Volume: 25; Issue: 13; Pages: 8401-8416
DOI: 10.1007/s00500-021-05763-9
Scopus ID: 2-s2.0-85107873517 (Q2)
WoS ID: WOS:000641235700004 (Q2)
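The local quadratic approximation mentioned in the abstract can be illustrated with a minimal sketch of an *unrestricted* bridge solver: around a current iterate, each |beta_j|^q term is replaced by a quadratic, so every step reduces to a ridge-type linear solve. This is a hypothetical illustration (function name, defaults, and stopping rule are assumptions), not the authors' RBRIDGE implementation, which additionally handles linear restrictions.

```python
import numpy as np

def bridge_lqa(X, y, lam=1.0, q=0.5, n_iter=50, eps=1e-6):
    """Illustrative bridge estimator via local quadratic approximation (LQA).

    Around the current iterate beta0, the penalty sum_j |beta_j|^q is
    approximated quadratically with weights q * |beta0_j|^(q - 2), so each
    iteration is a weighted ridge solve:
        beta = (X'X + lam * diag(w))^(-1) X'y.
    The `eps` floor keeps weights finite as coefficients approach zero.
    (Sketch only; the paper's restricted estimator is not reproduced here.)
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares initial values
    for _ in range(n_iter):
        w = q * np.maximum(np.abs(beta), eps) ** (q - 2)
        beta_new = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
        if np.max(np.abs(beta_new - beta)) < 1e-8:  # converged
            beta = beta_new
            break
        beta = beta_new
    return beta
```

Setting lam=0 recovers the ordinary least-squares solution, while q = 1 and q = 2 correspond to the LASSO- and RIDGE-type special cases the abstract lists.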