Selecting predictors for regression with high-dimensional data is a challenging problem. Methods such as sure screening, forward selection, and penalization are commonly used. Alternatively, Bayesian variable selection methods place a prior distribution over the model space, along with priors on the parameters, or equivalently, a mixture prior with a point mass at zero for the parameters of the full model. Since exhaustive enumeration is not feasible, posterior model probabilities are typically obtained via long MCMC runs, and the selected model can depend heavily on the choice of priors as well as on posterior thresholds. In contrast, we propose placing a conjugate prior only on the parameters of the full model and performing selection by searching for sparse solutions within posterior credible regions. These credible regions often have closed-form representations, and we show that the sparse solutions can be computed via existing algorithms. The approach is shown to outperform common methods in the high-dimensional setting, particularly when the predictors are correlated. Searching for a sparse solution within a joint credible region is shown to yield consistent model selection; moreover, the simple use of marginal credible intervals gives consistent selection even when the dimension grows exponentially with the sample size.
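As a rough illustration of the idea, the sketch below assumes a ridge-type conjugate normal prior on the full-model coefficients, so the posterior is normal with a closed-form mean and covariance, and the credible region is an ellipsoid. One existing algorithm of the kind alluded to above is an adaptive-lasso reformulation: relaxing the search for the sparsest point in the ellipsoid to a weighted L1 problem turns it into a standard lasso fit after a Cholesky transformation. The prior scale `g`, the penalty `lam` (which implicitly governs the credible level), and the use of scikit-learn's `Lasso` solver are illustrative assumptions, not a definitive implementation of the paper's procedure.

```python
# Sketch: selection via a sparse solution within a posterior credible region,
# assuming a conjugate prior beta ~ N(0, (sigma^2/g) I). The Lagrangian
# relaxation below (an adaptive lasso) is one way to compute the sparse
# solution with off-the-shelf software; `g` and `lam` are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

def credible_region_selection(X, y, g=1.0, lam=0.1):
    n, p = X.shape
    # Posterior under the conjugate normal prior: beta | y ~ N(bhat, Sigma).
    A = X.T @ X + g * np.eye(p)
    bhat = np.linalg.solve(A, X.T @ y)          # posterior mean
    resid = y - X @ bhat
    sigma2 = resid @ resid / max(n - 1, 1)      # plug-in error variance
    Sigma_inv = A / sigma2                      # posterior precision

    # Sparsest point in the elliptical credible region, relaxed to a
    # weighted L1 problem:
    #   min_b (b - bhat)' Sigma_inv (b - bhat) + lam * sum_j |b_j| / |bhat_j|
    # A Cholesky factorization turns the quadratic into a least-squares
    # term, so this is an adaptive lasso solvable by standard software.
    R = np.linalg.cholesky(Sigma_inv).T         # Sigma_inv = R'R
    y_tilde = R @ bhat
    w = 1.0 / np.abs(bhat)                      # adaptive weights
    X_tilde = R / w                             # scale column j by |bhat_j|

    # sklearn's Lasso minimizes (1/(2*rows))||y - Xc||^2 + alpha*||c||_1,
    # so alpha = lam / (2p) matches the objective above (rows = p here).
    fit = Lasso(alpha=lam / (2 * p), fit_intercept=False, max_iter=50_000)
    fit.fit(X_tilde, y_tilde)
    beta = fit.coef_ / w                        # undo the variable rescaling
    return bhat, beta, np.flatnonzero(beta != 0.0)
```

In this formulation, each value of `lam` corresponds to a credible level for the ellipsoid, so varying `lam` traces out a path of increasingly sparse solutions within progressively larger credible regions; selection then amounts to choosing a point on that path.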