The problem of equating a new standardized test to an old reference test is considered when the samples for equating are not randomly selected from the target population of test takers. Two problems with equating from biased samples are distinguished: (a) bias in the equating function arising from nonrandom selection of the equating sample, and (b) excessive variance in the equating function at scores that are underrepresented in the equating sample relative to the target population. A theorem is presented that suggests that bias may not be a major problem for equating, even when the marginal distributions of scores are distorted by selection. Empirical analysis of data for equating the Armed Services Vocational Aptitude Battery (ASVAB) based on samples of recruits and applicants supports this contention. Analysis of ASVAB data also indicates that excessive variance in the equating function is a more serious issue. Variance-reducing methods, which smooth the test score distributions using extended beta binomial and loglinear polynomial models before equating by the equipercentile method, are presented. Empirical evidence suggests that these smoothing models are successful and yield equating functions that improve on both equipercentile and linear equating of the raw scores.
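The smoothing-then-equating procedure described above can be illustrated with a minimal sketch. This is not the paper's extended beta binomial or loglinear polynomial implementation: it fits a simple two-parameter beta-binomial to each form's raw score frequencies by the method of moments (a hypothetical simplification), then maps scores by the standard equipercentile method with linear interpolation between integer score points. All function names and the synthetic frequency data are illustrative assumptions.

```python
import math

def betabinom_pmf(n, a, b, k):
    """Beta-binomial pmf: C(n,k) * B(k+a, n-k+b) / B(a,b), via lgamma."""
    lg = math.lgamma
    return math.exp(lg(n + 1) - lg(k + 1) - lg(n - k + 1)
                    + lg(k + a) + lg(n - k + b) - lg(n + a + b)
                    + lg(a + b) - lg(a) - lg(b))

def fit_betabinom(freqs):
    """Method-of-moments fit of a two-parameter beta-binomial to an
    observed frequency distribution over integer scores 0..n.
    (A stand-in for the extended beta binomial model in the text.)"""
    n = len(freqs) - 1
    total = sum(freqs)
    mean = sum(k * f for k, f in enumerate(freqs)) / total
    var = sum(f * (k - mean) ** 2 for k, f in enumerate(freqs)) / total
    p = mean / n
    # Overdispersion relative to the binomial: rho = 1 / (a + b + 1)
    rho = (var / (n * p * (1 - p)) - 1) / (n - 1)
    s = 1 / rho - 1                     # s = a + b
    return p * s, (1 - p) * s

def smooth(freqs):
    """Replace raw frequencies with fitted beta-binomial probabilities."""
    n = len(freqs) - 1
    a, b = fit_betabinom(freqs)
    return [betabinom_pmf(n, a, b, k) for k in range(n + 1)]

def percentile_rank(probs, x):
    """Percentile rank of integer score x with the usual
    half-point continuity correction."""
    return sum(probs[:x]) + 0.5 * probs[x]

def equipercentile(probs_x, probs_y, x):
    """Map score x on form X to the form-Y score with the same
    percentile rank, interpolating within score points."""
    pr = percentile_rank(probs_x, x)
    cum = 0.0
    for y, p in enumerate(probs_y):
        if cum + p >= pr:
            frac = (pr - cum) / p if p > 0 else 0.5
            return y - 0.5 + frac
        cum += p
    return len(probs_y) - 0.5
```

Smoothing before the equipercentile step is what reduces variance: at score points that are sparse in the equating sample, the fitted model borrows strength from the rest of the distribution, so the equating function no longer tracks sampling noise in individual cell frequencies.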