This paper focuses on test procedures under corrupted data. We assume that the observations $Z_i$ are mismeasured, due to the presence of measurement errors. Thus, instead of $Z_i$ for $i = 1, \ldots, n$, we observe $X_i = Z_i + \sqrt{\delta}\, V_i$, where $\delta$ is an unknown parameter and $V_i$ is an unobservable random variable. The random variables $Z_i$ are assumed to be i.i.d., as are the $X_i$ and the $V_i$, and the density of the $V_i$ is supposed to be known. The test procedure aims at deciding between two simple hypotheses pertaining to the density of the variable $Z_i$, namely $f_0$ and $g_0$. The procedure which we propose aggregates likelihood ratios over a collection of values of $\delta$. A new definition of least-favorable hypotheses for the aggregated family of tests is introduced, and its relation to the Kullback-Leibler divergence between the sets $(f_\delta)_\delta$ and $(g_\delta)_\delta$ is established. Finite-sample lower bounds for the power of these tests are provided, both through analytical inequalities and through simulation under the least-favorable hypotheses. Since no optimality property holds for the aggregation of likelihood ratio tests, a similar procedure is proposed in which the individual likelihood ratios are replaced by divergence-based test statistics. It is shown and discussed that the resulting aggregated test may perform better than the aggregated likelihood ratio procedure.
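For concreteness, the following is a minimal sketch of the model and of one possible aggregation rule; the max-combination over a finite grid is an assumption for illustration, as the abstract does not specify how the individual statistics are combined. Writing $h$ for the known density of the $V_i$, the observation $X_i = Z_i + \sqrt{\delta}\, V_i$ has, under the null, the convolution density
\[
f_\delta(x) \;=\; \int f_0(z)\,\frac{1}{\sqrt{\delta}}\, h\!\left(\frac{x - z}{\sqrt{\delta}}\right)\mathrm{d}z,
\]
and analogously $g_\delta$ under the alternative. An aggregated likelihood ratio test over a grid $\delta_1 < \cdots < \delta_K$ could then, for instance, reject $f_0$ when
\[
\max_{1 \le k \le K}\; \sum_{i=1}^{n} \log\frac{g_{\delta_k}(X_i)}{f_{\delta_k}(X_i)} \;\ge\; c_\alpha,
\]
with $c_\alpha$ calibrated so that the test has level $\alpha$.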