sounds {BayesMallows}	R Documentation

Sounds data

Description

Data from an experiment in which 46 individuals compared 12 different sounds (Barrett and Crispino 2018). Each assessor was asked repeatedly to compare a pair of sounds, indicating which of the two sounded more like it was generated by a human. The pairwise preferences of each assessor are in general non-transitive. These data inspired the Mallows model for non-transitive pairwise preferences developed by Crispino et al. (2019).

Usage

sounds

Format

An object of class data.frame with 1380 rows and 3 columns.
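The dimensions can be checked directly after loading the package; a minimal inspection sketch (the exact column names are not documented above and should be verified with `head()` or `str()`):

```r
# Load the package; the sounds data frame is lazy-loaded with it.
library(BayesMallows)

# 1380 rows and 3 columns, as stated in the Format section.
# Note that 1380 = 46 assessors x 30 pairwise comparisons per assessor.
dim(sounds)

# Inspect the column names and the first few comparisons.
str(sounds)
head(sounds)
```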

References

Barrett N, Crispino M (2018). “The impact of 3-D sound spatialisation on listeners' understanding of human agency in acousmatic music.” Journal of New Music Research, 47(5), 399–415. doi:10.1080/09298215.2018.1437187.

Crispino M, Arjas E, Vitelli V, Barrett N, Frigessi A (2019). “A Bayesian Mallows approach to nontransitive pair comparison data: How human are sounds?” The Annals of Applied Statistics, 13(1), 492–519. doi:10.1214/18-aoas1203.

See Also

Other datasets: beach_preferences, bernoulli_data, cluster_data, potato_true_ranking, potato_visual, potato_weighing, sushi_rankings


[Package BayesMallows version 2.2.5 Index]