make_learning {manynet}    R Documentation
Making learning models on networks
Description
These functions allow learning games to be played upon networks.
- play_learning() plays a learning model upon a network.
- play_segregation() plays a Schelling segregation model upon a network.
Usage
play_learning(.data, beliefs, closeness = Inf, steps, epsilon = 5e-04)
play_segregation(
  .data,
  attribute,
  heterophily = 0,
  who_moves = c("ordered", "random", "most_dissatisfied"),
  choice_function = c("satisficing", "optimising", "minimising"),
  steps
)
Arguments
.data: An object of a manynet-consistent class.

beliefs: A vector indicating the probabilities nodes put on some outcome being 'true'.

closeness: A threshold at which beliefs are too different to influence each other. By default Inf, so that all beliefs can influence each other.

steps: The number of steps forward in learning. By default the number of nodes in the network.

epsilon: The maximum difference in beliefs accepted for convergence to a consensus.

attribute: A string naming some nodal attribute in the network. Currently only tested for binary attributes.

heterophily: A score ranging between -1 and 1 as a threshold for how heterophilous nodes will accept their neighbours to be. A single proportion means this threshold is shared by all nodes, but it can also be a vector the same length as the number of nodes in the network for issuing different thresholds to different nodes. By default this is 0, meaning nodes will be dissatisfied if more than half of their neighbours differ on the given attribute.

who_moves: One of the following options: "ordered" (the default) checks each node in turn for whether it is dissatisfied and whether there is an available space it can move to, "random" will check a node at random, and "most_dissatisfied" will check (one of) the most dissatisfied nodes first.

choice_function: One of the following options: "satisficing" (the default) will move the node to any coordinates that satisfy its heterophily threshold, "optimising" will move the node to the most homophilous coordinates, and "minimising" will move the node to the next nearest unoccupied coordinates, minimising the distance moved. These rules can be combined with who_moves; see the sketch after this table.
Learning models
The default is a DeGroot learning model,
but if closeness
is set to anything less than infinity,
this becomes a Deffuant model.
A Deffuant model is similar to a DeGroot model; however, nodes only learn
from other nodes whose beliefs are not too dissimilar from their own.
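For intuition, below is a minimal base-R sketch of a single update step in each model. The helper names degroot_step and deffuant_step, and the adj (binary adjacency matrix) and beliefs (numeric vector) arguments, are illustrative assumptions and not part of manynet's API.

# Illustrative sketch only, not manynet's internal implementation
degroot_step <- function(adj, beliefs) {
  diag(adj) <- 1                  # nodes also weigh their own current belief
  trust <- adj / rowSums(adj)     # row-normalise into a trust matrix
  as.vector(trust %*% beliefs)    # new beliefs are weighted averages of neighbours'
}

deffuant_step <- function(adj, beliefs, closeness) {
  diag(adj) <- 1
  # only neighbours whose beliefs are within `closeness` exert influence
  near <- abs(outer(beliefs, beliefs, "-")) <= closeness
  trust <- (adj * near) / rowSums(adj * near)
  as.vector(trust %*% beliefs)
}

Iterating such a step until no belief changes by more than epsilon roughly corresponds to the consensus criterion described in the Arguments above.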
References
DeGroot, Morris H. 1974. "Reaching a consensus." Journal of the American Statistical Association 69(345): 118-121. doi:10.1080/01621459.1974.10480137
Deffuant, Guillaume, David Neau, Frederic Amblard, and Gérard Weisbuch. 2000. "Mixing beliefs among interacting agents." Advances in Complex Systems 3(1): 87-98. doi:10.1142/S0219525900000078
Golub, Benjamin, and Matthew O. Jackson. 2010. "Naive learning in social networks and the wisdom of crowds." American Economic Journal: Microeconomics 2(1): 112-149. doi:10.1257/mic.2.1.112
See Also
Other makes: make_cran, make_create, make_ego, make_explicit, make_motifs, make_play, make_random, make_read, make_stochastic, make_write

Other models: make_play
Examples
# Play a learning model on the ison_networkers data,
# starting from random binary beliefs
play_learning(ison_networkers,
              rbinom(net_nodes(ison_networkers), 1, prob = 0.25))

# Set up a lattice with a binary attribute,
# leaving about 20% of positions unassigned (NA)
startValues <- rbinom(100, 1, prob = 0.5)
startValues[sample(seq_len(100), round(100 * 0.2))] <- NA
latticeEg <- create_lattice(100)
latticeEg <- add_node_attribute(latticeEg, "startValues", startValues)
latticeEg

# Play a Schelling segregation model on the lattice
play_segregation(latticeEg, "startValues", 0.5)

# Compare the lattice before and after segregation
# graphr(latticeEg, node_color = "startValues", node_size = 5) +
# graphr(play_segregation(latticeEg, "startValues", 0.2),
#        node_color = "startValues", node_size = 5)