People have always been influenced by the opinions of their acquaintances. Increasingly, through recommendations and ratings provided on all sorts of goods and services, people are also influenced by the opinions of people who are not even acquaintances. This ubiquity of opinion sharing has intensified interest in the concept of herding (or informational cascades), introduced in 1992. While agents in most previous works have only individualistic goals, this talk focuses on social influence among agents in two collaborative settings.
We consider agents that perform Bayesian binary hypothesis testing and, in addition to their private signals, observe the decisions of earlier-acting agents. In the first setting, each decision has its own corresponding Bayes risk. Each agent affects the minimum possible Bayes risk for subsequent agents, so an agent may have a mixed objective that includes her own Bayes risk and the Bayes risks of subsequent agents; we demonstrate the tension between being informative to other agents and being right in her own decision, and we show that she is more informative to others when she is open-minded. In the second setting, opinions are aggregated by voting, and all agents aim to minimize the Bayes risk of the team's decision. We show that social learning is futile when the agents observe conditionally independent and identically distributed private signals (but not merely conditionally independent private signals) or when the agents require unanimity to make a decision. Our experiments with human subjects suggest that when the opinions of people with equal qualities of information are aggregated by voting, the ballots should be secret. The experiments also raise questions about rationality and trust.
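To make the sequential setup concrete, the following is a minimal sketch, not the talk's actual model: agents face a binary hypothesis, each receives a private Gaussian signal (an assumed signal model), observes the votes of earlier-acting agents as a public belief, makes a Bayesian decision, and the team then decides by majority vote. The correctness probability `q` attributed to each observed vote is likewise an illustrative assumption.

```python
import math
import random

def agent_decision(public_prior, signal, noise_sd=1.0):
    """Bayesian binary decision from a private signal and a public prior.

    Assumed signal model (illustrative): signal = H + Gaussian noise,
    with H = 0 or 1. Returns (vote, posterior probability of H = 1).
    """
    l1 = math.exp(-((signal - 1.0) ** 2) / (2 * noise_sd ** 2))  # likelihood under H=1
    l0 = math.exp(-(signal ** 2) / (2 * noise_sd ** 2))          # likelihood under H=0
    posterior = l1 * public_prior / (l1 * public_prior + l0 * (1 - public_prior))
    return (1 if posterior >= 0.5 else 0), posterior

def run_team(true_h, n_agents=5, q=0.7, noise_sd=1.0, seed=0):
    """Sequential agents observe earlier votes; the team decides by majority.

    q is the assumed probability that an individual vote is correct,
    used to fold earlier decisions into the public belief.
    """
    rng = random.Random(seed)
    public_prior = 0.5  # uninformative prior on H = 1
    votes = []
    for _ in range(n_agents):
        signal = float(true_h) + rng.gauss(0.0, noise_sd)
        vote, _ = agent_decision(public_prior, signal, noise_sd)
        votes.append(vote)
        # Earlier-acting agents' decisions shift the public belief:
        # treat each observed vote as correct with probability q.
        like1 = q if vote == 1 else 1 - q
        like0 = 1 - q if vote == 1 else q
        public_prior = like1 * public_prior / (
            like1 * public_prior + like0 * (1 - public_prior))
    majority = 1 if sum(votes) > n_agents / 2 else 0
    return votes, majority

votes, team_decision = run_team(true_h=1, n_agents=5, seed=0)
print(votes, team_decision)
```

Note how the public belief can snowball: once early votes push `public_prior` far from 1/2, later agents' private signals rarely overturn it, which is the herding effect the talk examines; with open ballots and equal signal qualities, this cascade is exactly why the human-subject experiments favor secret voting.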