Telecommunications System & Management

ISSN: 2167-0919

Open Access

The Foundations of Robosociology: Values and the Aggregate Behaviors of Synthetic Intelligences

Adam Alonzi

Abstract

Outcomes at the macro level often cannot be accurately extrapolated from the micro-behaviors of individual agents. The interdependence of a complex system’s components makes simulation a viable option for exploring cause-and-effect relationships within it (Miller and Page, 2009). Chaos theory emphasizes the sensitivity of such networks to initial conditions (Boccaletti, 2000), which strongly suggests that thought should be put into the architecture of an AGI “society” before it begins to take shape. Protocols for emergency interventions should certainly be in place, but the network itself should be robust enough from the outset to handle sudden deviations from basic ethical precepts by one or more of its members. Outside of its context, and without any information about the parts to which it is connected, a cell or leaf or animal can be studied, but not understood in any meaningful way (Mitchell, 2009). Creating moral agents in a hyperconnected world will involve modeling their interactions with entities like and unlike themselves in the face of both predictable and unforeseen events. Such modeling is necessary because groups can behave differently from their individual parts (Schelling, 1969). Keeping AI friendly does not end with giving each AI a set of maxims before letting it loose; it also requires satisfactorily accounting for the emergent phenomena that arise from the interactions of similarly or differently “educated” machines. Because synthetic intelligences will almost certainly communicate rapidly and regularly, it is imperative that thought leaders in AI safety begin thinking about how groups of artificially intelligent agents will behave.
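To make the abstract's point about emergence concrete, the following is a minimal Python sketch, not taken from the paper, of a Schelling-style agent model of the kind referenced above (Schelling, 1969). The grid size, empty-cell fraction, and 30% tolerance threshold are illustrative assumptions; the sketch only shows how a mild individual preference can produce group-level clustering well beyond what any single agent "wants".

```python
# Illustrative sketch (assumptions only): a Schelling-style model showing that
# group-level outcomes need not follow from individual agents' micro-behaviors.
import random

SIZE = 20          # grid dimension (assumed)
EMPTY_FRAC = 0.1   # fraction of empty cells (assumed)
THRESHOLD = 0.3    # an agent is content if >= 30% of its neighbors share its type

def make_grid():
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        cells.append(None if r < EMPTY_FRAC else ("A" if r < (1 + EMPTY_FRAC) / 2 else "B"))
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbors(grid, x, y):
    """Occupied Moore neighbors on a wrap-around grid."""
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = (x + dx) % SIZE, (y + dy) % SIZE
            if grid[nx][ny] is not None:
                out.append(grid[nx][ny])
    return out

def unhappy(grid, x, y):
    agent = grid[x][y]
    if agent is None:
        return False
    nbrs = neighbors(grid, x, y)
    if not nbrs:
        return False
    return sum(n == agent for n in nbrs) / len(nbrs) < THRESHOLD

def step(grid):
    """Move one randomly chosen unhappy agent to a random empty cell."""
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE) if unhappy(grid, x, y)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    if not movers or not empties:
        return False
    x, y = random.choice(movers)
    ex, ey = random.choice(empties)
    grid[ex][ey], grid[x][y] = grid[x][y], None
    return True

def mean_similarity(grid):
    """Average fraction of like-typed neighbors (a crude segregation index)."""
    scores = []
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is None:
                continue
            nbrs = neighbors(grid, x, y)
            if nbrs:
                scores.append(sum(n == grid[x][y] for n in nbrs) / len(nbrs))
    return sum(scores) / len(scores)

if __name__ == "__main__":
    random.seed(1)
    grid = make_grid()
    print("initial similarity: %.2f" % mean_similarity(grid))
    for _ in range(20000):
        if not step(grid):
            break
    # Even with a 30% individual preference, neighborhood similarity typically
    # rises far above 30%: a macro pattern no single agent's rule specifies.
    print("final similarity:   %.2f" % mean_similarity(grid))
```

The same agent-based approach generalizes to the abstract's concern: replace "like-typed neighbor" preferences with ethical or behavioral rules for synthetic agents, and the simulation becomes a testbed for how locally "friendly" rules aggregate at the network level.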
