Minding Norms (Hunting for Norms)

John Bradford (Author)

Tags: norms, society, sociology, social norms

Model was written in NetLogo 6.0.2.

WHAT IS IT?

This simulation compares how different types of agents (social conformers and norm detectors) converge on a particular action while interacting across multiple settings. Social conformers adopt the most popular action in a given situation. Norm detectors both observe the actions of other agents and send and receive messages about those actions. Norm detectors recognize an action as a norm if and only if: (a) the observed compliance with the action (i.e. the % adopting the act in a social setting) exceeds their personal threshold, and (b) the accumulated force of messages (i.e. the 'message strength') concerning that action exceeds 1. Once an action is regarded as a norm for a given social setting, a norm detector will adopt it regardless of what other agents are doing, although it is possible for norm detectors to have multiple norms for a given setting.

This is a replication of the model from chapter 7 entitled "Hunting for Norms in Unpredictable Societies" in Minding Norms: Mechanisms and Dynamics of Social Order in Agent Societies, Eds. Rosaria Conte, Giulia Andrighetto, Marco Campenni 2014, Oxford University Press.

HOW IT WORKS

In this model there are two types of agents: SOCIAL CONFORMERS (SCs) and NORM DETECTORS (NDs). Agents interact in different situations or scenarios, determined by the parameter "settings." In the original model, there are 4 situations, each of which has 3 possible actions: 2 actions unique to that setting, and 1 action available in all settings. In this model, the numbers of unique actions and universal actions are set by the parameters "actions_per_setting" and "universal_actions," respectively.

Social conformers have no memory and adopt the most frequently chosen action by agents in their particular setting. Norm detectors have memory and select their action based on a salient norm (See below). All agents have the following attributes:

  1. agenda = personal agenda (sequence of settings randomly chosen)
  2. time_allocation = time of performance in each scenario
  3. vision = window of observation (capacity for observing and interacting with fixed number of agents)
  4. setting = which social setting the agent occupies at a given time; determines who the agent can interact with.
  5. threshold = between 0 and 1 (salience); "frequency of the corresponding normative behaviors observed; i.e. the percentage of the compliant population" (p. 100)

NDs receive input from TWO sources: BEHAVIORS and MESSAGES. Messages are directed links sent to and from other NDs with two attributes: (1) the content (WHAT is said), i.e. the action of the sender (who communicates to other agents via message links about that action), and (2) the means of conveying this content (HOW it is communicated). The 'HOW' attribute refers to the strength of the message, labelled "m." Varying message strengths are supposed to simulate different forms of persuasion. The original text discusses ASSERTIONS; REQUESTS; DEONTICS (evaluations of actions as good/acceptable or bad/unacceptable); and VALUATIONS, assertions about what is right or wrong: "Every time a message containing a deontic (D) is received ... or a normative valuation (V) ... it will directly access the second layer of the architecture, giving rise to a candidate normative belief" (p. 99). In this model, different kinds of normative messages are simulated by the varying strengths (HOWs) of those messages. We improve upon the original model by using continuous values rather than discrete values. For example, "assertions" are simulated by messages with low m values, whereas normative valuations are simulated by messages about actions with high m values.

ND ROUTINE: I. Update Messages

1a. Send random out-message to another agent in the situation. Strength (HOW) is set between "forget" and 1. Forget = 1 / memory.

1b. Pick a random in-message (if any available), record the action (WHAT) and strength (HOW).

1c. Update row 2 (the third row) of the 'working memory' matrix (m, the message strength for each action). The update is currently produced by the following line: "let new_m old_m ^ 2 + r2", which means the new 'strength' (i.e. salience or accumulation) of messages about an action is equal to the previous strength for that action (0 to 1) squared, plus the strength of the new message. For example, agent i receives a message from agent j while they are in situation 2 about action 23 (the third action in situation 2). The strength (HOW) of this message is .3. If the previous strength for action 23 was .5, the new strength will be: .5^2 + .3 = .55. The new strengths are recorded in row 2 of the working_memory matrix.
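As a language-agnostic illustration, this update rule can be sketched in Python (the function name is ours, not from the model):

```python
def update_message_strength(old_m, r2):
    """Accumulate message strength for an action: the previous strength
    squared, plus the strength (HOW) of the newly received message."""
    return old_m ** 2 + r2

# Worked example from the text: previous strength .5, incoming message .3
print(update_message_strength(0.5, 0.3))
```

Squaring the old strength means weak accumulated strengths (below 1) decay on their own unless reinforced by fresh messages.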

II. Setup norm board: IF v > threshold AND m > 1, THEN store the action as a norm in the "norm_board". v_a = OBSERVED COMPLIANCE, i.e. the percentage of observed agents in situation s performing action a (row 0 / row 1 of the working_memory matrix). [See the note in the 'details' section about how OBSERVED COMPLIANCE is actually calculated; there are many possibilities for this.] NEXT, if the strength (m, row 2 of the working_memory matrix) of the messages about this action is greater than 1, then it automatically becomes a new norm. Thus, observing other agents perform an action is by itself insufficient for that action to become a norm; agents must also receive messages about it. Currently the threshold value for m (strength) is 1, but this should be varied. Right now, m is the accumulated history of 'HOW's pertaining to a particular action, which updates the working_memory matrix as explained in the step above.
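The two-part test can be sketched as follows (a minimal Python illustration; the function name is ours):

```python
def is_norm(observed_compliance, message_strength, threshold):
    """An action is stored on the norm board only if BOTH conditions hold:
    (a) observed compliance v = c_a / n exceeds the agent's threshold, and
    (b) accumulated message strength m exceeds 1."""
    return observed_compliance > threshold and message_strength > 1

print(is_norm(0.8, 1.2, 0.5))  # True: widely observed AND strongly messaged
print(is_norm(0.8, 0.9, 0.5))  # False: observation alone is insufficient
```

Note that both channels are necessary: high compliance with weak messages, or strong messages about a rarely observed action, produce no norm.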

III. FORGETTING (NDs). For NDs, in between setting up the norm board and selecting an action is a procedure called "nds_action_forgetting." This reduces the strength of m (messages, row 2 of the working_memory matrix) over time by a constant factor. In the future, an exponential function may be implemented. Right now, m for every action is weakened by a factor of f, where f = 1 / memory. So, if memory is set to 5, m for each action is weakened by .2 each tick. To compensate for this, the reporter "m_strength" for new out-messages is set between f and 1. So, if memory is only 2, m is weakened by .5 each turn, but every new message received also carries a strength between .5 and 1.
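The decay step amounts to subtracting f from every accumulated strength and flooring at zero, as in this Python sketch (helper name ours):

```python
def forget_step(m_values, memory):
    """Decay every action's accumulated message strength by f = 1/memory,
    flooring at 0 so strengths never go negative."""
    f = 1.0 / memory
    return [max(0.0, m - f) for m in m_values]

# memory = 4, so each strength loses .25 per tick
print(forget_step([1.0, 0.25, 0.1], 4))  # [0.75, 0.0, 0.0]
```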

IV. Choose action: If the agent's norm board is non-empty, the agent chooses the action (valid in that situation) from its norm board with the highest "m" (strength) value; if the norm board is empty, it acts like a social conformer.
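The selection step can be sketched as follows (a hypothetical Python helper; in the model the fallback is to behave like an SC rather than return a sentinel):

```python
def select_action(norm_board, allowed_actions, m_by_action):
    """Among norms valid in the current setting, pick the one with the
    highest accumulated strength m; report None when no norm applies
    (the model then falls back to conformist behavior)."""
    candidates = [a for a in norm_board if a in allowed_actions]
    if not candidates:
        return None
    return max(candidates, key=lambda a: m_by_action.get(a, 0.0))

# setting 2 allows its unique actions 21, 22, 23 plus a universal action
print(select_action([21, 23], [21, 22, 23, 1], {21: 0.4, 23: 0.9}))  # 23
```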

V. Action history. NDs have an "action_history" list which records their previous actions up to length "memory." If the length exceeds their memory, then the most distant action is removed. Currently, the action history is just a recording device and has no functionality.

DETAILS

I. HOW IS OBSERVED COMPLIANCE MEASURED? In some ways, "salience" is presupposed, because only in certain settings are some actions possible. Thus, we presume that only some actions are "salient" given the social setting! Here 'salience' is given by the "frequency of the corresponding normative behaviors observed; i.e. the percentage of the compliant population" (p. 100). This is ambiguous. I can think of at least 4 ways that observed compliance might be modeled. First, we can use either absolute or relative frequencies of compliance, i.e. the threshold for agents may correspond to absolute numbers or to percentages. Second, we can use frequencies of compliance for agents in that given setting or for all agents across all settings, which changes things radically! Cross-tabulating yields 4 possibilities.

Here we opt for a 5th measure: the relative compliance observed in a situation over the entire duration. In other words, we calculate TOTAL COMPLIANCE as follows: let c_a = agents compliant with action a, and n = total agents in a given setting. TC = [c_a(t0) + c_a(t1) + c_a(t2) + ...] / [n(t0) + n(t1) + n(t2) + ...]
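In Python, this running ratio looks like (function name ours):

```python
def total_compliance(compliant_per_tick, present_per_tick):
    """TC: all compliant observations summed over the whole run, divided
    by all agents observed in the setting over the whole run."""
    return sum(compliant_per_tick) / sum(present_per_tick)

# 3, 4, then 5 compliant agents out of 10 present at each of three ticks
print(total_compliance([3, 4, 5], [10, 10, 10]))  # 0.4
```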

Moreover, this can be done in two ways: (1) each agent observes only 1 other agent at a time, so the relative frequencies are updated by 1 each tick; or (2) each agent observes, all at once, all of the compliant agents for each action in a given setting, and the relative frequencies are updated en masse, using the same procedure as before. We here choose 'one at a time' to keep the numbers low.

II. The statistic "most popular action" is a little misleading because it compares universal actions, which all turtles can perform, with actions embedded in specific situations, which only a fraction of turtles can perform. For example, if you check which actions turtles adopt initially, more of them will adopt one of the "universal actions" (e.g. action 1) only because that option is available to all turtles, whereas the other actions are available only to the turtles currently in the corresponding setting.

III. The actions are labeled in the "actions_m" matrix and "actions_l" list as follows: universal actions are labelled starting from 1; actions unique to setting 1 are labelled from 11 to 19; actions unique to setting 2 are labelled 21 to 29; actions in setting 3, 31 to 39; and so on. (Using matrices might have been an unnecessary complication, but it worked out nicely.) Each ND has a matrix called "working_memory" with 3 rows and j columns. Each column represents a possible action (in that given setting). We keep track of the column index (0 to j) and match it up with the position in the actions_m matrix and the actions_l global list. Row 0 of the working_memory matrix is the number of agents observed performing action j; row 1 is the total number of agents observed in that setting. Row 0 is the numerator, and row 1 is the denominator: row 0 / row 1 = the percentage of agents in a setting performing action j. The working memory is always reset when changing social settings! Row 2 (i.e. the third row) of the working_memory matrix is the message strength row.
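The labelling scheme can be reproduced with a small Python sketch (the helper name is ours; universal actions are numbered from 1, as in the code's setup_actions procedure):

```python
def action_labels(settings, actions_per_setting, universal_actions):
    """Per-setting action lists: universal actions 1..universal_actions
    shared by every setting, plus actions s*10 + 1 .. s*10 + actions_per_setting
    unique to setting s."""
    universal = list(range(1, universal_actions + 1))
    return {s: [s * 10 + a for a in range(1, actions_per_setting + 1)] + universal
            for s in range(1, settings + 1)}

# The book's configuration: 4 settings, 2 unique actions each, 1 universal action
print(action_labels(4, 2, 1))  # {1: [11, 12, 1], 2: [21, 22, 1], 3: [31, 32, 1], 4: [41, 42, 1]}
```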

IV. Finally, the interpretation given to this model can be challenged. In the original model, NDs converge on the one 'universal action' available across all social situations. The authors cite "standing in line" as an example, or "answering when called upon." They argue that NDs learn norms faster than SCs, but the problem is that they are simply defining 'norms' as any action common to multiple settings. In real life, universal actions and situation-specific actions are not mutually exclusive, but rather presuppose each other. In their model, if agents choose 'standing in line,' they do so instead of doing whatever else they were going to do; i.e. agents stand in line instead of doing the thing they are waiting in line for, the very action that makes the situation unique in the first place. Put another way, if all agents choose the universal action in every setting, then there are no longer multiple settings!

THINGS TO NOTICE

The main finding of this model is that agents capable of internalizing and memorizing salient social norms (norm "immergence" similar to second-order emergence or awareness) are better at converging on behaviors than are simple social conformers.

This works for the conditions established in the original study, in which there are 4 situations with 2 possible unique actions each and 1 universal action. But does this also hold true when there are many universal actions, zero universal actions, or many possible situation-specific actions? The results seem to critically depend upon the existence of 'universal actions' not specific to any particular situation.

EXTENDING THE MODEL

PARTNER SELECTION: Right now, messages are randomly received: agents communicate their own actions to randomly targeted agents at randomly varying strength (m). Agents do not select the partners with whom they communicate. Nor do agents communicate indirect, third-person information about others; they only communicate their own actions. It would be fascinating to see what happens when action and communication are differentiated.

NETLOGO FEATURES

This model uses the matrix extension.

RELATED MODELS


CREDITS AND REFERENCES

"Hunting for Norms in Unpredictable Societies" in Minding Norms: Mechanisms and Dynamics of Social Order in Agent Societies, Eds. Rosaria Conte, Giulia Andrighetto, Marco Campenni, 2014, Oxford University Press.

Note About this Model

This model uses the matrix extension and so doesn't currently run in NetLogo Web.

extensions [matrix]
directed-link-breed [messages message]
messages-own [what how]
breed [NDs ND] ; ND = norm detectors
breed [SCs SC];  SC = Social Conformers
globals [
  actions_l  ; actions list
   actions_m  ; actions matrix
   forget
   t  ;; total actions
   ]
turtles-own [
  agenda ; sequence for each social setting; in this case, each setting visited no more than once, thus it is a PATH (not a trail or a random walk)
  time_allocation ; percentage time distributed across each setting, summing to number_of_ticks (100%)
  time_points ; list of ticks at which setting changes for agent, the running sum of time_allocation
  ; max_partners = # of potential interaction partners (may be constant or vary)
  setting ; attached to each agent, indicates which social setting the agent occupies at a given time
  setting_history
  counter ;; records the current item in time_points
  ;NOTE:  "SETTING" = THE CURRENT SITUATION OF THE TURTLE AT TIME T.  "SETTINGS" IS THE GLOBAL PARAMETER SPECIFYING HOW MANY TOTAL SETTINGS EXIST.
  action_history ; records actions of the agent - agents will observe the most recent action of agents in its particular setting
  norm_board
  working_memory ;; = working memory; observed behaviors or messages of others are stored here until time "memory"
  threshold ;; SALIENCE; "frequency of the corresponding normative behaviors observed; i.e. the percentage of the compliant population" (p. 100) -
  ACTION
  ]

to setup

 clear-all
 reset-ticks

 set-default-shape turtles "person"

 let pop population
 let pop_nd round ((.01 * Percentage_ND) * population)
 create-NDs pop_nd
 let pop_sc population - (count NDs)
 create-SCs pop_sc

 set forget 1 / memory
 set t actions_per_setting + universal_actions

 ask NDs [set color blue]
 ask SCs [set color red]


  ask turtles [

  set size 1.5
 let close min-one-of other turtles [distance myself]
 while [distance close < 1]
[let r random 360
  set heading r
  fd 1
  set close min-one-of other turtles [distance myself]
  ]

  ]
setup_actions

setup_attributes

setup_WM  ; working memories
end 

to setup_WM
  ;; row 0 = c_a (observed compliant actions)
  ;; row 1 = n (observed agents); in this model each agent observes only 1 per tick, so every column updates + 1 per tick
  ;; row 2 = m (message strength); accumulates over time; decayed by nds_action_forgetting
  ask nds [
    set working_memory matrix:make-constant 3 t 0
    set norm_board []
  ]
end 

to setup_actions
  ;; creating ACTIONS
;; actions 0, 1, 2, etc for common/universal actions
;; actions 11, 12, 13, etc. for scenario 1;  21, 22, 23... for scenario 2, and so on.
;;  first, create a global list of possible actions.. then, have each agent choose one randomly and record it, depending on their situation.
let s settings
let hlist [] let b 11
repeat s [let nlist n-values t [ d_i -> d_i + b ] set hlist lput nlist hlist  set b b + 10]
set actions_m matrix:from-row-list hlist

set actions_l []
let i 1
let i2 actions_per_setting
repeat universal_actions [
let ulist n-values s [i]
matrix:set-column actions_m i2 ulist
set i i + 1
set i2 i2 + 1
]

let i3 1   ;; because the a_list procedure below subtracts 1
repeat s [
  let alist a_list i3
set actions_l lput  alist actions_l
set i3 i3 + 1
]

set actions_l reduce [ [?1 ?2] -> (sentence ?1 ?2) ] actions_l
set actions_l remove-duplicates actions_l
set actions_l sort actions_l
;show actions_l
end 

to setup_attributes
  ask turtles [set threshold random-float .7]  ;  thresholds are between 0 and 70%.

  set_agenda
  set_time
end 

to set_agenda

ask turtles [
  let s settings
  let s_list []
  let i 1
  while [length s_list < s] [set s_list lput i s_list set i i + 1] ;; creates a list 1 --> n, # settings
set agenda []
while [length s_list > 0] [
let n one-of s_list
set s_list remove n s_list
set agenda fput n agenda ]
    ]

ask turtles [
  set setting item 0 agenda]
end 

to set_time
 ;; must distribute available ticks to each social setting
; here I need to distribute the ticks over s settings, creating a list "time_allocation"
; To do this, I go over each position in the list, deciding with 50-50 probability whether to add 1 or 0, until all of the ticks are gone.
let s settings

ask turtles[
  set action_history []
  let n number_of_ticks
  set time_allocation []
  repeat s [set time_allocation fput 0 time_allocation]
  let i 0 ; item # in list
  while [n > 0] [
      let iv item i time_allocation + 1
      let p random 2 ;  creates 0 or 1
      if p > 0 [set time_allocation replace-item i time_allocation iv
        set n n - 1
             ]
      ifelse i >= (s - 1) [set i 0] [set i i + 1]
  ]
  set time_points []
  set setting_history []
  set counter 0 ; item 0 in time_points
  set setting item counter agenda

    ;;setting random action corresponding to initial setting
  let row setting - 1 ;corresponds to row in actions matrix
  let action_p matrix:get-row actions_m row
  set action one-of action_p
  set action_history lput action action_history
]


ask turtles [
  let i 0
  repeat s - 1 [
  let new_list sublist time_allocation 0 (i + 1)
  let new_total sum new_list
  set time_points lput new_total time_points
  set i i + 1

  ]
  set time_points lput number_of_ticks time_points

]
end 

to start
  ifelse ticks >= number_of_ticks [stop]

  [


move_to_group ; [code taken from "Grouping Turtles Example"]
interact
set_setting

   tick
  ]
update_plots
end 

to interact
  ask-concurrent turtles [
    ifelse breed = nds [nds_action] [scs_action]
  ]
end 

to nds_action
;; each helper procedure looks up this turtle's own setting as needed

nds_action_update_denominator
nds_action_update_numerator
nds_action_update_messages
nds_action_setup_norm_board
nds_action_forgetting
nds_action_select
end 

to-report alters [scs?]   ;; if scs? = 1, then possible partners = all turtles; if = 0, then only nds.
   let s [setting] of self
   let partners nobody
   ifelse scs? = 1 [set partners other turtles with [setting = s]]
   [set partners other nds with [setting = s]]

  ;; "with" reports a (possibly empty) agentset, never nobody, so test emptiness with any?
  ifelse any? partners [report one-of partners] [report self]
end 

to-report a_list [s] ;; reports the available actions in a setting ('situation')

  let sit s - 1
  report matrix:get-row actions_m sit
end 

to-report m_strength  ;; may want to tweak
report random-in-range forget 1
end 

to nds_action_update_denominator
  ;;updating denominator, row 1 (i.e. the second row)
  let update_d matrix:get-row working_memory 1  ;; get the values of the 'n' row making a list..
  let new_d map [ ?1 -> ?1 + 1 ] update_d
  matrix:set-row working_memory 1 new_d
end 

to nds_action_update_numerator  ;; can observe actions of ALL TURTLES (not just NDS)
  ;;updating numerator (row 0, i.e. first row) ;; observed action
  ;; must record the position of this action, from the actions_m, so we update the WM in the right column
 let s [setting] of self
; let partner alters
 let alist a_list s  ;reporter
 let c_a [action] of alters 1
 let p position c_a alist ;; gets the column position of action c_a from the actions_m matrix, and then uses
  ;; that same position to update the working_memory column, row 0.

  ifelse member? c_a alist [
  let old_value matrix:get working_memory 0 p
  let new_value old_value + 1
  matrix:set working_memory 0 p new_value
  ]
  [ ]  ;; if values aren't legal, then skip...
end 

to nds_action_update_messages
  ;;updating messages, row 2; observed communications
  ;;agents with norms communicate messages!  w
  ;; right now, randomly assign arbitrary value to random column of row 2
  ;; UPDATE, RECEIVING MESSAGE REGARDING ACTION OBSERVED
 let s [setting] of self
 let sd s - 1
 let partner alters 1  ;; for NDs, alter is restricted to other NDs; SCs do not process messages
 if partner != self [  ;; alters reports self when no partner is available in this setting
  create-message-to partner [
   set what [action] of end1  ;;  setting the "WHAT" attribute as the action of the sender
   set how m_strength  ;; m_strength is a random variable
  ]
 ] ;; in this case, the turtle is SENDING a MESSAGE about its own current action

  let r1 random t ;; 0 to t-1  ; random action column
  let r2 m_strength ;; THESE VALUES WILL BE RANDOM ONLY IF TURTLE HAS NO IN-MESSAGES

  if count my-in-messages > 0 [  ;; SELECTING an incoming message regarding action and updating working memory
    let my_m one-of my-in-messages
    let my_what [what] of my_m  ;action
    let my_how [how] of my_m ; m, strength of message
    if member? my_what a_list s[  ;; if the message is an about an action in the current setting...
      set r1 position my_what a_list s   ;; DOUBLE CHECK, reporter
      set r2 [how] of my_m
    ]

  let old_m matrix:get working_memory 2 r1
  ifelse old_m < 1 [
    let new_m old_m ^ 2 + r2
    matrix:set working_memory 2 r1 new_m]
  [ ]  ;; otherwise do nothing, leave as is if above 1.
    ;matrix:set working_memory 2 r1 1]  ;; alternative:  setting values to 1 if not below 1
  ]
end 

to nds_action_setup_norm_board
 let s [setting] of self
 let sd s - 1

 ;; now, must calculate a new vector (row) that is (row 0) / (row 1), or v=c_a/n.  To do this, a new vector from each row must be created first.
 ;; Procedure, IF v > threshold AND m > 1, THEN store action as norm in "NORM_BOARD"

 let row0 matrix:get-row working_memory 0  ;; frequency
 let row1 matrix:get-row working_memory 1  ;; denominator (total cases)
 let row2 matrix:get-row working_memory 2  ;; message strength

 let freq (map / row0 row1)   ;; a new list, each item is c_a/n, for each action-  actions are recorded by their position in the list.
 foreach freq [ ?1 -> if ?1 > threshold [
     let th_a position ?1 freq  ;; position of the action crossing the threshold value
     let p_a item th_a row2  ;; check the strength of this action
     if p_a > 1 [
       let new_norm matrix:get actions_m sd th_a
       ;; records the action listed in the action_m matrix, in setting s (in row (s-1),) column th_a
     ifelse member? new_norm norm_board [] [set norm_board fput new_norm norm_board  ;; if it's new, record it as a new norm
       set norm_board remove-duplicates norm_board  ;; cleaning up
       set norm_board sort norm_board ;; cleaning up

       ]
 ]] ]
end 

to nds_action_forgetting  ;; this is to reduce the strength of m over time by a constant factor
  ;; every m value decays by forget (= 1 / memory) per tick, floored at 0

  let m_row matrix:get-row working_memory 2
  set m_row map [ ?1 -> max (list 0 (?1 - forget)) ] m_row

  matrix:set-row working_memory 2 m_row
end 

to nds_action_select
  let s [setting] of self
   let a s - 1 ;; = row for setting in actions_m matrix
  ;; prefers to select a norm in the given situation; if norm_board is empty, nds act like scs; another possibility is that they choose randomly
  ifelse empty? norm_board [scs_action]
  [
    let alist a_list s ;reporter
    let afilter filter [ ?1 -> member? ?1 norm_board ] alist  ;; this filters out all actions in the norm_board not appropriate for that setting

    if empty? afilter [set afilter alist]
  ;; choose afilter item with highest m score in working memory;
  ;; step 1, find positions of each in actions_m (row s - 1)
  ;; step 2, record values for identical positions in row2 of working memory\
  ;; step 3, highest value is selected...  find position for this value again
  ;; step 4, record value (i.e. action) for same position in actions_m (row s -1)
  ;; choose norm with highest m in working memory if more than one relevant norm


   let wm matrix:get-row working_memory 2
   let am matrix:get-row actions_m a

 ifelse length afilter > 1 [
    ;; e.g. actions 21 and 23 in setting 2 are in norm_board, how to choose between them?
    ;; procedure:  find highest m in row 2 of working_memory; record position and find corresponding action in actions_m (row s-1, col ?)
    ;; IF action(i) is member? of norm_board, then select action(i).
    ;; IF NOT, then repeat...

   let norm_positions []
  foreach afilter [ ?1 -> let p position ?1 am set norm_positions fput p norm_positions ]
  let wm_values []
  foreach sort norm_positions [ ?1 -> let v item ?1 wm set wm_values fput v wm_values ]
  let max_v max wm_values
  let max_p position max_v wm  ;;  be careful, if same values exist for multiple actions, then could run into problems
  let new_action item max_p am
  if member? new_action afilter [set action new_action]


  ]
  [
    set action one-of afilter
    ]
  ;set action
  set action_history fput action action_history

  ]

  forgetting
end 

to scs_action

  let s [setting] of self
  let my_action [action] of self
  let partners turtles with [setting = s]  ;including self
  let action_list []

 ; let a s - 1 ; corresponds to the row # with possible actions for that setting in the actions_matrix
  let alist a_list s
  let new_list [action] of partners
  let mode_list modes new_list ;; most frequently chosen action(s) among agents in this setting
  let cfilter filter [ ?1 -> member? ?1 alist ] mode_list  ;;VERY IMPORTANT!  This excludes popular actions that aren't allowed in this setting.
  if empty? cfilter [set cfilter filter [ ?1 -> member? ?1 new_list ] alist]  ;; fall back to any observed action legal here
  if empty? cfilter [set cfilter alist]
  let n_action one-of cfilter ;; ties between equally popular actions are broken at random
  set action n_action
  set action_history fput action action_history

  forgetting
end 

to forgetting

    ;; newest actions are fput to the front, so the oldest (most distant) action is the last item
    if length action_history > memory [set action_history but-last action_history]
end 

to set_setting   ;; moving turtle around asynchronously from situation to situation
    ; must find the item # in the time_points list corresponding to ticks
    ; if ticks > item 0, then go to item 1; if ticks > item 1, then go to item 2, and so on..
    ; until we reach the highest value in the list which is less than ticks
    ; then we record item #, and set setting = item i of agenda

    ; Example, turtle 0: agenda = [0 3 1 2]; time_allocation (out of 10) = [2 3 2 3]; time_points = [2 5 7]
    ; Suppose ticks = 8, then setting of turtle 0 will be 2.  Why?  Because ticks > item 2 on time_points,
    ; which means that we set the agenda to item #3 on agenda.  Item 3 = 2.  Therefore, setting for turtle 0 = 2.

  ask turtles [
let ti item counter time_points
if ticks > ti [
  set counter counter + 1
  set setting item counter agenda

  ;; NEED TO RESET WORKING MEMORIES!
  ;; NEED TO RESET MY-IN-MESSAGES:  in this model, communications are only allowed about actions available in the setting

  set working_memory matrix:make-constant 3 t 0
  ask my-in-messages [die]
]
  set setting_history lput setting setting_history

  ]
end 

to move_to_group
  ask-concurrent turtles [move-to get-home
      ;; wiggle a little and always move forward, to make sure turtles don't all
    ;; pile up
    lt random 5
    rt random 5
    fd 1
  ]
end 

;; figures out the home patch for a group. this looks complicated, but the
;; idea is simple. we just want to lay the groups out in a regular grid,
;; evenly spaced throughout the world. we want the grid to be square, so in
;; some cases not all the positions are filled.

to-report get-home ;; turtle procedure
  ;; calculate the minimum length of each side of our grid
  let side ceiling (sqrt (max [setting] of turtles + 1))

  report patch
           ;; compute the x coordinate
           (round ((world-width / side) * (setting mod side)
             + min-pxcor + int (world-width / (side * 2))))
           ;; compute the y coordinate
           (round ((world-height / side) * int (setting / side)
             + min-pycor + int (world-height / (side * 2))))
end 

to-report random-in-range [low high]
  report low + random-float (high - low)
end 

to-report SC-freq  ;;report how many choose most popular action
  let c count SCs
  let newlist []
  foreach sort actions_l
  [ ?1 -> let v count SCs with [action = ?1]
    set newlist lput v newlist ]
   let SC_max max newlist ;; this is how many SCs choose the most popular action among them
   let SC_p position SC_max newlist ;; this identifies the position on the list of the most popular action among SCs
   let SC_action item SC_p actions_l ;; the most popular action itself (unused here; see SC_pop_action)
   report SC_max
  ; report SC_action
end 

to-report SC_pop_action ;; most popular action among SCs
    let c count SCs
  let newlist []
  foreach sort actions_l
  [ ?1 -> let v count SCs with [action = ?1]
    set newlist lput v newlist ]
   let SC_max max newlist ;; this is how many SCs choose the most popular action among them
   let SC_p position SC_max newlist ;; this identifies the position on the list of the most popular action among SCs
   let SC_action item SC_p actions_l
   report SC_action
end 

to-report ND-freq  ;;report how many choose most popular action
  let c count NDs
  let newlist []
  foreach sort actions_l
  [ ?1 -> let v count NDs with [action = ?1]
    set newlist lput v newlist ]
   let ND_max max newlist ;; this is how many NDs choose the most popular action among them
   let ND_p position ND_max newlist ;; this identifies the position on the list of the most popular action among NDs
   let ND_action item ND_p actions_l ;; the most popular action itself (unused here; see ND_pop_action)
   report ND_max
  ; report ND_action
end 

to-report ND_pop_action ;; most popular action among NDs
    let c count NDs
  let newlist []
  foreach sort actions_l
  [ ?1 -> let v count NDs with [action = ?1]
    set newlist lput v newlist ]
   let ND_max max newlist ;; this is how many NDs choose the most popular action among them
   let ND_p position ND_max newlist ;; this identifies the position on the list of the most popular action among NDs
   let ND_action item ND_p actions_l
   report ND_action
end 

to update_plots
  set-current-plot "Convergence Rate"
  set-current-plot-pen "social conformers"
  let c1 count SCs
  if c1 = 0 [set c1 1]
  let f SC-freq
  let prcnt_sc (f / c1) * 100
  plot prcnt_sc

  set-current-plot-pen "norm detectors"
  let c2 count NDs
  if c2 = 0 [set c2 1]
  let f2 ND-freq
  let prcnt_nd (f2 / c2) * 100
 plot prcnt_nd
end 

There are 2 versions of this model, both uploaded by John Bradford: the initial upload, and an update converting the model to NetLogo 6.