Scientific Gems

Facts, ideas, and images from the shoreline of science.

Tag Archives: NetLogo

Sequent Calculus in NetLogo

Posted on May 19, 2015 by Tony

NetLogo programs sometimes require agents to reason, and one of the simplest formalisms for this is the sequent calculus. This uses expressions of the form α, β, γ, δ ⊢ ε, ζ, η, which are interpreted as α ˄ β ˄ γ ˄ δ ⇒ ε ˅ ζ ˅ η. That is, all the hypotheses α, β, γ, δ are assumed to be true, and from that it should follow that at least one of the conclusions ε, ζ, η is true.

We can resolve the truth of such sequents using rules which simplify occurrences of ¬ φ (not φ), φ ˄ ψ (φ and ψ), φ ˅ ψ (φ or ψ), and φ ⇒ ψ (φ implies ψ) occurring in the hypotheses and/or conclusions, until we are left with sequents containing only atomic formulae. Such a sequent is true if one of the atomic formulae on the left also occurs on the right (or, in more advanced versions of the logic, if the atomic formulae on the left somehow imply one of the atomic formulae on the right, in the way that x = y and y = z imply x = z).

The simplification rules are as follows. Note that some rules generate two new sequents, and that the rule for not flips things from the hypotheses to the conclusions or vice versa:
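
Writing Γ for the other hypotheses and Δ for the other conclusions, the rules (restated here directly from the NetLogo code below) are:

  • Γ, ¬ φ ⊢ Δ   becomes   Γ ⊢ φ, Δ
  • Γ, φ ˄ ψ ⊢ Δ   becomes   Γ, φ, ψ ⊢ Δ
  • Γ, φ ˅ ψ ⊢ Δ   becomes the two sequents   Γ, φ ⊢ Δ   and   Γ, ψ ⊢ Δ
  • Γ, φ ⇒ ψ ⊢ Δ   becomes the two sequents   Γ, ψ ⊢ Δ   and   Γ ⊢ φ, Δ
  • Γ ⊢ ¬ φ, Δ   becomes   Γ, φ ⊢ Δ
  • Γ ⊢ φ ˄ ψ, Δ   becomes the two sequents   Γ ⊢ φ, Δ   and   Γ ⊢ ψ, Δ
  • Γ ⊢ φ ˅ ψ, Δ   becomes   Γ ⊢ φ, ψ, Δ
  • Γ ⊢ φ ⇒ ψ, Δ   becomes   Γ, φ ⊢ ψ, Δ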

We can code these up easily in NetLogo, using the list [ "and" φ ψ ] to represent φ ˄ ψ, and so forth. The following function takes 4 lists (atomic and non-atomic hypotheses; atomic and non-atomic conclusions) and applies the simplification rules to the hypotheses (with some printing for debugging):

to-report sequent-prove-a [ atomic-hyp non-at-hyp atomic-conc non-at-conc ]  ; split hypotheses
  nice-print "a" atomic-hyp non-at-hyp atomic-conc non-at-conc ""
  ifelse (non-at-hyp = [])
    [ report sequent-prove-b atomic-hyp atomic-conc non-at-conc ]
    [ let p (first non-at-hyp)
      set non-at-hyp (but-first non-at-hyp)
      ifelse (is-string? p)  ; atomic
        [ report sequent-prove-a (fput p atomic-hyp) non-at-hyp atomic-conc non-at-conc ]
        [ ifelse (first p = "not")  ; not q
            [ let q (item 1 p)
              report sequent-prove-a atomic-hyp non-at-hyp atomic-conc (fput q non-at-conc) ]
            [ ifelse (first p = "and")  ; q /\ r
                [ let q (item 1 p)
                  let r (item 2 p)
                  report sequent-prove-a atomic-hyp (fput q (fput r non-at-hyp)) atomic-conc non-at-conc ]
                [ ifelse (first p = "or")  ; q \/ r
                    [ let q (item 1 p)
                      let r (item 2 p)
                      report (sequent-prove-a atomic-hyp (fput q non-at-hyp) atomic-conc non-at-conc)
                         and (sequent-prove-a atomic-hyp (fput r non-at-hyp) atomic-conc non-at-conc) ]
                    [ ifelse (first p = "implies")  ; same as (not q) \/ r
                        [ let q (item 1 p)
                          let r (item 2 p)
                           report (sequent-prove-a atomic-hyp (fput r non-at-hyp) atomic-conc non-at-conc)
                             and (sequent-prove-a atomic-hyp non-at-hyp atomic-conc (fput q non-at-conc)) ]
                        [ error "Unexpected logical" ]
                    ]
                ]
            ]
        ]
    ]
end

Once all hypotheses are atomic, the following function applies the simplification rules to the conclusions (for some cases it must call sequent-prove-a again):

to-report sequent-prove-b [ atomic-hyp atomic-conc non-at-conc ]  ; split conclusions
  nice-print "b" atomic-hyp [] atomic-conc non-at-conc ""
  ifelse (non-at-conc = [])
    [ report sequent-prove-c atomic-hyp atomic-conc ]
    [ let p (first non-at-conc)
      set non-at-conc (but-first non-at-conc)
      ifelse (is-string? p)  ; atomic
        [ report sequent-prove-b atomic-hyp (fput p atomic-conc) non-at-conc ]
        [ ifelse (first p = "not")  ; not q
            [ let q (item 1 p)
              report sequent-prove-a atomic-hyp (list q) atomic-conc non-at-conc ]
            [ ifelse (first p = "and")  ; q /\ r
                [ let q (item 1 p)
                  let r (item 2 p)
                  report (sequent-prove-b atomic-hyp atomic-conc (fput q non-at-conc))
                     and (sequent-prove-b atomic-hyp atomic-conc (fput r non-at-conc)) ]
                [ ifelse (first p = "or")  ; q \/ r
                    [ let q (item 1 p)
                      let r (item 2 p)
                      report sequent-prove-b atomic-hyp atomic-conc (fput q (fput r non-at-conc)) ]
                    [ ifelse (first p = "implies")  ; same as (not q) \/ r
                        [ let q (item 1 p)
                          let r (item 2 p)
                          report sequent-prove-a atomic-hyp (list q) atomic-conc (fput r non-at-conc) ]
                        [ error "Unexpected logical" ]
                    ]
                ]
            ]
        ]
    ]
end

Once both hypotheses and conclusions are atomic, checking truth just needs a simple intersection test:

to-report sequent-prove-c [ atomic-hyp atomic-conc ]
  let proved? false
  foreach atomic-conc [
    if ((not proved?) and member? ? atomic-hyp) [ set proved? true ]
  ]
  ifelse (proved?)
    [ nice-print "  c" atomic-hyp [] atomic-conc [] "   ** proved **" ]
    [ nice-print "  c" atomic-hyp [] atomic-conc [] "   failed!" ]
  report proved?
end

Printing requires some code to make sequents look nice:

to-report nice-item [ p ]
  ifelse (is-string? p)
    [ report p ]
    [ ifelse (first p = "not")
        [ let q (item 1 p)
          report (word "~ " (nice-item q)) ]
        [ ifelse (first p = "and")
            [ let q (item 1 p)
              let r (item 2 p)
              report (word "(" (nice-item q) " ^ " (nice-item r) ")") ]
            [ ifelse (first p = "or")
                [ let q (item 1 p)
                  let r (item 2 p)
                  report (word "(" (nice-item q) " v " (nice-item r) ")") ]
                [ ifelse (first p = "implies")
                    [ let q (item 1 p)
                      let r (item 2 p)
                      report (word "(" (nice-item q) " -> " (nice-item r) ")") ]
                    [ error "Unexpected logical" ]
                ]
            ]
        ]
    ]
end

to-report nice-list [ p ]
  ifelse (p = [])
    [ report "" ]
    [ report (reduce [ (word ?1 ", " ?2) ] (map nice-item p)) ]
end

to nice-print [ lab x y z w t]
  if (verbose?) [
    print (word lab ": " (nice-list (sentence x y)) " |- " (nice-list (sentence z w)) t)
  ]
end
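
As a minimal check (assuming the verbose? switch from the model's interface exists), we can ask the prover to verify A ˄ B ⊢ B ˅ C from the command centre:

show sequent-prove-a [] (list [ "and" "A" "B" ]) [] (list [ "or" "B" "C" ])
;; => true, because after simplification the atomic formula B appears on both sides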

For demonstration purposes, the go button computes some basic facts about two turtles, combines these with a user-supplied list of hypotheses, and tries to prove a user-supplied conclusion (in this case [ "and" [ "implies" "C" "F" ] [ "not" [ "not" "Far" ] ] ], representing (C ⇒ F) ˄ ¬ ¬ Far).

globals [
  computed-facts
  result?
]

to go
  if (verbose?) [ print "------------------------------------------------------------------------" ]
  clear-all
  ask patches [ set pcolor white ]
  create-turtles 1 [ set color blue set size 3 set shape "person" setxy random-xcor random-ycor ]
  create-turtles 1 [ set color red set size 2 set shape "person" setxy random-xcor random-ycor ]

  set result? "?"
  ask (turtle 0) [ compute-basic-facts ]

  let hypotheses (sentence computed-facts (read-from-string other-hypotheses))
  set result? sequent-prove-a [] hypotheses [] (list (read-from-string conclusion))
end

to compute-basic-facts  ; calculated by a specific turtle
  let fact1 (ifelse-value (xcor < 0) [ "Left" ] [ "Right" ])
  let d (distance (turtle 1))
  let fact2 (ifelse-value (d < 4) [ "Near" ] [ "Far" ])
  set computed-facts (list fact1 fact2)
end

The NetLogo interface is shown at the top of this page, and the code is at Modeling Commons. The simple sequent calculus reasoner presented here can be fairly easily extended with quantifiers and other features that might be necessary.

See also my other NetLogo tutorials.

Posted in Tutorials | Tagged Logic, NetLogo, Sequent Calculus, Simulation, Tutorials

List processing in NetLogo: some more examples

Posted on November 16, 2014 by Tony

I have previously posted a NetLogo lists tutorial, but I thought I’d post a few more NetLogo list examples, highlighting the primitives reduce, map, and n-values.

reduce

The reduce primitive converts the one-element list [ x ] to x, but more interesting is what reduce does with the list L = [ x1 x2 x3 … xn ]. Given a binary operator ⊕, L is converted to:

((x1 ⊕ x2) ⊕ x3) … ⊕ xn

Therefore reduce + L, equivalent to reduce [?1 + ?2] L, adds up all the elements of a list, just like sum L. However, we can invent our own binary operators. For example, the code reduce [10 * ?1 + ?2] L converts a list of digits like [ 2 6 0 4 3 ] to the corresponding integer (26043).
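
For instance, entering the following in the command centre (same task syntax as above) should print 26043:

show reduce [ 10 * ?1 + ?2 ] [ 2 6 0 4 3 ]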

A useful variation is reduce ⊕ (fput e L), which calculates:

(((e ⊕ x1) ⊕ x2) ⊕ x3) … ⊕ xn

This form allows the operator ⊕ to take arguments of two different kinds. Using a reversed version of fput, for example, gives a (slow) way of reversing a list:

reduce [ fput ?2 ?1 ] (fput [] lst)

map

The map primitive has two forms. Given a function f and a single list (map f L), the list
L = [ x1 x2 x3 … xn ] is converted to:

[ f x1    f x2    f x3  …  f xn ]

The execution is done left-to-right, and so the f xi can be linked together if f has side-effects. For example, consider the following code, which defines a reporter (function) add-to-total which adds its argument to a global variable, and then returns the value of that variable:

globals [ total ]

to-report add-to-total [ x ]
  set total (x + total)
  report total
end

If we initialise total to 0, then map add-to-total L (or map [ add-to-total ? ] L) will construct a list of partial sums of L, converting e.g. [ 2 6 0 4 3 ] to [ 2 8 8 12 15 ], as in the sketch below.
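
A minimal command-centre sketch of this, using only the reporter defined above:

set total 0
show map [ add-to-total ? ] [ 2 6 0 4 3 ]
;; => [2 8 8 12 15]

For a more complex example, we can perform the ISBN check-digit calculation I recently posted about: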

Given an ISBN in the form of a string, we can extract the last digit (the check digit) using last, converting it to a number with read-from-string if it is not “X”, and using but-last to get the other 9 digits:

let isbn "0936385405"
let chk (last isbn)
if (chk != "X") [ set chk (read-from-string chk) ]
let digits (but-last isbn)

To expand out these digits, we use n-values. This is a lot like map, except it takes a number n as input, forms the list [ 0 1 2 … n–1 ], and applies a function to the elements of that. Here we use n-values twice: once to index all the possible positions in the string digits, converting it to a list of numbers; and once to generate the list [ 10 9 8 … 2 ]:

set digits (n-values (length digits) [ read-from-string (item ? digits) ])
let multiplier (n-values 9 [ 10 - ? ])

We use map with two lists (in parentheses to indicate that there is more than one list) to multiply pairs of elements from these two lists, and sum to add up the products:

let computed (sum (map [ ?1 * ?2 ] digits multiplier))

Finally, the negation of this sum mod 11 is the required check digit (with 10 represented by “X”):

set computed ((- computed ) mod 11)
if (computed = 10) [ set computed "X" ]
ifelse (computed = chk)
	[ print (word isbn " is OK") ]
	[ print (word isbn " should end in " computed ", not " chk) ]
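
As a quick sanity check of the arithmetic for the ISBN above: 0×10 + 9×9 + 3×8 + 6×7 + 3×6 + 8×5 + 5×4 + 4×3 + 0×2 = 237, and (-237) mod 11 = 5 (NetLogo's mod always reports a non-negative result for a positive divisor), which matches the final digit of "0936385405".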

Posted in Tutorials | Tagged ISBN, NetLogo, Simulation, Tutorials

Simulating the World Solar Challenge

Posted on June 23, 2014 by Tony

Inspired by the recent guest post by Georg Russ at solarracing.org, I have built a (very simplistic) simulation of the World Solar Challenge in NetLogo. The model and associated map file can be downloaded here.

The image below shows a snapshot of the simulation (in which I have completely ignored control stops). There are three graphs on the left. The first graph plots energy from the solar panels (following the discussion by Georg Russ). The second graph plots the speed of Car 1 (the blue car, running at 75 km/h) as well as the battery state (as a percentage of full charge).

The third graph (in the style of my race charts for WSC 2013) plots distance from Darwin on the horizontal axis with time (relative to an 80 km/h baseline speed) on the vertical axis. In other words, a vertical position of 1 hour means that a car is running 1 hour behind the baseline speed. A steady 80 km/h speed would thus be indicated by a horizontal line, with faster speeds sloping downwards and slower speeds sloping upwards. The graph shows that Car 3 (pink) has been running ahead of the baseline speed, but only by draining its battery. A close examination of the graph shows that Car 3 has already been forced to slow down. This highlights the need to strategically choose car speed, for the reasons discussed by Georg Russ.


The afternoon of Day 1: Car 3 (pink) is leading, but has started to slow because of a drained battery (click to zoom)

For a video of the running simulation, see here.

One feature of the race is that the early-morning and late-evening sunshine can provide substantial charge, if the panels are tilted to face the sun. It will be interesting to see how the new WSC rules impact this practice.


Nuon Solar Team gathering sunshine during WSC 2013 (photo: Jorrit Lousberg)

The image below shows a second snapshot of the simulation on day 5, after Car 1 (blue) has won. Notice that the solar energy has varied with the (imaginary) weather conditions (top graph), causing Car 1 to slow down with a drained battery on the fourth day (middle graph). This highlights the importance of weather forecasting in race strategy – it would have been better to run at a sustainable steady speed. In 2013, Solar Team Twente was assisted in this regard by an attached military weather forecaster – who blogged his (Dutch) story here (robot-translated here).


The afternoon of Day 5: Car 1 (blue) has won (click to zoom)

To underscore the weather issue even more, here is a map (from the Australian Bureau of Meteorology) of solar exposure on the rainy fifth day of WSC 2013:

Below is my race chart for WSC 2013, where the baseline speed was 97 km/h (which equates to an overall average speed of 85 km/h when forced waits at control stops are included). Cruiser Class entries (shown as dashed lines) were treated exactly as if they were in the Challenger Class, which means that for those cars the chart tells us road position, but is not very helpful on speed. Since the vertical axis represents hours behind the baseline speed, arrival times can be read off on the right-hand scale. Clearly visible on the graph are a miscalculation by Team Tokai concerning race strategy on the rainy fifth day, and some problems experienced by Michigan and Punch Powertrain.

Posted in Remarks | Tagged NetLogo, Remarks, Simulation, World Solar Challenge

Revisiting Artificial Anasazi: a NetLogo tutorial (part 2)

Posted on June 10, 2014 by Tony

This post continues my tutorial revisiting the famous “Artificial Anasazi” model of Axtell et al. (2002), ported to NetLogo by Marco Janssen in 2008 (see his paper). I have rewritten the model for explanatory purposes (though very much inspired by Janssen’s work). Part 1 of the tutorial looked at the patch dynamics, and we now turn to modelling the people.

The model studies the population of Anasazi farmers living in Long House Valley (near Black Mesa in Arizona) between 800 and 1350 AD. The image below shows the NetLogo view of this valley, as modelled for 1000 AD, with each person representing a household. For simplicity, I am not modelling the availability of drinking water or the locations of dwellings, so the locations of the people on the diagram below represent farm locations. Because those factors are excluded, the spatial distribution of people should not be taken too seriously – I am only hoping to accurately model population numbers.

Anasazi society was matriarchal (households were headed by women) and matrilocal (daughters lived near their mothers). Since archaeology (counting and dating house ruins) gives us household numbers (rather than people numbers), the model operates at a household level. However, food consumption calculations are based on five people per household. Young women are assumed to marry at age 16, so that the possible history of a household is as follows:

Household age   Matriarch age   First possible daughter   Last possible daughter
 0 (begins)          16                  –                        –
 1                   17               0 (born)                    –
17                   33              16 (marries)                 –
18                   34                  –                        –
19                   35                  –                        –
20                   36                  –                     0 (born)
36                   52                  –                    16 (marries)
37                   53                  –                        –
38                   54                  –                        –

The earliest age that a household can reproduce (spin off a daughter household) is 17 (at which time the mother is aged 33). Fertility is assumed to continue to age 36 (based on the discussion in Axtell et al.), which means that the latest possible daughter household is produced at a household age of 36. Matriarchs are assumed to die after age 54, so that the oldest possible household is 38 years (which is perhaps a little old for a society of this kind).

The Anasazi were able to store corn (according to Axtell et al., for 2 years), which gave them the ability to ride out short droughts. Daughter households were gifted some of their mother’s corn (about one third), to get them started in life. This gives us the following constants for the model:

to setup
  ...
  set start-year           800        ;; initial year
  set food-requirement     (5 * 160)  ;; a person needs 160 kg of corn each year, while a typical household consists of 5 persons
  set fertility-age        17         ;; the minimum age of agents (households) that can reproduce (spin off daughter households)
  set fertility-end-age    36         ;; the maximum age of agents (households) that can reproduce
  set death-age            38         ;; the age after which agents (households) die (dissolve when the matriarch dies)
  set corn-gift-ratio      0.33       ;; each new household gets this fraction of the corn storage of the parent household
  set corn-stock-years     2          ;; corn can be stored for this many years
  ...
end

Households must make certain decisions. Initially, and in times of famine, they must choose a (new) farm. We introduce a reporter (function) to return an estimate of the productivity of a farm (patch). This reporter will be used twice, so it is good practice to give the calculation its own name. The result of the reporter is in fact just the base yield of a patch, but we could expand it to incorporate historical information and/or decision errors; giving the calculation its own name makes it easier to include such extensions down the track.

to-report estimated-farm-yield
  report base-yield
end

We use this estimate to generate an agent set of potential farms – those that produce enough food for a household and are not being farmed by anybody else:

to-report potential-farms
  report (patches with [ estimated-farm-yield >= food-requirement and not being-farmed? ])
end

The following procedure picks out the best farm from an agent set and moves the household there. If there are no suitable farms, the household is assumed to leave the valley, which is equivalent (from a model point of view) to dying. But what is the "best farm"? The matrilocal structure means that agents try not to move very far (i.e. they prefer a low distance from their existing location), but we also introduce a slight bias towards a high estimated farm yield (minimising 1000 / estimated-farm-yield). The NetLogo built-in min-one-of operator does the hard work of finding the patch with the lowest combined score. Incidentally, for distances to be calculated correctly, the NetLogo world must be set not to "wrap" either horizontally or vertically.

to find-best-farm [ farms ]
  ifelse (count farms = 0)
    [ die ]
    [ let existing-farm patch-here
      let best-farm (min-one-of farms [ distance existing-farm + 1000 / estimated-farm-yield ])
      ask best-farm [ set being-farmed? true ]
      move-to best-farm ]
end

According to the archaeological record, the population of the valley began with 14 households. We can initialise them with the code below (which assumes that households are what NetLogo calls a "breed"). We use a utility procedure initialise-household to set various attributes. This procedure takes a starting age as a parameter. Starting ages run from 0 to 28, but we will use the same procedure later for new-born households, which will always start with an age of zero. We also give each household an initial list of corn storage stocks, divided into ages of 2, 1, and 0 years. The oldest corn (at the front of the list) will be eaten first, and corn older than 2 years will be thrown away. The NetLogo built-in n-values operator generates a list of the required length, calling random-float three times.

to setup
  ...
  set-default-shape households "person"
	
  calculate-patch-yields  ;; needed to set up decision-making
  create-households 14 [
    initialise-household (random 29)
    set corn-storage (n-values (corn-stock-years + 1) [ 600 + random-float 400 ])
    find-best-farm potential-farms
  ]
  ...
end

to initialise-household [ start-age ]
  set age start-age
  set harvest 0
  set size 4
  set color black
end

Closely related code allows a household to reproduce. For efficiency reasons, we pass the set of potential farms as a parameter. The NetLogo built-in hatch operator allows an agent to create other agents. We also use the NetLogo built-in map operator twice – once to generate a list of gift corn, with each entry being one third of the corresponding entry in the parent household's list; and once to subtract the entries in the gift-corn list from the parent household's list (as a general rule, if a list is being transformed to another list of the same length, the map operator usually provides the most elegant NetLogo solution).

to do-reproduce [ farms ]
  let gift-corn (map [ ? * corn-gift-ratio ] corn-storage)
  set corn-storage (map [ ?1 - ?2 ] corn-storage gift-corn)
  
  hatch 1 [ 
    initialise-household 0
    set corn-storage gift-corn
    find-best-farm farms
  ]
end

We can now provide the code for a simulation step, although annual-household-activities and plot-interesting-data still need to be defined. There are two ask loops. In the first loop, agents set off in search of a new farm if they predict that their corn stocks (plus their expected future harvest) will not meet their annual food requirements (plus a 10% margin). In the second loop, households of fertile age reproduce with a probability given by the fertility slider. A local variable stores the result of the potential-farms calculation, to avoid recomputing it unnecessarily.

to go
  calculate-patch-yields
  annual-household-activities
    
  ask households [  ;; agents with insufficient food for next year try to find a new farm
    if (food-estimate < 1.1 * food-requirement) [
      ask patch-here [ set being-farmed? false ]
      find-best-farm potential-farms
    ]
  ]
  
  let farms potential-farms  ;; avoid recomputing this unnecessarily
  ask households [
    if (age >= fertility-age and age <= fertility-end-age and random-float 1 < fertility and count farms > 0) [ 
      do-reproduce farms
      set farms potential-farms
    ]
  ]
  
  plot-interesting-data
  if (year = 1350 or count households = 0) [ stop ]
  set year year + 1
  tick
end

to-report food-estimate  ;; estimate the amount of food available for next year, based on current stocks of corn, and an estimate of the future harvest
  let future-estimate harvest  ;; predict next year's harvest to be the same as this year's harvest
  report (future-estimate + sum (but-first corn-storage))  ;; ignore the corn on the front of the list, which is now too old to eat
end

The annual-household-activities procedure increments the age of households, has them eat their annual food quota, and kills them off if they are too old or have insufficient food. Points to note include:

  1. We set up a global variable harvest-list to be a list of this year’s harvests (this will later be plotted as a histogram).
  2. We supplement the spatial variability in farm quality with a temporal variability in annual harvests, re-using our apply-variability reporter.
  3. We maintain the household’s 3-element list of corn by dropping the oldest corn at the front, and adding the new harvest to the back.
  4. We introduce a household attribute unsatisfied-hunger to keep track of how much food the household still needs.
  5. We use a NetLogo trick to eat corn starting from the front of the list, using the eat-corn reporter (below).

to annual-household-activities
  set harvest-list [ ]  ;; a list of this year's harvests (1)
  ask households [
    set age age + 1
    if (age > death-age) [
      ask patch-here [ set being-farmed? false ]
      die
    ]

    set harvest (apply-variability harvest-variability) * ([ base-yield ] of patch-here)  ;; this year's harvest (2)
    set harvest-list (fput harvest harvest-list)  ;; (1)
    
    set corn-storage (lput harvest (but-first corn-storage))  ;; oldest corn is at the front of the list (3)
    set unsatisfied-hunger food-requirement  ;; a household attribute (4)
    set corn-storage (map [ eat-corn ? ] corn-storage)  ;; eat corn starting from the front of the list (5)

    if (unsatisfied-hunger > 0) [
      ask patch-here [ set being-farmed? false ]
      die
    ]
  ]
end

to setup
  ...
  set harvest-list [ ]
  ...
end

In transforming a list to another of the same length, using map is usually a good idea. Here it processes the list from the front, applying eat-corn to each element. The unsatisfied-hunger attribute links together the different calls to eat-corn, so that they either (a) subtract the unsatisfied food requirement from a list element, or (b) consume all of a list element and update unsatisfied-hunger for the next iteration. The eat-corn reporter thus has two simple cases:

to-report eat-corn [ existing-amount ]
  ifelse (existing-amount >= unsatisfied-hunger)
    [ let remaining-amount (existing-amount - unsatisfied-hunger)
      set unsatisfied-hunger 0
      report remaining-amount ]
    [ set unsatisfied-hunger (unsatisfied-hunger - existing-amount)
      report 0 ]
end

A diagram showing three example calls to eat-corn may make this clearer. It can be seen that the oldest corn at the front of the list is being eaten first:
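
As a stand-in for that diagram, here is a worked trace with illustrative numbers: suppose corn-storage is [ 300 500 900 ] (oldest first) and the household needs 800 kg, so unsatisfied-hunger starts at 800.

;; map [ eat-corn ? ] [ 300 500 900 ]
;; eat-corn 300  ->   0    (300 < 800, so all of it is eaten; unsatisfied-hunger becomes 500)
;; eat-corn 500  ->   0    (500 >= 500, exactly enough; unsatisfied-hunger becomes 0)
;; eat-corn 900  -> 900    (the hunger is already satisfied, so this corn is untouched)
;; so corn-storage becomes [ 0 0 900 ] and unsatisfied-hunger ends at 0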

The following code handles plotting and statistics. We use plotxy so that our x-coordinates can start at 800. We plot histograms of household ages and of our previously calculated harvest-list (we don't use the obvious [ harvest ] of households because that would exclude households that starved to death, and would also include new daughter households that have no harvest yet). Unfortunately, histograms in NetLogo require explicit control of the x axis, which adds some complexity.

We also maintain a list of population levels in population-sequence, and calculate how well this fits the historical sequence (excluding the proportion of the historical sequence where the population is zero). We measure fitness using the root-mean-square of the differences between the two sequences (the square root of the mean of the squares of the differences). This is equivalent to using what the literature calls the L2 norm (which penalises large errors), but is expressed in units of numbers of households, which is easier to understand. Values of the root-mean-square error tend to be around 30, which means that "typically" the model differs from the archaeological reality by about 30 households.

to plot-interesting-data
  set-current-plot "Population (households)"
  set-current-plot-pen "Potential"
  plotxy year (count patches with [ mean historical-base-yield >= food-requirement ])
  set-current-plot-pen "Historical"
  plotxy year (item (year - 800) historical-sequence)  ;; index list starting at 0
  set-current-plot-pen "Model"
  plotxy year (count households)
  
  let age-list ([ age ] of households)
  
  set-current-plot "Household ages"
  let xmax (1 + max (fput 0 age-list))
  set-histogram-num-bars xmax
  if (plot-x-max < xmax) [ set-plot-x-range 0 xmax ]  ;; histograms require explicit x-max control
  histogram age-list
  
  set-current-plot "Household harvests"
  let block-size 100
  set xmax (1 + ceiling (max (fput 0 harvest-list) / block-size))
  if (plot-x-max < xmax * block-size) [ set-plot-x-range 0 (xmax * block-size) ]
  set-histogram-num-bars (plot-x-max / block-size)
  histogram harvest-list

  if (length population-sequence < desired-fit-length) [ set population-sequence (lput (count households) population-sequence) ]
  let cut-historical-sequence (n-values (length population-sequence) [ item ? historical-sequence ])
  set fit-badness (sqrt (mean (map [ (?1 - ?2) * (?1 - ?2) ] population-sequence cut-historical-sequence)))
end

to setup
  ...
  set population-sequence [ ]
  set desired-fit-length (length filter [ ? > 0 ] historical-sequence)
  ...
end

A typical plot is shown below (simulated population levels are in red). The root-mean-square error here is 25.963, which is quite good. Our decision to introduce the usable-farm-fraction slider is partially responsible for this, since it limits excessively high populations without unduly restricting low populations.

The entire setup procedure has now been described. For completeness, here it is in its entirety, together with the declarations at the top of the program. The full NetLogo program is at Modeling Commons.

extensions [ gis table ]

breed [ households household ]

globals [ start-year fertility-age fertility-end-age death-age food-requirement
          food-estimate-margin corn-gift-ratio corn-stock-years good-farm-bias
          year pdsi-table population-sequence historical-sequence harvest-list fit-badness desired-fit-length ]

patches-own [ value zone patch-quality base-yield historical-base-yield being-farmed? ]

households-own [ harvest age corn-storage unsatisfied-hunger ]

to setup
  clear-all  ;; as previously described
  set-default-shape households "person"
  gis-map-load "adata/Map.asc"
  set pdsi-table load-zone-file "adata/ZoneAdjPDSI.txt"
  ask patches [ initialise-patch ]

  set start-year           800        ;; initial year
  set food-requirement     (5 * 160)  ;; a person needs 160 kg of corn each year, while a typical household consists of 5 persons
  set fertility-age        17         ;; the minimum age of agents (households) that can reproduce (spin off daughter households)
  set fertility-end-age    36         ;; the maximum age of agents (households) that can reproduce
  set death-age            38         ;; the age after which agents (households) die (dissolve when the matriarch dies)
  set corn-gift-ratio      0.33       ;; each new household gets this fraction of the corn storage of the parent household
  set corn-stock-years     2          ;; corn can be stored for this many years
  set year start-year
  set population-sequence [ ]
  set harvest-list [ ]
  
  set historical-sequence
    [ 14 14 14 14 14 14 14 14 14 14 13 13 13 12 12 12 12 12 12 12 11 11 11 11 11 10 10 10 10 10 9 9 9 9 9 9 9 9 8 8 7 7 7 7 7 7 7 7 7 7
      28 29 29 29 29 29 29 29 29 29 29 30 30 31 31 31 31 32 32 32 32 33 33 33 33 33 37 37 37 37 37 39 39 39 40 40 40 40 41 41 41 42 42
      42 42 42 42 42 42 42 56 58 58 58 58 58 58 58 58 58 58 60 60 60 60 60 60 61 61 61 60 61 61 61 61 59 61 61 61 61 61 62 62 62 62 62
      62 62 62 62 61 63 63 63 63 63 63 63 63 63 66 67 67 67 67 67 67 67 67 67 66 67 67 67 67 67 67 66 66 66 66 69 69 69 69 65 68 68 68
      68 66 67 67 67 66 66 66 66 66 66 66 68 68 68 68 68 68 68 68 68 87 87 87 87 87 87 87 86 86 87 85 86 86 87 87 87 87 88 88 88 87 87
      87 87 87 88 90 90 90 90 89 91 92 92 95 95 95 95 97 97 94 94 94 94 95 95 95 95 95 95 134 139 139 139 139 139 139 142 142 142 142
      143 143 146 146 146 146 151 151 153 151 151 151 151 151 156 164 164 164 164 163 163 163 163 165 165 165 165 164 164 163 164 164
      164 161 162 162 162 162 162 161 164 164 165 165 166 166 166 167 166 166 166 166 159 159 160 160 158 159 160 160 161 162 162 162
      148 150 151 151 151 151 151 149 149 147 148 148 148 148 150 150 150 151 151 152 152 152 153 153 153 116 117 118 118 119 119 120
      121 122 126 124 124 124 126 127 127 129 131 131 133 132 134 134 136 137 133 138 138 139 139 138 140 140 142 142 142 143 143 144
      145 145 147 146 147 147 149 149 150 151 151 145 146 146 147 148 149 149 151 153 154 155 157 159 160 161 163 164 166 167 167 169
      169 171 171 173 170 173 176 176 178 178 180 182 184 184 185 188 189 190 192 191 193 192 194 194 194 195 196 199 200 192 193 194
      196 196 197 200 202 202 204 201 209 208 211 212 212 213 214 215 216 210 208 206 201 196 188 181 176 172 167 159 156 148 146 141
      120 118 114 106 103 95 90 88 83 74 70 68 60 58 56 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 ]
  
  make-region-label 45 98  "Non-arable uplands" black
  make-region-label 28 47  "General valley" brown
  make-region-label 23 73  "North valley" red
  make-region-label 50 68  "Mid valley" gray
  make-region-label 25 84  "Dunes" green
  make-region-label 20 102 "Kinbiko Canyon" pink
  make-region-label 39 11  "Arable uplands" orange
 
  set desired-fit-length (length filter [ ? > 0 ] historical-sequence)

  calculate-patch-yields
  create-households 14 [
    initialise-household (random 29)
    set corn-storage (n-values (corn-stock-years + 1) [ 600 + random-float 400 ])
    find-best-farm potential-farms
  ]

  reset-ticks
  tick-advance start-year
end

The interface for the model looks like this:

Most of the parameters for this model have been given plausible values determined by the literature. The fertility and usable-farm-fraction sliders, however, are adjusted to maximise the quality of fit between the model and archaeological reality, and we need to take a closer look at these. The diagram below shows root-mean-square errors for various parameter combinations, averaged over ten runs each. A good value for usable-farm-fraction is 0.21. A fertility value of 0.09 generally works well, since it gives a realistically slow population rise, but occasionally it can lead to the population dying out. A value of 0.1 performs almost as well, and is somewhat safer. It is also not far off the value of 0.125 suggested by Axtell et al.

The graph below shows 100 runs of the model (red), compared to the historical data (blue). It can be seen that, in one of the runs, the population dies out early. The other 99 runs follow the historical record reasonably well, however.

This model begs for extension, of course, and not just by restoring the water availability and dwelling location factors I have excluded: the quality of human decision-making could also be improved. The difficulty is that land-use models generally rely on surveying and interviewing farmers in order to reveal their decision processes, and the Anasazi are no longer available for such investigation. However, even the simplified model presented here has succeeded in replicating the population variation in Long House Valley over time. The exception, as Axtell et al. point out, is the final 50-year period. Archaeology shows that the valley was deserted around 1300, but in fact a reasonable population could have survived; some other factors must have been in operation to cause the mass exodus that occurred. Plausible guesses about those factors could also be included in the model.

Posted in Tutorials | Tagged Anasazi, Archaeology, Arizona, GIS, NetLogo, Simulation, Tutorials

Revisiting Artificial Anasazi: a NetLogo tutorial (part 1)

Posted on June 7, 2014 by Tony

I have previously uploaded some NetLogo tutorials, and for the next one I thought I would revisit the famous “Artificial Anasazi” model of Axtell et al. (2002). This model was ported to NetLogo by Marco Janssen in 2008 (see his 2009 JASSS paper and his model). I have rewritten the model from scratch for explanatory purposes (though very much inspired by Janssen’s work). It makes a good example of how NetLogo can assist land-use studies, both archaeological and present-day.

The model studies the population of Anasazi farmers living in Long House Valley (near Black Mesa in Arizona) between 800 and 1350 AD. The map below shows part of the valley, through which Highway 160 now runs (there are more maps here):

Much has been preserved by the dry Arizona climate. Fragments of pottery (like the bowl at the top of the page) are found routinely, and the well-preserved ruined Anasazi dwellings shown below are not far away. Extensive archaeological investigation has told us both the climate that existed between 800 and 1350 AD, and the population of the valley during that time.

For purposes of simplicity, my version of the model does not include the availability of drinking water and the locations of dwellings. I am only modelling the location of farms. Consequently, although I hope to accurately model population numbers, I do not hope to accurately model the locations at which people lived, as Axtell et al. did. In addition, this first half of the tutorial will only look at code for the patches in the model.

The model requires three data files in a folder called "adata" (zipped version here). One of these is a map of the valley, formatted as an ascii GIS raster file, which can be read by the NetLogo GIS extension. Such use of GIS datasets is common in land-use models. The following procedure reads in the raster file and sets a numerical value attribute for each patch.

to gis-map-load [ fname ]
  ifelse (file-exists? fname)
    [ let dataset gis:load-dataset fname
      gis:set-world-envelope (gis:envelope-of dataset)
      gis:apply-raster dataset value
    ]
    [ file-error fname ]
end

to file-error [ fname ]
  user-message (word "Cannot find the file \"" fname "\"")
end

These numerical value attributes encode different regions of the valley. The following two reporters (functions) transform numerical codes to names (which will be stored in a zone attribute), and transform names to colours (which will be stored in the standard patch colour attribute):

to-report patch-zone-name [ i ]
  ifelse (i = 0) [ report "General" ] [      ;; General valley floor
  ifelse (i = 10) [ report "North" ] [       ;; North valley floor
  ifelse (i = 15) [ report "North Dunes" ] [ ;; North valley dunes
  ifelse (i = 20) [ report "Mid" ] [         ;; Mid valley floor
  ifelse (i = 25) [ report "Mid Dunes" ] [   ;; Mid valley dunes
  ifelse (i = 30) [ report "Natural" ] [     ;; Natural (non-arable)
  ifelse (i = 40) [ report "Uplands" ] [     ;; Uplands (arable)
  ifelse (i = 50) [ report "Kinbiko" ] [     ;; Kinbiko Canyon
  ifelse (i = 60) [ report "Empty" ] [ 
  report "?" ] ] ] ] ] ] ] ] ]               ;; there should be no "?" patches
end

to-report patch-zone-color [ z ]
  ifelse (z = "General") [ report brown ] [
  ifelse (z = "North") [ report red ] [
  ifelse (z = "Mid") [ report gray ] [
  ifelse (z = "Natural") [ report yellow ] [
  ifelse (z = "Uplands") [ report orange ] [
  ifelse (z = "Kinbiko") [ report pink ] [ 
  ifelse (z = "North Dunes") [ report green ] [ 
  ifelse (z = "Mid Dunes") [ report green ] [ 
  ifelse (z = "Empty") [ report white ] [ 
  report magenta ] ] ] ] ] ] ] ] ]  ;; there should be no magenta patches
end

I have used a formatting style here which is suitable for highly nested ifelse structures in NetLogo (the NetLogo editor helps ensure the correct number of right brackets). The following utility code adds a label to a patch, and loading the GIS file followed by some labelling produces the map below:

to make-region-label [ x y txt clr ]
  ask patch x y [
    set plabel-color clr
    set plabel txt
  ]
end

to setup  ;; this code will be updated later in the tutorial
  clear-all
  gis-map-load "adata/Map.asc"
  ask patches [
    set zone (patch-zone-name value)
    if (zone = "?") [ user-message (word "Error in patch data load for " self) ]
    set pcolor (patch-zone-color zone)
  ]
  
  make-region-label 45 98  "Non-arable uplands" black
  make-region-label 28 47  "General valley" brown
  make-region-label 23 73  "North valley" red
  make-region-label 50 68  "Mid valley" gray
  make-region-label 25 84  "Dunes" green
  make-region-label 20 102 "Kinbiko Canyon" pink
  make-region-label 39 11  "Arable uplands" orange
  ...
end

Here is the colour-coded NetLogo map of the valley:

The colour-coding is important, because different parts of the valley have had different fertility levels over time. Fortunately, we have a good idea what those levels were. One of the data files contains estimates of the Palmer Drought Severity Index (PDSI) for each zone of the valley, over the 800–1350 period of the simulation. This file is formatted as a sequence of lists, one for each zone, with the first element of each list being the zone name, and the rest of the list being PDSI numbers (it's handy that NetLogo can read in an entire list in one gulp). The following code transforms the data file into a table called pdsi-table (obviously this also requires the NetLogo "table" extension).

to-report load-zone-file [ fname ]
  let tbl table:make
  ifelse (file-exists? fname)
    [ file-open fname
      while [ not file-at-end? ] [
        let lst file-read
        table:put tbl (item 0 lst) (but-first lst)  ;; split first item (zone name) from rest of list
      ]
      file-close
    ]
    [ file-error fname ]
  report tbl
end
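
For concreteness, each entry in the zone file is a bracketed NetLogo list whose first item is the zone name, so a line looks something like the following (the PDSI numbers here are invented, purely to illustrate the format):

[ "General" -1.95 -1.20 0.35 1.10 ... ]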

to setup
  ...
  set pdsi-table load-zone-file "adata/ZoneAdjPDSI.txt"
  set start-year 800
  set year start-year
  ...
end

The data file used here was extracted from that used in the original model. Having read it, the following simple reporter (function) can then look up the PDSI value for a particular zone and year:

to-report get-pdsi [ z y ]
  ifelse (table:has-key? pdsi-table z)
    [ report item (y - start-year) (table:get pdsi-table z) ]
    [ report 0 ]
end

For each zone, the PDSI determines an estimated crop yield for the farmers. Although the Anasazi grew a range of crops (corn, beans, and squash – see coin above), the model expresses crop yields as if only corn was grown. In the original model, the relationship between PDSI and crop yield is expressed as a step function, but I have smoothed this out using a cubic polynomial interpolation (the red curve below, rather than the dashed step function):

The following code calculates the crop yields for each patch (in kilograms per hectare), using these cubic interpolations:

to-report thresholded-cubic [ x a b c d lo hi ]
  let res (x * x * x * a + x * x * b + x * c + d)
  if (res < lo) [ set res lo ]
  if (res > hi) [ set res hi ]
  report res
end

to calculate-patch-yields  ;; calculate the crop yield for each patch based on the PDSI data, using a cubic interpolation of published numbers
  ask patches [
    let pdsi get-pdsi zone year
    let theoretical-yield 0

    if (zone = "North" or zone = "Kinbiko" or (zone = "Mid" and pxcor <= 34)) [
      set theoretical-yield (thresholded-cubic pdsi 4.4167 6.9456 49.583 823.48 617 1153)
    ]
    if (zone = "General" or (zone = "Mid" and pxcor > 34)) [
      set theoretical-yield (thresholded-cubic pdsi 3.65 5.7925 41.65 686.28 514 961)
    ]
    if (zone = "Uplands") [
      set theoretical-yield (thresholded-cubic pdsi 2.9333 4.6599 33.267 548.77 411 769)
    ]
    if (zone = "North Dunes" or zone = "Mid Dunes") [
      set theoretical-yield (thresholded-cubic pdsi 4.5833 7.1871 51.917 858.03 642 1201)
    ]

    set base-yield (theoretical-yield * ...)  ;; missing code to be supplied later
    ...
  ]
end

The base-yield attribute of each patch is based on the theoretical yield calculated in this way, but it is modified in two ways, using the lower two of the following sliders:

First, we introduce some spatial variability in crop yields, using a normally distributed patch-quality attribute:

to-report apply-variability [ var ]
  let res random-normal 1 var  ;; random variation around a mean of 1
  if (res < 0) [ set res 0 ]   ;; because random-normal can return negative values
  report res
end

to setup
  ...
  ask patches [
    set patch-quality (apply-variability crop-yield-variability)
  ]
  ...
end

Because the normal distribution (the “bell curve”) has infinitely long tails, the calls to random-normal can sometimes return negative or very large crop yield values. Excessively large values have little effect on the simulation, because the model incorporates a rule that stored corn more than two years old goes bad and is thrown away. However, negative values must be explicitly tested for (negative crop yields are likely to give some rather strange behaviour!).

The second adjustment is that the entire crop yield is discounted using the harvest-adjustment slider:

to calculate-patch-yields
  ask patches [
    ...
    set base-yield (theoretical-yield * patch-quality * harvest-adjustment) 
    ...
  ]
end

In the models of Axtell et al. and Janssen, low values of harvest-adjustment are used to avoid unrealistically high population levels. The histogram below shows a typical run, for the year 1200 AD (a good year) on the General valley floor. Here, the theoretical yield of 961 is discounted to a mean of 560 kg (dashed line), well below the 800 kg (dotted line) required to support a typical five-person household for a year:

The effect of the harvest-adjustment is thus to restrict the population of the General valley floor to 53 good farms (8.3% of the patches), but also to impose a rather unrealistic triangular crop-yield distribution on the patches being farmed (i.e. the right-hand tail of the “bell curve”). It seems to me better to explicitly provide a usable-farm-fraction slider that restricts farming to a subset of patches. This permits a more realistic harvest-adjustment of 0.9, corresponding to “wastage” of only 10% of the crop.

It turns out that 0.21 is a good value for usable-farm-fraction. But why would 79% of the patches be unavailable for farming? One reason is the factors I have excluded in this tutorial version of the model – water supply and housing. Some potential farms are just too far away from suitable housing sites. The Anasazi penchant for cliff dwellings also suggests that potential housing sites had to be defensible, and this also rules out some potential farms as being too far away from suitable housing. Additionally, some patches would be ruled out by various terrain factors, and there would also be social pressures that limited the population level. Furthermore, the patches in the model are a little less than a hectare in size, and so one-hectare farms will occupy a little more than one patch. The usable-farm-fraction slider compensates for all of these effects.

We package the various patch initialisation activities together as follows. This includes using usable-farm-fraction to select a set of patches which will have non-zero patch-quality (as usual, we obtain a certain probability by testing for random-float 1 being less than that probability). The code also includes a historical-base-yield attribute, which will hold a list of the three most recent base yields for a patch:

to initialise-patch
  set zone (patch-zone-name value)  ;; as described above
  if (zone = "?") [ user-message (word "Error in patch data load for " self) ]
  set pcolor (patch-zone-color zone)
  set being-farmed? false
  set historical-base-yield [ ]  ;; this will be expanded to a list of the last 3 yields
  
  ifelse (random-float 1 < usable-farm-fraction)
    [ set patch-quality (apply-variability crop-yield-variability) ]
    [ set patch-quality 0 ]
end

We call this initialisation in setup, and we update the historical-base-yield list in the calculate-patch-yields procedure. We can also average the list to smooth out temporal variation in crop yields, compensating for the corn-storage ability of the Anasazi. The code count patches with [ mean historical-base-yield >= 800 ] will count feasible farms, taking corn storage into account.

to setup
  ...
  ask patches [
    initialise-patch
  ]
  ...
end

to calculate-patch-yields
  ask patches [
    ...  ;; calculate theoretical-yield as described above
    set base-yield (theoretical-yield * patch-quality * harvest-adjustment)
    set historical-base-yield (fput base-yield historical-base-yield)  ;; put new base-yield on the front of the list
    if (length historical-base-yield > 3) [ set historical-base-yield (but-last historical-base-yield) ]  ;; truncate to 3 entries, if necessary
  ]
end

The main additional code in setup is creation of a list of historical population levels:

to setup
  clear-all  ;; as previously described
  gis-map-load "adata/Map.asc"
  set pdsi-table load-zone-file "adata/ZoneAdjPDSI.txt"
  ask patches [ initialise-patch ]
  set start-year 800
  set year start-year

  set historical-sequence
    [ 14 14 14 14 14 14 14 14 14 14 13 13 13 12 12 12 12 12 12 12 11 11 11 11 11 10 10 10 10 10 9 9 9 9 9 9 9 9 8 8 7 7 7 7 7 7 7 7 7 7
      28 29 29 29 29 29 29 29 29 29 29 30 30 31 31 31 31 32 32 32 32 33 33 33 33 33 37 37 37 37 37 39 39 39 40 40 40 40 41 41 41 42 42
      42 42 42 42 42 42 42 56 58 58 58 58 58 58 58 58 58 58 60 60 60 60 60 60 61 61 61 60 61 61 61 61 59 61 61 61 61 61 62 62 62 62 62
      62 62 62 62 61 63 63 63 63 63 63 63 63 63 66 67 67 67 67 67 67 67 67 67 66 67 67 67 67 67 67 66 66 66 66 69 69 69 69 65 68 68 68
      68 66 67 67 67 66 66 66 66 66 66 66 68 68 68 68 68 68 68 68 68 87 87 87 87 87 87 87 86 86 87 85 86 86 87 87 87 87 88 88 88 87 87
      87 87 87 88 90 90 90 90 89 91 92 92 95 95 95 95 97 97 94 94 94 94 95 95 95 95 95 95 134 139 139 139 139 139 139 142 142 142 142
      143 143 146 146 146 146 151 151 153 151 151 151 151 151 156 164 164 164 164 163 163 163 163 165 165 165 165 164 164 163 164 164
      164 161 162 162 162 162 162 161 164 164 165 165 166 166 166 167 166 166 166 166 159 159 160 160 158 159 160 160 161 162 162 162
      148 150 151 151 151 151 151 149 149 147 148 148 148 148 150 150 150 151 151 152 152 152 153 153 153 116 117 118 118 119 119 120
      121 122 126 124 124 124 126 127 127 129 131 131 133 132 134 134 136 137 133 138 138 139 139 138 140 140 142 142 142 143 143 144
      145 145 147 146 147 147 149 149 150 151 151 145 146 146 147 148 149 149 151 153 154 155 157 159 160 161 163 164 166 167 167 169
      169 171 171 173 170 173 176 176 178 178 180 182 184 184 185 188 189 190 192 191 193 192 194 194 194 195 196 199 200 192 193 194
      196 196 197 200 202 202 204 201 209 208 211 212 212 213 214 215 216 210 208 206 201 196 188 181 176 172 167 159 156 148 146 141
      120 118 114 106 103 95 90 88 83 74 70 68 60 58 56 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 ]
  
  make-region-label 45 98  "Non-arable uplands" black  ;; as previously described
  make-region-label 28 47  "General valley" brown
  make-region-label 23 73  "North valley" red
  make-region-label 50 68  "Mid valley" gray
  make-region-label 25 84  "Dunes" green
  make-region-label 20 102 "Kinbiko Canyon" pink
  make-region-label 39 11  "Arable uplands" orange
 
  reset-ticks
  tick-advance start-year  ;; start counting from the year 800
end

The code for a step just plots crop yields (smoothed, using the historical-base-yield list) and calculates new crop yields:

to go  ;; code for one step
  calculate-patch-yields
  plot-interesting-data
  if (year = 1350) [ stop ]
  set year year + 1
  tick
end

to plot-interesting-data  ;; plotting code
  set-current-plot "Population (households)"
  set-current-plot-pen "Potential"
  plotxy year (count patches with [ mean historical-base-yield >= 800 ])
  set-current-plot-pen "Historical"
  plotxy year (item (year - 800) historical-sequence)  ;; index list starting at 0
end

Plotting (see graph below) shows a good fit between predicted carrying capacity of the valley and actual population levels. Even better results are obtained when actual households are modelled, as we will see in part 2 of this tutorial.

The keys to defining this NetLogo model, so far, have been the loading of a GIS map and various archaeologically established datasets, together with number-crunching to calculate crop yields. When households are modelled, however, some code for human decision-making will also be needed. The model also illustrates how much functionality can be achieved with even a small NetLogo program!

See part 2 of this tutorial for the human side of the model. The full NetLogo program is at Modeling Commons.

Posted in Tutorials | Tagged Anasazi, Archaeology, Arizona, GIS, NetLogo, Simulation, Tutorials

The NetLogo GIS extension: a tutorial

Posted on May 12, 2014 by Tony

I have previously written about the benefits of extensions to NetLogo. One of the powerful built-in extensions to NetLogo is the Geographic Information Systems (GIS) extension, which allows the import of vector and raster GIS data into NetLogo simulation models. This is particularly helpful in building simulations of real populations (as well as allowing the import of such things as terrain data).

In this tutorial, we discuss a modified version of Uri Wilensky's "GIS General Examples" model, which is bundled with NetLogo in the "Code Examples" section of the Models Library. The tutorial requires a copy of the data folder associated with that model (which can be found in the "Models" folder of the NetLogo installation directory). In particular, it imports country shape data and (slightly old) country population data, so as to be able to compute population densities, like so:

The data behind this includes the following table:

CNTRY_NAME    POP_CNTRY    SQKM         …
Afghanistan   17250390     641358.44    …
Albania        3416945     28798        …
Algeria       27459230     2323510.25   …
…             …            …            …

As well as colouring countries by population density, we will be able to construct representative models of the world population, where each NetLogo agent represents 10 million people in the real world, and these agents are located in the appropriate country on a world map. For example, Australia’s population of roughly 20 million people is represented by 2 agents, while Canada’s population of roughly 30 million people is represented by 3 agents. India and China are a little more crowded:

Constructing agent populations in this way allows agents to have country-dependent behaviour. The same approach, with different GIS datasets, allows models of individual countries, states, or cities, with populations determined by GIS data on, for example, counties or suburbs.

The NetLogo code

Loading the GIS data is done much as in Uri Wilensky's "GIS General Examples" model (specific details will be given later), while the following procedure sets up the population density data in the patches:

to setup-gis
  show "Loading patches..."
  gis:apply-coverage countries-dataset "POP_CNTRY" population
  gis:apply-coverage countries-dataset "SQKM" area
  gis:apply-coverage countries-dataset "CNTRY_NAME" country-name

  ask patches [
    ifelse (area > 0 and population > 0)
      [ ; Colour patch according to population density
        set population-density (population / area)
        set pcolor (scale-color red population-density 400 0) ]
    
      [ ; Colour patch with no population blue
        set population-density 0
        set pcolor blue ]
  ]
end

Here the calls to gis:apply-coverage copy values from the GIS country dataset to the named patch variables. The ask which follows calculates the population density of each patch, and colours it appropriately.

How should we create the agents? The simplest answer is to use those calculated population densities, combined with the fact that the average patch has an area of 510,000,000 / N square kilometres, where N is the number of patches (ignoring the substantial distortion due to the map projection). To take India as an example, the population density is 283.7 people per square kilometre, which works out to 13,671,000 people per patch, which will be represented by 1.3671 agents (this is in fact a slight overestimate, since India's patches together cover a slightly greater area than the country itself). We can effectively create 1.3671 agents per patch by always creating one agent, and creating an additional one with probability 0.3671. The sprout-<breeds> primitive is used to create the agents:

to create-random
  ; Delete old population
  ask persons [ die ]

  ask patches [
    if (population-density > 0) [
      ; How many people to create? (N.B. this is an overestimate, since country does not occupy entire patch)
      let num-people (population-density * patch-area-sqkm / 10000000)
      
      ; Create whole numbers directly
      sprout-persons (floor num-people) [ agent-setup country-name ]
      
      ; Create fractions probabilistically
      let fractional-part (num-people - (floor num-people))
      if (fractional-part > random-float 1) [ sprout-persons 1 [ agent-setup country-name ] ]
    ]
  ]
  
  show (word "Randomly created " (count persons) " people")
  tick
end

to agent-setup [ ctry ] ; set up agent parameters, including the country of the agent
  set shape "person"
  set size 3
  set color black
  set agent-country-name ctry
end
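
The floor-plus-probabilistic-extra step in create-random is really just stochastic rounding. Factored out as a reporter, it might look like this (a sketch, not part of the original model):

to-report stochastic-round [ x ]  ; sketch: stochastic rounding of a fractional expected count
  ; e.g. stochastic-round 1.3671 reports 2 with probability 0.3671 and 1 otherwise,
  ; so the expected value equals the input
  let n floor x
  if (x - n) > random-float 1 [ set n (n + 1) ]
  report n
end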

Results

This code gives a different result each time, with between 540 and 640 agents created. The black line on the histogram below shows the average value of 584, while the dotted line shows the actual world population (some years ago) of 5.695 billion (corresponding to 570 agents). Notice that over-estimating the number of agents per patch has given a population which is too large, even though the populations of countries occupying zero patches are not counted at all. Examples of such small countries include Singapore (population 2.8 million), Rwanda (7.9 million), and Burundi (6.0 million). The other difficulty is that, because of the substantial distortion due to the map projection, there are not enough agents in countries near the equator (like Brazil), and too many in countries near the poles (like Russia). The dashed blue line shows the result of a better (and deterministic) method, which we discuss next.

A better way

For the deterministic alternative, we also use the NetLogo table extension, which provides tables of key-value mappings (with put, get, and has-key?). We use two tables: patches-to-go, which maps a country name to a number of patches, and people-to-go, which maps a country name to a number of agents. The second of these tables is filled in by looping over the features of the GIS dataset (comment 3 in the code below):

to create-deterministic
  ; Reset tables (1)
  set patches-to-go table:make
  set people-to-go table:make
  set desired-people 0
  
  ; Delete old population (2)
  ask persons [ die ]
  
  ; Calculate number of people desired (3)
  ; N.B. Countries with no patches will be ignored, even if population > 0
  foreach gis:feature-list-of countries-dataset [
    let ctry gis:property-value ? "CNTRY_NAME"
    let pop gis:property-value ? "POP_CNTRY"
    let desired (round (pop / 10000000))
    table:put people-to-go ctry desired
  ]

This sets up the people-to-go table to hold the number of agents that must be created for each country (the question marks refer to the “vector feature” objects that the loop is iterating over). The number of patches for each country, on the other hand, is trickier. It is tempting to use multiple calls to count patches with ..., but that would be slow. The following alternative iterates once over all patches, adding one to the appropriate table entry in patches-to-go each time (and also accumulating the total desired-people to create, excluding countries with zero patches):

  ; Count patches for each country, efficiently, using a single loop (4)
  ask patches [
    if (population-density > 0) [
      
      ; Have we seen this country before?
      ifelse (table:has-key? patches-to-go country-name)
        [ ; YES -- record one extra patch for this country
          let n (table:get patches-to-go country-name)
          table:put patches-to-go country-name (n + 1) ]
        
        [ ; NO -- record first patch, and add population of country to desired total
          table:put patches-to-go country-name 1
          let desired (table:get people-to-go country-name)
          set desired-people (desired-people + desired) ]
    ]
  ]

The actual creation process looks at the ratio of the people-to-go and patches-to-go table entries, creating agents when this ratio becomes large enough. As the dashed blue line on the histogram above indicates, the number of agents created (558) is less than the 570 expected from the total world population. This is because of the exclusion of countries with zero patches, as well as the impact of rounding. Making the patches smaller alleviates this problem (with patches of a tenth the area, 563 agents are created), but slows down execution.

  ; Create desired number of people (5)
  ask patches [
    if (population-density > 0) [
      let n (table:get patches-to-go country-name)
      let people-needed (table:get people-to-go country-name)
      
      if (people-needed > 0) [
        ; More people needed for this country
        table:put patches-to-go country-name (n - 1)
        let num-people (round (people-needed / n))
        
        if (num-people > 0) [
          ; People needed on this patch
          table:put people-to-go country-name (people-needed - num-people)
          sprout-persons num-people [ agent-setup country-name ]
        ]
      ]
    ]
  ]
  
  ; Report number of agents created (6)
  show (word "Created " (count persons) " people / desired " desired-people)
  tick
end
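
As an aside, the "multiple calls to count patches with ..." approach rejected earlier would look roughly like this (a sketch only, in the same NetLogo 5 syntax as the code above; it makes one full pass over the patches for every country, which is what makes it slow):

  ; Sketch for comparison only -- not part of create-deterministic
  foreach gis:feature-list-of countries-dataset [
    let ctry gis:property-value ? "CNTRY_NAME"
    table:put patches-to-go ctry (count patches with [ country-name = ctry ])
  ]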

Some basics

Finally, the following setup code loads the GIS data, much as in Uri Wilensky’s “GIS General Examples” model. As with many NetLogo features, there is considerable sophistication “under the hood,” making the GIS data import relatively easy. However, the gis:load-dataset primitive will not, unfortunately, load every SHP file.

One point to note is the use of the gis:set-world-envelope-ds primitive to fit the GIS data to the NetLogo world grid. The “DS” (different scales) version of this primitive is necessary because making max-pxcor twice the size of max-pycor does not give the world the exact 2:1 aspect ratio which the dataset uses (the world is max-pxcor − min-pxcor + 1 patches wide and max-pycor − min-pycor + 1 patches high, so the ratio is never exactly 2:1).

extensions [ gis table ]
globals [ projection countries-dataset patch-area-sqkm desired-people patches-to-go people-to-go ]
breed [ persons person ]
patches-own [ population country-name area population-density ]
persons-own [ agent-country-name ]

to setup
  ca
  set projection "WGS_84_Geographic"
  
  ; Note that setting the coordinate system here is optional
  gis:load-coordinate-system (word "data/" projection ".prj")
  
  ; Load the dataset
  set countries-dataset gis:load-dataset "data/countries.shp"
  
  ; Set the world envelope. DS = different x and y scales
  ; (fits to actual world dimensions -- world does not actually have a perfect 2:1 aspect ratio)
  gis:set-world-envelope-ds [-180 180 -90 90]

  ; Drawing country boundaries from a shapefile -- as per Uri Wilensky
  gis:set-drawing-color white
  gis:draw countries-dataset 1
  
  ; New code
  set patch-area-sqkm (510000000 / count patches)
  setup-gis
  reset-ticks
end

Going further

Since each agent contains a country name, we can extend this tutorial example with country-specific economic data or population-growth data, and so simulate the evolution of the state of the planet using country-specific behaviour. For anyone wishing to experiment with doing so, the full NetLogo code for this tutorial can be downloaded from Modeling Commons. As an example, we can use NetLogo file-read operations to read this list of countries and median ages into another table (data adapted from Wikipedia). The following additional code in agent-setup will then assign a moderately realistic age to each agent (assuming ages very roughly follow a uniform distribution, and ignoring the difference between mean and median):

  let med-age 28.4  ; use overall world median as default
  if (table:has-key? country-median-age ctry) [
    set med-age (table:get country-median-age ctry)
  ]
  set age random-float (2 * med-age)
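
One way to build that table is with the file-read primitives (a sketch only; it assumes a hypothetical file data/median-ages.txt with a country name on one line and its median age on the next, plus a global variable country-median-age, with the table extension already declared):

to load-median-ages  ; sketch: read (country, median age) pairs from a hypothetical text file
  set country-median-age table:make
  file-open "data/median-ages.txt"
  while [ not file-at-end? ] [
    let ctry file-read-line                       ; e.g. "Australia"
    let med-age read-from-string file-read-line   ; e.g. 37.5
    table:put country-median-age ctry med-age
  ]
  file-close
end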

There are also complexities associated with building models satisfying multiple constraints (e.g. combining the kind of spatial constraints discussed here with various demographic profiles). For information on that, I refer readers to the agent-based modelling literature (e.g. this book).

Readers can also experiment with loading other GIS datasets (although the GIS extension will not load all SHP files). For example, here is the result of loading Statistical Local Areas from the Australian Bureau of Statistics:

Posted in Tutorials | Tagged GIS, Maps, NetLogo, Population, Simulation, Tutorials | 21 Replies

Running NetLogo from R: a tutorial

Posted on March 23, 2014 by Tony
5

In the previous NetLogo tutorial, I discussed running R inside NetLogo. In this post, we discuss the RNetLogo package for running NetLogo inside R (see also this article). We use a NetLogo implementation of Conway’s “Game of Life” similar to that in the previous NetLogo tutorial.

Conway’s Game of Life

Running NetLogo inside R may be the best option for analytical purposes (the 32-bit version of R is usually necessary). The NLStart function starts NetLogo (with or without a GUI), while NLLoadModel loads the selected model. There are also functions to execute selected NetLogo commands or reporters (once, repeatedly, or until a condition becomes true). The following R function sets a parameter, runs setup, followed by go 100 times, and returns the NetLogo history list defined in the model:

library("RNetLogo")
NLStart ("C:\\Program Files (x86)\\NetLogo 5.0.3", gui=FALSE)
NLLoadModel ("Folder\\Life.nlogo")

run.life1 <- function (d) {
    NLCommand (paste("set average-density", d))
    NLCommand ("setup")
    NLDoCommand (100, "go")
    NLReport ("history")
}

h <- run.life1 (0.2)
plot (0:100, h, type="l", col="red", lwd=2, ylim=c(0, max(h)))

The plot shows how average density varies with time:

Since we can inspect the “inside” of the NetLogo model, and apply all the analytical power of R, many statistical tests can be performed, and many plots drawn.

The following function calculates the average end-state of a run by taking the mean of the last 10 items in the history list:

run.life2 <- function (d) {
    h <- run.life1 (d)
    n <- length(h)
    mean (h[(n-9):n])
}

More useful is to average 100 calls to that function:

run.life <- function (d) {
    dd <- rep (d, 100)
    h <- sapply (dd, run.life2)
    mean (h)
}

xx <- (0:20)/20
yy <- sapply(xx, run.life)
plot(xx, yy, type="l", col="blue", lwd=2, ylim=c(0, max(yy)))

Plotting the calls to run.life shows how the final average density varies with the initial state. For example, an initial density of around 0.4 gives the highest average final density.

With multiple parameters, it may be best to use R’s optimisation capabilities to find the best parameter values. The parallel-execution features of R are also potentially useful.

Posted in Tutorials | Tagged Game of Life, NetLogo, R, Simulation, Tutorials | 5 Replies

The R extension for NetLogo: a tutorial

Posted on February 11, 2014 by Tony
10

In previous NetLogo tutorials, I discussed some of the powerful features of NetLogo, and the use of extensions. In this post, we will look at the extension which links to the R toolkit (see this article on the extension, and this documentation).

The R extension can be used for data analysis and visualisation, as an alternative to writing data to files and reading them back. Even more helpful is the use of R to implement agent decision-making. Here we provide very simple examples of both uses of R (a third use would be hybrid models such as this).

To use the R extension, follow the installation instructions here. In addition, the rJava package must be installed in R, and the jri folder inside the rJava folder must contain the files JRIEngine.jar and REngine.jar (if need be, Google can find these files). The NetLogo file must begin with extensions [ r ].

Conway’s Game of Life

Our first demonstration is an implementation of Conway’s “Game of Life.” A global variable (history) stores a list of the fraction of cells that are alive at each step. The NetLogo code below uses R to plot that list. The r:eval function executes arbitrary R code, while r:put assigns a NetLogo value to a named R variable (converting, for example, NetLogo lists into vectors). There are also variations of r:put (r:putagent, r:putagentdf, r:putlist, r:putnamedlist, and r:putdataframe – see this table).

to rplot-graph  ;; code for "Plot Life" button
  if (is-life?) [
    r:put "life.hist" history
  
    r:eval "png (filename = \"C:/NetLogo/Life_plot.png\", width = 1600, height = 800, pointsize=18)"
    r:eval "par (mar=c(5,5,0.8,0.5))"
    r:eval "xs <- 0:(length(life.hist)-1)"
    r:eval "plot (life.hist ~ xs, type=\"l\", lwd=4, col=\"red\", xlab=\"Step\", ylab=\"Density\", ylim=c(0,max(life.hist)), cex.lab=2, cex.axis=1.5)"
    r:eval "dev.off()"
  ]
end

The resulting plot is as follows:

More sophisticated statistical analysis in R includes the use of spatial techniques.

The implementation combines variables for both examples in this tutorial:

globals [
  is-life?   ;; distinguish between the two demos
  
  ;; for LIFE
  cell-count ;; total number of cells
  total      ;; total number of live cells
  history    ;; list of totals
  
  ;; for BACTERIA
  window     ;; number of steps over which to analyse
]

patches-own [
  ;; for LIFE
  alive?     ;; is cell alive?
  counter    ;; used to pass number of live neighbours between Phase 1 and Phase 2

  ;; for BACTERIA
  value      ;; simulated chemical concentration
]

turtles-own [
  ;; for BACTERIA
  hist       ;; history of concentration values
]

The setup code for the “Game of Life” is fairly straightforward, using two utility functions to make a patch alive or dead:

;; code for SETUP button -- Life
to setup-life
  clear-all
  set is-life? true
  set cell-count (count patches)
  ask patches [
    ifelse (random-float 1 < average-life-density) [birth] [death]
  ]
  reset-ticks
  set history (list (total / cell-count))
end

to death  ;; this procedure applies to a specific patch
  set alive? false
  set pcolor white
end

to birth  ;; this procedure applies to a specific patch
  set alive? true
  set pcolor blue
  set total (total + 1)
end

The step code is also straightforward, using two phases (one to count live neighbours, and one to compute the future state):

to go  ;; single simulation step for both demos
  if-else (is-life?) [
  
    ;; LIFE
    
    ;; First phase
    ask patches [ set counter (count neighbors with [alive?]) ]
    
    ;; Second phase
    set total 0
    ask patches [
      ifelse (counter = 3 or (counter = 2 and alive?))
      [ birth ]
      [ death ]
    ]
    
    set history (lput (total / cell-count) history)  ;; Record current density
    tick
  
  ] [
    
    ;; BACTERIA
    
    ... snip ...
  ]
end

The second demo (see screenshot above) simulates bacterial chemotaxis. Bacteria move up a chemical gradient by moving in a straight line if the concentration is increasing, and by randomly changing direction if it is not. We make the decision using the reporter below, which calculates the slope of the best-fit line through the datapoints (this is a very simple example of using R code to implement an agent’s decision-making). The r:get reporter is used to return the value of an R expression (in this case, the slope):

to-report calculate-slope [ lst ]  ;; R interface -- calculate slope for values in lst
  r:put "valu.hist" lst
  
  r:eval "valu.index <- 1:length(valu.hist)"
  r:eval "valu.lm <- lm(valu.hist ~ valu.index)"
  report r:get "valu.lm$coefficients[2]"
end

A slightly more efficient option would have been to use r:eval to define an R function in setup, and then to call it here with valu.hist as an argument.
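
A minimal sketch of that alternative (not in the original model) might look like the following, with the R function defined once and then called from the reporter:

to setup-r-slope  ; sketch: call this once, e.g. from setup-bacteria
  r:eval "calc.slope <- function (v) { idx <- 1:length(v); lm(v ~ idx)$coefficients[2] }"
end

to-report calculate-slope-2 [ lst ]  ; sketch: same result as calculate-slope above
  r:put "valu.hist" lst
  report r:get "calc.slope(valu.hist)"
end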

Related uses of R for agent decision-making include implementations of neural networks, clustering, pattern-recognition, and the like.

Returning to our example, most of the setup code calculates a chemical gradient, but some bacteria are also created:

;; code for SETUP button -- Bacteria
to setup-bacteria
  clear-all
  set is-life? false
  set window 4
  
  ;; create gradient
  repeat 3 [
    let x random-xcor
    let y random-ycor
    let v (0.1 + random-float 0.5)
    let r (1 + v * world-width / 2)
    ask patches [
      let d ((distancexy x y) / r)
      set value (value + v * exp (- d))
    ]
  ]
  let min-value (min ([value] of patches))
  let max-value (max ([value] of patches))
  ask patches [
    set pcolor (30 + (9.9 * (value - min-value) / (max-value - min-value)))
  ]
    
  ;; create some bacteria
  crt 20 [
    setxy random-xcor random-ycor
    set color red
    set size 6
    set hist (list value)
  ]

  reset-ticks
end

The implementation of bacterial chemotaxis is straightforward, using the previously defined reporter calculate-slope:

to go  ;; single simulation step for both demos
  if-else (is-life?) [
  
    ;; LIFE

    ... snip ...
  
  ] [
    
    ;; BACTERIA
    
    ask turtles [
      fd 1
      set hist (lput value hist)
      if (length(hist) > window) [ set hist (bf hist) ]
      
      if (length(hist) = window) [
        if (calculate-slope hist <= 0) [
          set heading (random-float 360)
          set hist (list value)
        ]
      ]
    ]
    
    tick
  ]
end

The complete NetLogo file can be found at Modeling Commons. Also worth looking at is the RNetLogo package for running NetLogo inside R.

Posted in Tutorials | Tagged Chemotaxis, Game of Life, NetLogo, R, Simulation, Tutorials | 10 Replies

NetLogo post summary

Posted on January 29, 2014 by Tony
6

I have uploaded seven NetLogo tutorial posts so far, and it is probably helpful to list them:

Three posts on integrating NetLogo and Java, via a diffusion example:

  • Part 1
  • Part 2
  • Part 3

One post on list processing, via the Eight Queens puzzle and the Sieve of Eratosthenes:

  • List processing in NetLogo: a tutorial

Three posts on goals and planning in NetLogo agents:

  • Agent goals in NetLogo: a list tutorial
  • Agent goals in NetLogo: the wolf, the goat, and the cabbage
  • Planning in NetLogo: the wolf, the goat, and the cabbage again

All the relevant files are available at Modeling Commons, with a video of the wolf, goat & cabbage solution at Vimeo.

Update 1: some additional tutorials on various extensions (numbers eight through ten):

  • The R extension to NetLogo
  • The NetLogo extension to R
  • The GIS extension to NetLogo

Update 2: I have added a further two-part tutorial revisiting the famous “Artificial Anasazi” model, simulating farmers 1,000 years ago, in what is now Arizona.

Update 3: I have added a brief tutorial on theorem-proving using sequent calculus.

Posted in Tutorials | Tagged NetLogo, Simulation, Tutorials | 6 Replies

Planning in NetLogo: the wolf, the goat, and the cabbage again

Posted on January 25, 2014 by Tony
4

In my Goals in NetLogo tutorial and the Wolf, Goat, and Cabbage follow-up, I discussed the use of lists of agent goals in NetLogo. We will now see how a simple planner can be used to generate such goal lists.

For the planner, we represent possible system states (not including the boat being in transit) with a list of the form [ w g c m ], where each of the first three items is 1 if that item is loose on the right bank, -1 if loose on the left bank, and 0 if it is being carried by the man; the final item gives the man’s own position (1 = right bank, -1 = left bank). For example, [ 1 0 1 -1 ] means that the wolf and cabbage are loose on the right bank, while the man, carrying the goat, is on the left. The initial state is therefore [ 0 0 0 1 ], with the man on the right bank and all three items counted as being carried.

The state representing success has everybody on the left (possibly being carried):

to-report acceptable [ state ] ;; everybody is on the left (possibly being carried)
  report ((item 0 state <= 0) and (item 1 state <= 0) and
          (item 2 state <= 0) and (item 3 state = -1))
end

On the other hand, a state is bad if the wolf and goat (or goat and cabbage) are loose on the same side of the river:

to-report bad [ state ] ;; wolf and goat loose together, or goat and cabbage loose together
  report ((item 0 state = -1 and item 1 state = -1) or
          (item 0 state = 1 and item 1 state = 1) or
          (item 1 state = -1 and item 2 state = -1) or
          (item 1 state = 1 and item 2 state = 1))
end

Our simple planner will keep track of the best goal list so far, with the empty list meaning “not yet.” The best out of two alternatives is therefore the shortest, excluding the case of the empty list:

to-report best-of [ x y ] ;; find the best of two solutions
  if-else (length x < length y and x != [])
    [ report x ]
    [ report y]
end

Checking if the man can legally cross the river requires counting zeros (which represent carried items):

to-report count-zeros [ x y z ] ;; count-zeros is used to count how much the man is carrying
  let n 0
  if (x = 0) [ set n (n + 1) ]
  if (y = 0) [ set n (n + 1) ]
  if (z = 0) [ set n (n + 1) ]
  report n
end

The main planning function takes a state, a history of past states, a partial solution, and the best solution so far. If the state is illegal, or we are going back to it (i.e. we are looping), or if we cannot compete with the best solution so far, we report the best solution so far. If the state we have reached is an acceptable solution, we report the best of two solutions:

to-report planner [ state history partial-solution best-so-far ] ;; the planner itself
  if-else ((bad state) or (member? state history) or
           (length best-so-far <= length partial-solution and best-so-far != []))
    [ report best-so-far ]

    [ if-else (acceptable state)
        [ report best-of best-so-far partial-solution ]

Otherwise, we explore all legal moves from the current state, including all the possible item drops. In each case we calculate a new state and a new partial solution, and update the best solution so far via a recursive call:

        [ let h (lput state history)
          let b best-so-far
          let w (item 0 state)
          let g (item 1 state)
          let c (item 2 state)
          let m (item 3 state)

          ;; Can drop carried items on the current bank
          ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
          if (item 0 state = 0)
            [ let s (list m g c m)
              let new-partial (sentence partial-solution (list "drop" wolf))
              set b (planner s h new-partial b) ]
          if (item 1 state = 0)
            [ let s (list w m c m)
              let new-partial (sentence partial-solution (list "drop" goat))
              set b (planner s h new-partial b) ]
          if (item 2 state = 0)
            [ let s (list w g m m)
              let new-partial (sentence partial-solution (list "drop" cabbage))
              set b (planner s h new-partial b) ]

We also explore all legal pickups and all legal moves left or right. The final value (b) of the best solution so far is reported as the best plan:

          ;; Can pick up items on the current bank
          ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
          if (item 0 state = m)
            [ let s (list 0 g c m)
              let new-partial (sentence partial-solution (list "pickup" wolf))
              set b (planner s h new-partial b) ]
          if (item 1 state = m)
            [ let s (list w 0 c m)
              let new-partial (sentence partial-solution (list "pickup" goat))
              set b (planner s h new-partial b) ]
          if (item 2 state = m)
            [ let s (list w g 0 m)
              let new-partial (sentence partial-solution (list "pickup" cabbage))
              set b (planner s h new-partial b) ]

          ;; Can go left or right if at most one item is being carried
          ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
          if ((m = 1) and (count-zeros w g c <= 1))
            [ let s (list w g c -1)
              let new-partial (sentence partial-solution [ "go-left" ])
              set b (planner s h new-partial b) ]
          if ((m = -1) and (count-zeros w g c <= 1))
            [ let s (list w g c 1)
              let new-partial (sentence partial-solution [ "go-right" ])
              set b (planner s h new-partial b) ]

          report b
        ]
    ]
end

A wrapper function calls this planning function with the initial state, and adds some goals to tell the wolf and goat to stop eating after the final solution has been obtained (this ensures termination, with all goals being satisfied).

to-report wgc-plan ;; top-level reporter to return the full plan
  let init-state [0 0 0 1]
  let p (planner init-state [] [] [])
  
  set p (sentence p (list "instruct" wolf [] "instruct" goat []))
  show (fput "THE PLAN IS:" p)
  report p
end

The final program is available at Modeling Commons. See also here for a video. The diagram below shows the system states explored by the planner, starting at the blue state and ending at the green one (the final “L” or “R” in state names marks the position of the man, lowercase “l” or “r” marks the position of loose items, and uppercase “W,” “G,” or “C” marks carried items). For more complex planning activities, it may be appropriate to interface NetLogo to external Java-based planning software, such as JavaFF.

Posted in Tutorials | Tagged Goals, NetLogo, Planning, Simulation, Tutorials | 4 Replies
