This reminds me a lot of an anecdote one of my professors told me from his time working at NASA.

His team was working on running simulations of long-distance manned spaceflight. In particular, the goal of their simulations was to find an algorithm that would optimally allocate food, water, and electricity among 3 crew members. They decided to try a genetic algorithm, with the success criterion being that one or more crew members would survive for as many days as possible before resources ran out.

It started off fairly predictably: 300 days, 350 days, 375 days of survival. Then, fairly abruptly, the algorithm shot up to around 900 days of survival. The team couldn’t believe it! They had been quite pleased with the 375-day results as it was.

As they started digging into how this new algorithm worked, they discovered a small problem. The algorithm had arrived at a solution wherein it would immediately withhold food and water from two of the crewmates, causing them to die from starvation and dehydration. From there, it would simply provide the surplus remaining resources to the surviving crew member.

The team realised that the success criterion of “one or more crew members survive for as long as possible” was not actually the criterion they wanted. Once they adjusted it to require keeping all of the crew alive, the algorithm settled back in at roughly 350 days of survival.
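
For concreteness, here is a minimal sketch (purely my own reconstruction, not the team’s actual code) of how small the gap between the two success criteria is. `simulate_survival`, the `shares` allocation, and the `total_resource_days` budget are made-up placeholders for the real simulation.

```python
def simulate_survival(shares, total_resource_days=1050):
    """Days each crew member survives, assuming each person consumes one
    unit of resources per day from their allotted share of a fixed pool.
    (1050 is just 3 crew x 350 days, to roughly match the anecdote.)"""
    return [round(share * total_resource_days) for share in shares]


def flawed_fitness(shares):
    # Original criterion: "one or more crew members survive as long as possible".
    # Starving two crew members maximises this, because the survivor gets everything.
    return max(simulate_survival(shares))


def corrected_fitness(shares):
    # Adjusted criterion: everyone must stay alive, so the score is limited
    # by whoever runs out of resources first.
    return min(simulate_survival(shares))


if __name__ == "__main__":
    even_split = [1 / 3, 1 / 3, 1 / 3]   # keep the whole crew going
    starve_two = [1.0, 0.0, 0.0]         # the "murderous spaceship" allocation

    print(flawed_fitness(even_split), flawed_fitness(starve_two))        # 350 vs 1050
    print(corrected_fitness(even_split), corrected_fitness(starve_two))  # 350 vs 0
```

Same simulation, same resources; only the scoring function changes.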

It’s often the simple underlying assumptions that distinguish murderous spaceships from spaceships that keep their crew alive a little longer in extreme conditions.