It turns out a game doesn’t necessarily need a real update for us to think it’s better, according to a study by Paul Cairns, a professor of human-computer interaction at the University of York.
Cairns and his colleague Alena Denisova asked 21 people to play “Don’t Starve”, an adventure game that drops the player into a randomly generated world where they must do whatever they can to keep their character alive for as long as possible.
In the first round, players were told the map would be randomly generated. In the second, they were told the game had been updated with an adaptive AI that now created worlds based on the player’s skill level. In reality, neither round used an adaptive AI; the worlds in both rounds were generated by the same random process.
After both rounds, the players filled out a survey about their experience, and they reported that the game performed better in the second round.
“The adaptive AI put me in a safer environment and seemed to present me with resources as needed.”
“It reduces the time of exploring the map, which makes the game more enjoyable.”
To test the findings further, a second trial was run with 40 new subjects: half were told an adaptive AI generated their worlds, while the other half served as a control group and knew the worlds were randomly generated. The results confirmed those of the first trial, with the placebo effect showing clearly in players who believed they were playing a newer, updated version of the game.
The results are certainly curious in terms of game development, marketing, and post-launch support. If our perception of video games really can be deceived this easily, and we can believe a game is better simply because we’re told it has been updated with new features, then how much innovation and improvement does a new game actually need to be positively received? Food for thought, eh?
What’s your take on this placebo test?
Let us know in the comments!
(Picture credit: New Scientist)