In this part, we’ll talk a bit more about what information we were actually tracking and looking into, why digital playtesting is not everything, and why real-world tables matter.
If you’ve missed Part 1 of this series, you can check it out here.
PROCESSING (AND EVALUATING) THE DATA
During playtesting we were tracking a lot of data, both from the feedback forms the players filled out and from the game stats. The data we gathered was highly rewarding, but processing and evaluating it all was also quite challenging.
As mentioned before, this data is invaluable to us – the data itself and the option to replay each game session directly on BGA are the two strongest tools we have for making sure the different elements of the gameplay are balanced. The expansion is highly asymmetrical, so we had to be extra careful with our tinkering and tuning.
One of the most important things we tracked was the average and maximum score of each leader, to see how consistent their performance was. From this data, we saw which characters needed tweaking, and over time we were also able to determine which leaders were better suited for beginners and which were more difficult to play.
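The post doesn’t describe the actual tooling behind these stats, but a minimal sketch of the per-leader summary it mentions could look like this. The record layout and field names here are purely illustrative, not the real BGA log schema:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: one entry per leader per finished game.
results = [
    {"leader": "Falconer", "score": 72},
    {"leader": "Falconer", "score": 85},
    {"leader": "Mystic",   "score": 64},
    {"leader": "Mystic",   "score": 91},
]

# Group final scores by leader.
scores = defaultdict(list)
for game in results:
    scores[game["leader"]].append(game["score"])

# Average and maximum score per leader, plus sample size --
# the kind of consistency snapshot described above.
summary = {
    leader: {"avg": mean(vals), "max": max(vals), "games": len(vals)}
    for leader, vals in scores.items()
}
```

A large gap between a leader’s average and maximum is one quick signal that their performance depends heavily on circumstances rather than being consistent.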
We also tracked other things, like the scores achieved on each of the two new research tracks, the number of cards gained and played, the number of turns per leader and per round, how often the leader-specific bonuses of the Falconer and the Mystic were used, and much more.
Often, when interpreting the data, we had to go through particular game sessions to identify what caused significant score deviations. Sometimes these anomalies were indeed caused by the leader’s abilities, but often other factors were in play as well, like big differences in the players’ experience or an unusual combination of conditions (cards in the card row, sites discovered, etc.) that happened to favor that particular game situation.
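Spotting which games deserve a manual replay review is a standard outlier-detection problem. As a hedged sketch (the team doesn’t say how they flagged deviations), one could flag any game whose score sits more than a couple of standard deviations from that leader’s mean:

```python
from statistics import mean, stdev

def flag_outliers(scores, threshold=2.0):
    """Return indices of games whose score deviates from the mean
    by more than `threshold` sample standard deviations -- these
    are candidates for a manual replay review on BGA."""
    if len(scores) < 2:
        return []
    mu, sigma = mean(scores), stdev(scores)
    if sigma == 0:
        return []
    return [i for i, s in enumerate(scores)
            if abs(s - mu) / sigma > threshold]
```

With a handful of scores like `[70, 72, 68, 71, 69, 120]`, only the 120-point game would be flagged for review.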
Sometimes, the differences between beginner and more advanced players were also quite significant – in both the base game and the expansion implementations, some cards and strategies were often neglected by beginners but turned out to be quite powerful in the hands of experienced players. We had to take these differences into account as well before we could start drawing conclusions.
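Accounting for skill gaps essentially means segmenting the stats by experience level before comparing them. A small sketch of that idea, with an entirely hypothetical schema (the `experience` and `played_card_x` fields are made up for illustration):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-game records, segmented by player experience.
games = [
    {"experience": "beginner", "played_card_x": False, "score": 61},
    {"experience": "beginner", "played_card_x": True,  "score": 65},
    {"experience": "advanced", "played_card_x": True,  "score": 88},
    {"experience": "advanced", "played_card_x": True,  "score": 92},
]

by_level = defaultdict(list)
for g in games:
    by_level[g["experience"]].append(g)

# Usage rate and average score per skill segment -- a way to spot
# cards that beginners neglect but experienced players exploit.
report = {
    level: {
        "usage": mean(1 if g["played_card_x"] else 0 for g in grp),
        "avg_score": mean(g["score"] for g in grp),
    }
    for level, grp in by_level.items()
}
```

If a card shows low usage among beginners but high usage and scores among experienced players, the imbalance may only surface late in a game’s life, which is exactly why such segmentation matters before drawing balance conclusions.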
DIGITAL AND PHYSICAL WORLDS COLLIDE
Of course, digital playtesting is not everything, and it was crucial for us to playtest Expedition Leaders with people at a real-world table. Some things that work smoothly when everything is automated may turn out to be less than ideal when translated to a physical environment. To identify them, we had to see how people operated the game with their hands. How does the table space work? Is anything too fiddly? Are players forgetting anything? We asked and observed these questions and more whenever we brought the expansion to the physical table – various live playtesting events, limited of course by the current COVID situation, helped us check and tweak the experience.
The Mystic, for example, had a special token that went through several iterations after we saw players handling it at the table. Under specific circumstances, players were supposed to store their Fear cards under this token. However, it turned out to be too fiddly, and the token often ended up buried under the cards instead of sitting on top of the pile. So we reworked our original idea, and the token became a board on which you could store the cards.
This is just one of the many changes we made thanks to the feedback we got from the people attending our testing events. We feel very fortunate that so many amazing people were willing to help us with the playtesting! Many things were fine-tuned and perfected thanks to their help and we believe these changes, though sometimes seemingly subtle, made a world of difference.
Thank you for joining us on this journey! Come back next week on Thursday, September 23, to read the third and last part of this series.