User Research Portfolio
Emotional Connection with an AI Companion
A pilot study exploring how players might feel connected to an AI companion in a narrative game.
Role: Lead User Researcher
Duration: 5 weeks
Method: Playtest, Survey
Tools: Unreal Engine, Microsoft Forms, JASP
Participants: 8 DigiPen students recruited via convenience sampling
Context & Research Goal
The game, Barton, includes an AI companion designed to accompany the player and support them in escaping the planet.
My goal was to design a method to test whether there was a relationship between the player’s word count when conversing with Barton and their emotional connection with the AI companion.
“Connection Value”
The survey included a mix of Likert-scale and yes/no questions
I coded these responses into a single quantitative value
Some questions contributed more than others, depending on how directly each question asked about the participant’s view of Barton
The final score represents the participant’s connection with Barton on a 0-30 scale (0 being no connection, 30 being the highest possible connection)
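A minimal sketch of this kind of coding scheme in Python, assuming hypothetical item names and weights (the actual survey items and weights differed):

```python
# Hypothetical item names and weights, for illustration only.
LIKERT_WEIGHTS = {
    "felt_understood_by_barton": 2.0,  # asks directly about Barton -> weighted higher
    "enjoyed_story_overall": 1.0,      # indirect item -> weighted lower
}
YES_NO_WEIGHTS = {
    "would_miss_barton": 3.0,
}

def connection_value(responses: dict) -> float:
    """Collapse weighted Likert (1-5) and yes/no (0/1) responses into a 0-30 score."""
    raw, max_raw = 0.0, 0.0
    for item, weight in LIKERT_WEIGHTS.items():
        raw += (responses[item] - 1) * weight  # shift 1-5 Likert onto 0-4
        max_raw += 4 * weight
    for item, weight in YES_NO_WEIGHTS.items():
        raw += responses[item] * weight        # yes = 1, no = 0
        max_raw += weight
    return 30 * raw / max_raw                  # rescale to the 0-30 range

print(connection_value({
    "felt_understood_by_barton": 4,
    "enjoyed_story_overall": 5,
    "would_miss_barton": 1,
}))  # -> 26.0
```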
JASP Analysis
Scoping
Defined “Connection Value”, reviewed literature on emotional connection.
Playtesting
8 participants (Ages 18-30). Thirty-minute sessions, with each participant’s conversation word count recorded.
Survey
Post-session emotional self-report questionnaire.
Analysis
Counted words during casual talk, coded survey data into the “Connection Value”, and entered both measures into JASP.
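For illustration, a minimal Python equivalent of this analysis step, assuming a simple linear regression and made-up numbers (the actual analysis was run in JASP):

```python
from scipy.stats import linregress

# Illustrative values only; one entry per participant (n = 8).
word_counts     = [120, 340, 95, 410, 560, 210, 380, 275]
connection_vals = [12, 18, 9, 22, 27, 14, 21, 17]  # 0-30 Connection Values

result = linregress(word_counts, connection_vals)
print(f"slope={result.slope:.4f}, r^2={result.rvalue ** 2:.3f}, p={result.pvalue:.4f}")
# p < 0.05 would indicate word count significantly predicts Connection Value.
```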
Key Finding
With a p-value under 0.05, Total Word Count may be a meaningful predictor of Connection Value
Reflection
The biggest challenge was coding a player’s connection with an AI through a survey. In future studies, I’d build a tool into the game to log all of a participant’s conversations, along with Barton’s responses, to more accurately understand the context of these conversations.
Calculating Cuteness with Multiple Variables
This pilot study measured participants’ perceptions of different cat breeds and how exposure to a kitten photo would affect those perceptions.
Role: Lead User Researcher
Duration: 7 weeks
Method: Survey
Tools: Qualtrics, JASP
Participants: 20 DigiPen students recruited via convenience sampling through a post offering course credit for completing the study
Context & Research Goal
The survey included 9 adult cat photos of different breeds, along with 3 kitten photos matching the breeds of 3 of the 9 adult cats.
The goal of this pilot study was to create a measure of “cuteness” for cats. Using that measure, the study then tested whether an adult cat breed was rated significantly differently when an accompanying kitten photo was shown vs. when no kitten photo was shown.
Cuteness Measure
Each survey included a different combination of kitten and adult cat photos
Cats were divided into 3 groups, and participants rated each cat on several cuteness attributes (friendly, soft, etc.)
Before each group, one of the kitten photos was shown to the participant
I separated the ratings given to cats whose kitten photo was displayed beforehand from the ratings given to cats shown without kitten photos, averaged each set, then compared the two (see the sketch below)
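A sketch of that separation and averaging step in Python, assuming a hypothetical record layout (the real data came from Qualtrics exports):

```python
from statistics import mean

# Each record: (cat_id, kitten photo shown beforehand?, mean cuteness rating).
ratings = [
    ("siamese", True, 4.2), ("siamese", False, 4.0),
    ("bengal",  True, 3.8), ("bengal",  False, 3.9),
    ("persian", True, 4.5), ("persian", False, 4.1),
]

with_kitten    = mean(r for _, shown, r in ratings if shown)
without_kitten = mean(r for _, shown, r in ratings if not shown)
print(f"with kitten: {with_kitten:.2f}, without: {without_kitten:.2f}")
```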
Scoping
Researched what defines cuteness, then structured the participant groups so that each adult cat had an accompanying kitten photo at some point.
Survey
Mid-session questionnaire with cat/kitten photos. Cat and kitten photos were randomized based on which group the participant was in.
Analysis
Combined participant results into a single “cuteness” rating, separated cats shown without kitten photos from cats shown with kitten photos, and entered both measures into JASP.
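A minimal sketch of the comparison itself, assuming a paired t-test on per-breed means and illustrative numbers (the actual test was run in JASP):

```python
from scipy.stats import ttest_rel

# One mean rating per breed, paired by breed; illustrative values only.
mean_with_kitten    = [4.2, 3.8, 4.5]
mean_without_kitten = [4.0, 3.9, 4.1]

stat, p = ttest_rel(mean_with_kitten, mean_without_kitten)
print(f"t = {stat:.3f}, p = {p:.3f}")
# A p-value well above 0.05 would match the finding of no kitten-photo effect.
```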
Key Finding
With a p-value far above 0.05, the presence of a kitten photo had little to no effect on participants’ existing views of the selected cat breeds.
Reflection
The biggest challenge was finding kitten photos that looked similar to the adult cats. While I managed to find similar-looking cats, some photos were blurry or had complex backgrounds rather than transparent ones. In future studies, I would use photos of the same cat: one from when it was a kitten and one from adulthood.
Assessing Tension, Encounter Flow, and Controller Usability
A playtest assessing three distinct variables without letting any one interfere with the others.
Role: Lead User Researcher
Duration: 3 weeks
Method: Playtest, Survey
Tools: Unreal Engine, Microsoft Forms, OBS, Webcam, Xbox Controller
Participants: 13 DigiPen students recruited via convenience sampling at a DigiPen playtesting event
Context & Research Goal
The game, Eyes of the Forest, is a narrative game meant to put the player in the shoes of a little mouse attempting to escape the claws of large owl-like spirits.
I was originally tasked with finding areas in the game where suspense thrived and where it was lacking. As the playtest neared, I was asked to test two additional factors: a specific level’s readability/difficulty and controller support, both recently implemented.
Suspense Measure
The survey contained a combination of suspense and immersion attributes
The overall suspense score was based on the combination of these two attribute sets
Participants’ body language was also taken into account for suspense, observed primarily during close encounters with the large owl-like spirits
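One plausible way to combine the two survey attribute sets into a single suspense score, sketched in Python under the assumption of 1-5 Likert items and equal weighting (the study’s actual weighting may have differed, and body language was coded separately):

```python
def suspense_score(suspense_items: list[int], immersion_items: list[int]) -> float:
    """Average 1-5 Likert responses within each attribute set, then combine equally."""
    avg_suspense  = sum(suspense_items) / len(suspense_items)
    avg_immersion = sum(immersion_items) / len(immersion_items)
    return (avg_suspense + avg_immersion) / 2

print(suspense_score([4, 5, 4], [3, 4, 4]))  # -> 4.0 (illustrative responses)
```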
Scoping
Brainstormed ways to fit three different variables into one study, working with two other researchers.
Playtesting
13 participants (Ages 18-30). Thirty-minute sessions with face and gameplay recorded.
Survey
Post-session emotional self-report questionnaire that included questions regarding suspense and some questions from the Immersive Experience Questionnaire (IEQ).
Analysis
Watched all recordings with two other researchers and analyzed participants’ facial expressions and gameplay stats. Suspense was calculated from both expressions and survey responses.
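A sketch of how the two sources could be merged per participant, assuming a hypothetical tally of coded tense expressions normalized onto the survey’s 1-5 scale (names and numbers are illustrative, not the study’s actual data):

```python
# Illustrative data: tense expressions coded from video, per participant.
expression_counts = {"p01": 6, "p02": 2, "p03": 9}
survey_suspense   = {"p01": 4.0, "p02": 2.5, "p03": 4.5}  # 1-5 composite scores

max_count = max(expression_counts.values())
for pid, count in expression_counts.items():
    normalized = 1 + 4 * count / max_count               # map counts onto 1-5
    combined = (normalized + survey_suspense[pid]) / 2   # equal-weight average
    print(f"{pid}: combined suspense = {combined:.2f}")
```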
Key Findings
The level under focus had an immense difficulty spike due to challenging parkour.
Participants were feeling the desired amount of tension during encounters with enemies.
Controller support played well regardless of whether participants primarily used keyboard and mouse, though some bugs were present and reported.
Reflection
The biggest challenge I encountered was coding the video data with two other researchers. While it saved a lot of time, with each of the 13 recordings averaging 25 minutes, there was some miscommunication about what data I wanted to gather. As a result, some of the data I received was vague, and I asked the other researchers to clarify their findings. In future studies, I’ll create a short document describing exactly which data points need to be gathered to prevent similar setbacks.