

ActivQuest

In October, I organized 8 students into two teams to compete at SDHacks 2019 against 800+ other students. The sponsoring company, ActivBody, chose us because, through research and futurecasting techniques, we independently arrived at their system’s next two secret releases.

Responsibilities: Concept Development, User Research & Wireframes | Team: Charlene Joy Dela Cruz, PengMao Li & Susie Moon

Timeline: 36 Hours | Software: Unity, Figma, Adobe XD & Keynote


Current System Analysis

Our first step was to understand how ActivBody’s current apps work. Our two teams spent about three hours user-testing both the main Activ5 system and the secondary games authored by ActivBody.

ActivBody tracks users’ activity using their pressure-detecting device. Activities revolve around exercises that compress the device in various ways (e.g., between the knees or hands).

Part 1

Friction Points

While user-testing, we used Post-its to capture friction points and observations on our board.

Some of the takeaways: exercises started without user confirmation, and the pictures gave insufficient instruction on where to accurately place the device or which of the system’s 150+ exercises to start with.

Part 2

Futurecasting

We asked ourselves: what if the infrared scanning technology in our iPhones becomes integrated into computers within three to five years?

Based on our interview with ActivBody and the knowledge that their target demographic was white-collar workers, we decided that their future system should live on the desktop computer rather than on a tablet.

Part 3

Card sorting

After ideating on features our personas would want, we used Google Slides and Skype to conduct card sorts and determine the system’s preferred information architecture. During planning, we knew this had to be done by early afternoon on day 2 to have a chance at scheduling test participants who matched our ideal personas.

Part 4

Low-Fi

Thank goodness for the rolls of paper we brought; no one had to run out at 3 AM looking for a ream of paper.

In our Crazy 8’s ideation session, we had decided that our system would guide users through a third-person video game. The game would present exercises (e.g., rowing, hiking, chopping) that the player would have to complete to progress in the survival story. By using infrared-tracked, semi-transparent avatars, we hoped to alleviate some of the aforementioned pain points.

Part 5

Mid-Fi

We used Figma to create our mid-fidelity prototypes and collaborated in real time, which was essential given the time crunch. Since Figma doesn’t offer a free trial membership, we shared our credentials with participants so they could try the prototype.

In the animation above, the first screen shows exercises tailored by the AI and the user’s progress in the story, the second is the in-exercise screen, and the third shows an award. We decided that flattening the IA would best respect the target user’s time, given that this system would mainly be used on breaks.

Part 6

Takeaways

These were the highlights from the hackathon. To view the whole process, click here.

As Interaction Designers at ArtCenter, we are taught a process: multi-step research exercises, combined with the iteration needed to create a great system. Our steps included lean canvas, affinity diagramming, competitive research, posture analysis, blob scenarios, and user journeys. But how do you condense a 14-week class plan into something executable in 36 hours with three teammates? Five of us, all 4th-term students, came up with our own research plan and design criteria.

We didn’t code a single line at this hackathon. Both teams took a risk and decided it would be better to spend 80% of our time thoroughly researching the topic and designing the solution rather than jumping straight into building. This experience helped cement my desire to become a User Experience Researcher after I graduate from ArtCenter next year.

Final

Complete process and research artefacts