Huddle


Start recruiting

AI matches candidates

Candidate list


Compare candidates

Candidate details

Huddle is the research and user-testing part of ONO

Prototype video

User Research

Research Plan

A research plan lets you define what you need to learn and how to find answers to your questions.

Problem statement

First, we needed to understand our goal: to help project managers and entrepreneurs match with people and businesses globally and engage in projects efficiently and accurately.

Competitive analysis

We conducted a competitive analysis to see which tools already on the market (such as LinkedIn, Indeed, and Toptal) had functions similar to what we wanted, and how we might open our own path in this market. Key findings:

  • The process of finding relevant candidates is unintuitive

  • Typically, you search for candidates and get an exhaustive list of questions and text fields to fill in

  • Finding a good fit and researching profiles is time-consuming, and it is difficult to gauge someone’s ability from self-promoted information

  • A strong metric for skimming out irrelevant profiles would make the selection process easier

  • Let our product do the grunt work for you: the AI helps you find the perfect match

 

 

Survey

In a real project, we would conduct surveys and find interviewees ourselves. In this project, however, the interviewees were pre-sorted into four categories (Thinkers, Planners, Doers, and Donors) and given to us, so we could go directly into the interview process.

Research question and hypothesis

Before the interviews, we needed to know what we wanted to learn from this research. In this project, it was: “How might a recruitment app deliver match accuracy and demonstrate real value in assessing person-to-project fit?” Our hypothesis was that “using match-making criteria built around specific professional roles (thinker, planner, doer, donor) will help uncover effective teams and increase the likelihood of project completion.”

Interview

During the interviews, one team member spoke with the interviewee while another took notes. We also recorded each interview, used an online tool to generate transcripts, and used Mural to categorize all the notes.

Persona

With all the information we collected from the surveys and interviews, we were able to create personas based on real data. Personas help us empathize with potential users: who they are, what their goals are, what their personalities are like, and what their pain points are. For example:

  1. The manager has to spend time reading hundreds of resumes and cover letters.
  2. They end up interviewing people who are good at acting.
  3. They receive noise and low-quality content from platforms.
  4. Some good workers may not be good at marketing themselves.


User Journey

After the personas, we could start to imagine our user journey. We designed a journey in which employers set up all the descriptions for their positions, and artificial intelligence automatically reviews all the candidates and matches them to that role. Employers no longer need to go through each application, and candidates do not have to apply to each job posting. All they need to do is set up their requirements and wait for the perfect matches.
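
The role-based matching idea could be sketched roughly as follows. This is a hypothetical illustration only — the function name, the 0–1 strength scores, and the scoring rule are my assumptions for the example, not the actual ONO/Huddle matching system:

```python
# Hypothetical sketch of role-based matching: score a candidate against a
# position by comparing profiles over the four roles used in this project.

ROLES = ("thinker", "planner", "doer", "donor")

def match_score(position_needs, candidate_profile):
    """Return a 0-100 match percentage.

    position_needs / candidate_profile: dicts mapping each role
    to a strength from 0.0 to 1.0 (assumed scale for this sketch).
    """
    total_need = sum(position_needs.get(r, 0.0) for r in ROLES)
    if total_need == 0:
        return 0.0
    # A candidate only gets credit up to what the position actually needs.
    covered = sum(
        min(candidate_profile.get(r, 0.0), position_needs.get(r, 0.0))
        for r in ROLES
    )
    return round(100 * covered / total_need, 1)

needs = {"thinker": 0.2, "planner": 0.3, "doer": 0.5, "donor": 0.0}
candidate = {"thinker": 0.6, "planner": 0.3, "doer": 0.4, "donor": 0.1}
print(match_score(needs, candidate))  # covered = 0.2 + 0.3 + 0.4 = 0.9 → 90.0
```

A percentage like this is also what the later "recommended profiles with ratings of percentage matches" feedback refers to: one number that lets the employer skim out irrelevant profiles.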


Prioritization and Placement

After all the research, we put our thoughts and ideas together into an idea bank and created a diagram with difficulty on the Y-axis and importance on the X-axis. The diagram was divided into four quadrants (easy + high importance, easy + low importance, difficult + high importance, and difficult + low importance).

In this step we wrote down and categorized all of our ideas by importance and difficulty. This let us decide which ideas to prioritize, and which to give up or leave for the future.
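
The quadrant assignment described above can be expressed as a tiny rule. The idea names and scores below are made up for illustration (the real idea bank lived on a diagram), and the 0.5 threshold is an assumption:

```python
# Illustrative sketch of the prioritization step: sorting ideas into the
# four difficulty/importance quadrants of the diagram.

def quadrant(difficulty, importance, threshold=0.5):
    """Map 0-1 scores to one of the four corners of the diagram."""
    ease = "easy" if difficulty < threshold else "difficult"
    value = "high importance" if importance >= threshold else "low importance"
    return f"{ease} + {value}"

# Example ideas drawn from features discussed elsewhere on this page;
# the (difficulty, importance) scores are invented for the example.
ideas = {
    "AI match score": (0.8, 0.9),
    "radar-chart comparison": (0.4, 0.8),
    "bookmark candidates": (0.2, 0.4),
}

for name, (d, i) in ideas.items():
    print(f"{name}: {quadrant(d, i)}")
```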


We also made a Product Detail Four Corners diagram to decide where to place each element in our design.

Sketches

We had two rounds of sketches. Round 1 was a group sketch session in which everyone presented their ideas to the rest of the group.


After that, I collected everyone's ideas and made the second sketch.


User Testing

Based on our sketches, we created mid-fidelity prototypes for usability testing. We conducted two tests on usertesting.com: first we presented our mid-fidelity prototype to 3 testers, then we put our high-fidelity prototype to the test with 3 other testers. We also had a peer review during class.


We used the feedback grid to collect all the results and to decide how to improve the product.
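
A feedback grid is conventionally split into four quadrants. This minimal sketch (the quadrant names are the common convention, not necessarily the labels on our actual Mural board, and the sample notes are paraphrased from the feedback below) shows how test notes can be binned before deciding on improvements:

```python
# Minimal sketch of filing usability-test notes into a feedback grid.

GRID_QUADRANTS = ("likes", "criticisms", "questions", "ideas")

def add_note(grid, quadrant, note):
    """File one piece of tester feedback under a quadrant of the grid."""
    if quadrant not in GRID_QUADRANTS:
        raise ValueError(f"unknown quadrant: {quadrant}")
    grid.setdefault(quadrant, []).append(note)
    return grid

grid = {}
add_note(grid, "likes", "buttons were clear and easy to follow")
add_note(grid, "criticisms", "Compare icon was hard to recognize")
add_note(grid, "ideas", "use a progress bar instead of a mini radar chart")
```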


Mid-fidelity prototype


Feedback

  • Leverage AI for input/output – roles can be suggested at Step 4 instead of having the user add them manually

  • Add an option and indicator for quantifying roles in Step 4

  • Form fields on screen 4 may be intimidating to the user

  • Results screen (6th) shows too many results – should indicate the basis for selecting/sorting candidates

  • Incorporate a more dynamic way to compare candidates on the Radar screen (7th)

  • Users may not want to press the View Profile button to open another screen – incorporate a quicker way to see more information on a candidate

Second mid-fidelity prototype


Feedback

  • Testers mostly preferred the radar view, as it gave a better visual and more detail on the candidate

  • Make skills editable by adding and deleting

  • Rephrase "Search Candidates" to "View Results"

  • In list view, use a percentage or progress bar instead of a mini radar chart

  • The user was unable to recognize the Compare icon

  • Very easy to follow, and the buttons were clear

High-fidelity prototype

  • This is the first high-fidelity mock-up we made. We played with the colour, font and button styles.

  • This was used for testing inside of the group.

Second high-fidelity prototype


Feedback

  • Buttons are clear and easy to follow

  • Clarity needed for the bookmark button

  • Make the scroll bar in the candidate radar view more obvious

  • Add more interactions, like choosing experience level, bookmarking, and viewing more info on candidates

Identifying areas for research, testing, and iteration

Testing summary

  • For the most part, participants got through all the tasks easily (average rating of 4 out of 5 on a 1–5 scale, where 5 is easiest and 1 is most difficult)

  • 2 out of 3 participants preferred the radar view for candidate details, which confirms the importance of showing skill data in a visual format while keeping the list view for a quick glance

  • Some screens were missing interactions that could help finish certain user tasks

  • Viewing profiles from the radar view received mixed reviews, and as a group we discussed how they should be interacted with

  • Icons like bookmark and more-options remained ambiguous for some participants, but a majority discovered their intended purpose

  • 2 out of 6 participants did not notice the progress indicator for selecting team roles

  • We needed to fill holes in our interaction flow to create a smoother process for testers and give them a sense of completion

  • We researched other ways to present candidate metrics to the user besides a radar view

  • We needed to iterate on icons that were unclear to users and find more universal representations for the concepts they denote

Recommendations for validation

  • Users found it very easy to understand what the app was about. They understood their role and the tasks they were expected to accomplish.

  • Users appreciated the structure of menus and found the information within them easy to access, and it was clear how to open them from the main screen.

  • Users liked the idea of having recommended profiles with ratings of percentage matches as it filtered unnecessary results.

  • Users understood and found it easy to navigate within the prototype screens.

Final prototype


This is the prototype we submitted at the end of the project and used in the presentation.

Link



Designed and created by Zhouquan Peng

All rights reserved

297 Oak Walks Dr, Oakville, Ontario, Canada

pengzhouquan@gmail.com
