At the beginning of each new semester or school year, teachers are faced with the challenge of remembering names for a large number of new students. Design an experience to help an educator match faces to names, with the goal of shortening the time needed to reach complete unaided accuracy.
An experience that uses AR technology to take the learning process off paper and bring it into the real world classroom where teachers need it.
Why are names so difficult to remember?
One of the main reasons seems to be that names are completely arbitrary descriptors of a person. They don't describe the person in any meaningful way that helps our brains create an association with them.
Additionally, the name may come from a language that we aren't familiar with, adding the cognitive burden of remembering the pronunciation of sounds that are new to us. This is an especially important point for teachers, whose daily interactions with their students have very strong impacts on their self-esteem and identity.
On a neurological level, the problem is one of moving names from “working memory,” our short-term memory which has a generally limited capacity of about 7 items at a time, into our “long-term memory,” which is more permanent storage.
Committing Names to Long-Term Memory (LTM)
There are several techniques that can be used to help commit new things to our LTM, including the use of imagery, creating a “pegword system,” the pairing of words or creation of verbal anchors such as rhyming words or alliteration, and the method of loci, which ties items to a specific location.
Additionally, because memory is heavily tied to context, teachers should be encouraged to study student names in the location where they will need to recall them: primarily the classroom.
How do teachers learn names now?
To gain some insight into this question I emailed 8 teachers in my personal circle asking them how many students they typically had and what tricks they employed to learn their names.
My pool of teachers ranged from active pre-K teachers and middle school teachers in the UK to retired teachers, instructional designers, and developers teaching at General Assembly. According to the responses, the tricks employed were remarkably similar.
All of the teachers mentioned employing verbal repetition (not considered a very effective learning device), most asked their students to confirm pronunciation and most had access to photos of their students that they could use to practice visual association. Some teachers mentioned that they specifically had students seated alphabetically at first to aid in name recall, then once they felt confident in their name knowledge, shuffled their seats around.
Based on that foundation of research, I started making notes of ideas, features and flows that could incorporate those mnemonic techniques, available assets and teacher habits.
Since location and visual cues are so integral to our recall process, using Augmented Reality in my solution was an early idea, and I spent some additional time familiarizing myself with recent advances in that space.
Defining the User’s Experience
Once I had a rough sketch of a feature set and interaction flow that I thought was worth developing, I wrote up my idea in the form of a short narrative describing the experience of a teacher using the app to prepare for the school year.
(NOTE: There were 4 main iterations of this story, some of which were not completed when I realized that the direction wasn't working and branched off to a new version. In the interest of brevity, I have only included the final version here.)
As Susie sat down in her classroom with her Chromebook in laptop mode to prepare for the new school year the next morning, she knew her curriculum was in good shape and that she should really focus on learning her student list.
She had received her student list and a folder of last year's school photos of all of her students, shared via Google Drive, so it was just a matter of getting them into her head and working on pronunciations. She always hated having difficulty pronouncing student names; it was embarrassing, and she knew it bothered even the thickest-skinned of her 6th graders.
The seating app, upon being opened, seemed to realize that it had been a while and asked if she was starting a new school year. When she answered “Yes,” it asked how many classes she had this semester, to which she entered “5.”
Sure enough, 5 new classes were created on the home screen. Each class had a little note saying that there was no information entered yet about the students. At the top of the screen was a reminder that if she had her student photos labeled with the students' names, she could drag and drop them onto the correct class; if not, she could select the class she wanted to work on and enter the student names manually.
Susie was excited to try dragging and dropping the files; this was a new feature, and last year she had to enter the names manually. She selected the first folder and dropped it onto the slot for her first class of the day.
The box for the class bounced a little as the folder disappeared into it, and the text inside changed to “Organizing Students,” then “Gathering Pronunciation Suggestions,” and finally “Ready for Review & Seating.” Before Susie clicked on the box she noted that it listed 27 students in that class; that seemed about right, she thought.
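Under the hood, the import step Susie just used has to turn a folder of labeled photo files into a roster. A minimal sketch of that mapping, assuming the hypothetical filename convention “First Last.jpg” (real school photo exports would need their own parsing rules):

```python
from pathlib import Path

def roster_from_photos(filenames):
    """Build an alphabetized roster from photo files labeled with student names.

    Assumes the hypothetical 'First Last.jpg' convention; unlabeled files
    are skipped so they can be flagged for manual entry instead.
    """
    students = []
    for filename in filenames:
        label = Path(filename).stem.strip()
        if label:  # skip files with no usable name label
            students.append({"name": label, "photo": filename})
    # Sort by last word of the name to support the common habit of
    # seating students alphabetically by last name at first.
    students.sort(key=lambda s: s["name"].split()[-1].lower())
    return students
```

For example, `roster_from_photos(["Ana Diaz.jpg", "Ben Avery.jpg"])` would order Avery before Diaz, matching the alphabetical seating habit the teachers described.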
As the screen changed, 2 options appeared: “Review Student List and Pronunciation Guide” and “Assign Seats.” Susie always liked to seat students alphabetically at first until she got a handle on their names, so, not being worried about seating, she selected “Review Student List and Pronunciation Guide.” This way she could start reviewing pronunciations before she had to call them out in class.
She reviewed the names of all the students, practicing saying their names out loud, with their photo on screen, and the app providing a feedback score on her pronunciation.
After getting through the list once, she clicked the back button so that she could set up the seating chart.
Then she was asked if she wanted to be able to practice identifying the students in AR; she selected “Yes.” She walked around the room with her Chromebook in tablet mode and the camera turned on. When the camera found a desk, she dragged the next name in the list onto the desk in the video. When she released the name over the desk, the photo of the student appeared floating in space above it, and a dot was added to a small seating chart in the corner of her screen.
Once she had assigned all the student names to a desk in her classroom, the app asked Susie to find her own desk in the room and assign her name to it. Being a pro at dragging names onto desks by now, she did that quickly and was presented with an exciting animation declaring that the seating chart for her first class was all set up. Would she like to practice student names now? Or set up seating arrangements for her other classes?
Since she had already practiced pronunciation when she reviewed the list the first time, she instead clicked the back button to go through the same setup process for her 4 other classes. When she did, an alert popped up asking if she wanted to automatically assign seats alphabetically using the same desk arrangement. Susie selected “Yes” and was delighted to see that the student photos and name cards were already assigned to each seat for every class.
Feeling (almost) ready for the new year, Susie headed home to get some sleep before her early first day.
The next morning, when Susie arrived in her classroom and opened her Chromebook, there was a notification from the app reminding her that it was best to practice memorizing student names on location and asking if she had 20 minutes to practice her morning classes right now. Susie was running late and feeling overloaded since it was the first day, so she selected “No thanks, but remind me tomorrow.”
Later, when the students started filtering into her class, Susie asked them all to stand up at the side of the room. She pulled up the Seating Chart, tapped on the first seat and the name card expanded on screen. In big letters under the photo was the student’s name, and a giant button to trigger audio recording.
As was her custom, after calling out the student's name, Susie asked the student to tell her how they preferred it to be pronounced. After the student told her, Susie, without looking at the laptop, would press and hold the spacebar, which caused the app to start an audio recording. Then she repeated the name back while looking at the student to see if she got it right. She released the spacebar after saying the name, ending the recording; Susie loved the ability to make quick audio notes but wanted to be careful not to record the students.
After seating all the students, she left the app open on the seating chart. Each name was clearly displayed above the seat, which was a handy reference. Over the next few days, when a student did or mentioned something specific or memorable, Susie would click the student's name on the chart and quickly type a 1-2 word note about it.
When Susie opened the app in review mode to practice the student names the app used those notes to generate a list of synonyms that matched the first letter of the student’s name. Susie selected the word from the list that seemed like the best fit to her and that word was added to the student’s label on the seating chart.
Once she had alliterative characteristics for all of the students, the app asked if she wanted to practice. After she selected “Yes,” the names were cleared from the seating chart and flew over to the left side of the screen, leaving just the photos of the students on the chart.
She worked on finding the correct name from the randomized list and dragging it onto the photo of the corresponding student. Each time she did, she was prompted to practice pronouncing the name and was given an accuracy score; she could repeat or move on as desired.
Once she could quickly assign each name from a list, she was prompted to pick up her device and walk around the classroom with it pointed at the seats. The app placed the photo of each student in the classroom, with the descriptor hint over the photo. At first the photos were highlighted in order and she was asked to recall the student's name, then in random order, and then the descriptor hint was removed and she had to say the name based solely on the photo. At the next level, all the other student photos were hidden, and only one photo at a time was shown as the prompt. At the final level, she was quizzed on photos out of context entirely: not placed in the classroom, and without descriptor hints.
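The staged practice Susie goes through is essentially cue fading: each level removes one kind of memory support until only the photo remains. A minimal sketch of that schedule, with hypothetical level names and student fields:

```python
import random

# Hypothetical cue-fading schedule: each level drops one memory support.
LEVELS = [
    {"order": "sequential", "hint": True,  "in_classroom": True},
    {"order": "random",     "hint": True,  "in_classroom": True},
    {"order": "random",     "hint": False, "in_classroom": True},
    {"order": "random",     "hint": False, "in_classroom": False},  # photo only
]

def quiz_round(students, level, rng=random):
    """Yield (photo, hint) prompts for one pass through the class at a level."""
    pool = list(students)
    if level["order"] == "random":
        rng.shuffle(pool)
    for student in pool:
        hint = student["descriptor"] if level["hint"] else None
        yield student["photo"], hint
```

The `in_classroom` flag would tell the AR layer whether to anchor the photo to its seat or show it out of context, which is the final level described above.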
Writing out the experience long form like this both helped me to explore the needs and context of the person experiencing it, and helped to define the flow of the app in a very concrete way.
Now that I had a specific usage scenario and app flow described, it was time to actually visualize all of the screens and make sure that the navigation/interactions were viable.
I used Figma to create a set of clickable wireframes based on the app described in my story. This process allowed me to focus on navigation, written copy, and information/feature hierarchy.
It also pointed out gaps in the experience described, both in the flow I was focused on and in additional features that would likely be in a fully realized app. For the purposes of this exercise I focused on smoothing out issues related to my core experience, not fleshing out every potential point of navigation.
With my usage scenario and wireframes in hand, I asked a local teacher friend to review both and give me feedback. She pointed out some areas with considerations I wasn't aware of, and another area that was less important from her point of view.
I made another round of revisions to both, and then moved on to visual design.
Visual Design & Interaction Sketches
For a short turnaround concept project that doesn't explicitly belong to the branding of a particular company it was important that I have a strong design reference to build from.
I chose vintage Polaroid branding. It has a clean, modern feel with bold colors that are exciting but don't feel childish; after all, the user is the teacher, not their students.
Building on the Polaroid color and shape language, but incorporating more modern typography, I was able to quickly create a visual palette to work with.
I focused my visual design on one of the signature interactions of the experience: the classroom setup. It shows how easily the user can set up their classroom, both building the seating chart and using the “method of loci” mnemonic device to reinforce their memory of the students.
This is one of the more novel interactions I sketched out in my wireframes, and I wanted to make sure it was communicated clearly so I also created a motion prototype.
Final Static Mockups
These mockups show the progression to a fully set-up classroom, with all the students assigned to seats. The list on the right shows the empty states of the student list buttons, which indicate that a student has already been assigned a seat and use color as a wayfinding element to help the user visually filter and match student names.
Iterations on Visual Designs
In the cards themselves you see a clear hierarchy: the student's name and photo are primary. Next is the alliterative descriptor that the app helps the teacher generate, and finally a pronunciation reference whose accuracy the teacher will verify (and can correct) when they ask the student how their name is pronounced.
The pronunciation recording feature, described in the usage scenario and wireframes, is intended to provide teachers a subtle method to update their notes that doesn’t interfere with their engagement with their students.
The more mnemonic devices you incorporate into your study, the easier it is to commit the subject matter to long-term memory and recall it later.
The use of AR (and Cloud Anchors) to build the seating chart and lay the groundwork for the memory testing features both encourages the teacher to practice in the location where they will need to recall the information (another study/recall aid) and engages teachers with a delightful and novel approach.
The mini seating chart in the bottom left provides an overview of the classroom; teachers can tap a seat to fade the other cards and highlight the selection. This element is meant as an overview, but the overall design of the experience encourages teachers to actually walk around the classroom, as they will when they engage with their students in class.
As much as possible, the goal is to take the learning of student names out of abstract environments like paper lists or digital spreadsheets and bring it into the actual classroom. That's where teachers need it, and contextualizing the photos further reinforces that students aren't just names to memorize, but people to get to know.
There is a really wide range of technologies being employed in different educational institutions around the world, and my understanding is that it varies dramatically even between different institutions in the same location.
My concept makes some assumptions about access to student photos and the form that access takes; these are based on some anecdotal research and may not be representative on a larger scale.
I am also making some assumptions about the kinds of devices/connectivity available to teachers in classrooms, again based on anecdotal knowledge around the rise of tablets and Chromebooks in the educational sphere.
Additional Technology Notes
This concept uses several APIs to help reduce the cognitive and mechanical workload for the user:
- Android AR APIs, and Cloud Anchors specifically, are used to place student photos in their real-world classroom.
- A name pronunciation API (for example, SpeechAce.com) is used to pre-fill suggested pronunciations, giving teachers a starting point before class and reducing manual entry to only the names that are incorrect.
- A synonym API (for example, RhymeBrain.com) is used to turn descriptor notes into alliterative adjectives.
I don't have production experience with any of the above APIs, so there may be feature caveats that I’m not aware of, but superficially they seem to support the needs of the experience.
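Regardless of which synonym service supplies the candidates, the app's own contribution is the alliteration filter: keeping only words that share the student's first initial. A minimal sketch of that step, using a plain list in place of a real API response so it stays self-contained:

```python
def alliterative_matches(student_name, candidates):
    """Return candidate descriptor words that alliterate with the name.

    'candidates' would come from a synonym service (such as the RhymeBrain
    API mentioned above) applied to the teacher's short note; here it is
    just a hardcoded list for illustration.
    """
    initial = student_name.strip()[0].lower()
    return [word for word in candidates if word.lower().startswith(initial)]
```

So if the teacher noted “likes to joke” for a student named Marco, and the synonym service returned candidates like “merry,” “cheerful,” and “mirthful,” the app would offer “merry” and “mirthful” as alliterative descriptors.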