Growing up in San Francisco with parents who devoted their lives to the nonprofit and public education sectors, my primary contact with Google, Facebook, and Apple was as a user. I was only vaguely aware of their physical presence off the 101 freeway on the way to my grandparents’ house, and I can’t remember ever talking to anyone who worked in tech.
When I moved back to the Bay Area after college to work for a nonprofit, this divide between “the techies” and “people like us” seemed larger than ever. Protesters were vandalizing Google buses as symbols of gentrification, Twitter was under attack for not fulfilling its commitments to support mid-Market development, and there was no way I could afford to live anywhere but my parents’ house.
So when I took a new job at an edtech startup in SOMA, I immediately felt torn between the Bay Area I’d grown up in and the parallel universe I was about to join, as if I were straddling the San Andreas Fault during the “Big One” we’ve all been anticipating. I spent the next few months desperately trying to convince my family and friends that no, I was not trying to replace teachers with computers, and yes, we did get work done in between happy hours and hackathons. Then, I spent the next few years trying to convince myself that technical tools like those I was building were actually making a positive difference in students’ education.
Three years later, I’m a graduate student in Stanford’s Learning, Design, and Technology program, and I’m still not entirely convinced. I believe that technology has the potential to make learning more equitable, engaging, and relevant for learners of all ages and backgrounds. But I’ve also seen first-hand how technology can create detached and impersonal learning environments, and even widen the achievement gap we see today. I’ll never forget my first visit to a 1:1 classroom (one device per student), where I eagerly sat down next to a student who proceeded to spend the entire period staring mindlessly at her iPad, opening each app just long enough to log in and choose an activity before closing it and moving on to the next one.
So how do we know we’re contributing to the solution and not aggravating the problem? For me, the answer lies in a trendy yet powerful framework: human-centered design.
Design has gotten a lot of attention in popular culture recently, from the New York Times Magazine’s Design Issue last fall to a best-selling book about how to design a meaningful and fulfilling life. I’m usually skeptical of anything that gets this much hype, but beyond the Post-its and whiteboards, I think the five-step human-centered design process is actually quite intuitive.
- Empathize: Figure out who your humans are. For whom are you designing, what do they care about, and what is their life like?
- Define: Synthesize what you learn in order to come up with a point of view about their needs.
- Ideate: Think of as many ways as possible to address these needs.
- Prototype: Build some representations of your potential solutions.
- Test: Go back to your humans to try out your prototypes. Gather their feedback, and continue iterating on your point of view and testing your ideas until you’ve got something that really addresses the need.
The process lends itself easily to education: begin with your learner, understand what she needs to learn and why she’s struggling to learn it, and then start building some solutions to test out in the real world. Designers aren’t the only ones who follow these steps—many educators use a similar “backward design” process to develop curriculum, and even elementary school students can benefit from using this framework.
So if the process is this simple, why do we devote entire classes, workshops, and degrees to perfecting our understanding of human-centered design? To me, the challenging part of this approach is that it requires the designer to set aside what she wants to design in service of what her users need her to design. When we read about a new and innovative technology, we may immediately begin to imagine how it could be used in an educational setting. But if we instead start with the learning goal and work backwards to achieve it, we find that the most effective solution doesn’t necessarily rely on artificial intelligence, virtual reality, or even an app. Sure, there’s almost always a way that advanced technology can address related issues of access, scale, or convenience, but let’s not force learning into a tech mold where it doesn’t fit.
I recently worked on a cross-cultural communication project with the Boys and Girls Club of the Peninsula. Going into the project we had dreams of developing cultural competency through conversations with avatars, or using machine learning to track users’ progress and suggest areas for improvement. But once we really got to know our learners and understood the context of their learning, it turned out the best tool for the job was a Google Form. So we put aside our technocentric egos and built a really awesome form.
While the design process itself fits on the back of a napkin, the personal humility and diligence necessary to follow it take practice. The good news is that the skills required to be a thoughtful and successful designer are also skills that make us better friends, co-workers, and citizens. A little more empathy, collaboration, and optimism can go a long way in designing more effective learning tools, and in bridging the divides between designers and learners, researchers and educators, and yes, Bay Area natives and techies.