With internal and external pressure mounting, many faculty members and administrators are looking for ways to improve the quality of teaching and learning. Myriad products claim to do that very thing. But how do instructors find the right solution without feeling like they’re just kissing a bunch of frogs?
Eddie Watson is the director of the Center for Teaching and Learning at the University of Georgia, where he leads product discovery and evaluation and has successfully implemented several courseware products and other tools in a decentralized university environment. He shares how his institution distinguishes a Prince Charming from a polliwog.
Gates Bryant: Courseware is central to personalized learning, but the discovery and procurement process seems confounding. In our research we found over 100 products across hundreds of courses that fit into the courseware category. How is the Center for Teaching and Learning navigating the myriad options?
Eddie Watson: There’s a direct correlation between the center’s product-discovery work and what’s happening in digital learning overall. Over the last 18 months, so many companies have been trying to get faculty to adopt their solutions. Faculty seem to fall into one of two camps: early adopters who want to jump in, and cynics who have been inundated to the point of developing a knee-jerk “no” to every request, no matter the solution’s potential. So the center’s role is now as much about partnering with others across the system to evaluate quality as it is about encouraging innovation and adoption.
Let’s talk about quality first. What makes a courseware solution good?
It’s easy to be attracted to a great-looking product. Some tools have gorgeous graphical user interfaces: they look fantastic, they feel modern. That’s important for getting students on board with using the tool. But we have to ask, will this really improve learning? Or is it just an additional expense that looks and feels great but makes no impact?
The technology also has to augment what faculty do in the classroom, not replace it. It has to answer questions like “How do we get better?” and “Is this letting me do something I currently can’t?”
How do you bring those conversations to the naysayers, or as I like to call them, the lecturing skeptics?
Faculty are, and should be, in control, and a healthy skepticism is a good thing. The solution, especially courseware, has to fit their teaching style and course goals. Faculty reserve the right to say “no,” but we still have to foster innovation to bring new tools and products to campus.
The right approach, and the one I’ve found to be most successful, is matchmaking. It’s not about one solution for all faculty, but the right match between a need and a product. This starts with forming relationships across campus. If I go to a conference and learn about a new solution, and I’ve built relationships with faculty members, then I know specifically who might be interested in that product. I’ve had greater success introducing new solutions by approaching faculty members one-on-one to say, “I’ve discovered a product I think you’ll be interested in,” rather than by holding a 15-person demonstration.
How do you find new solutions? Or, do they find you?
New solutions come to us through an array of avenues. At smaller schools it may be different. Other larger, more centralized institutions may route new solutions through a CIO (Chief Information Officer). But at large, decentralized institutions like UGA, successful solutions typically find their way to adoption through one of two paths: 1) faculty members who bring vendors to campus, with my team offering support; or 2) my center identifying a specific challenge, building a list of solution providers, and then leading a pilot program.
Another well-traveled path is aggressive marketing by vendors on campus; we’re seeing that more often these days. Those approaches typically leave a bad taste in the mouths of people on campus and ultimately do long-term damage to what might otherwise be perceived as a powerful product.
What are the ingredients of a successful pilot?
A successful pilot starts with fully testing the solution. That chance to “get under the hood” is critical. If a vendor can’t offer a full demo or pilot, that’s a red flag.
There also has to be an easy way to get out. If the solution requires a lot of faculty investment to get it going, and it ends up not being a good fit, that can be frustrating. There has to be an uncomplicated exit path that makes the tool easy to try.
Once those are in place, there are three stages to a pilot. First, a small group of highly invested faculty pilots the solution. Second, post-pilot but pre-adoption, we go beyond the early adopters to the average faculty member. Finally, we move to full adoption. To evaluate the success of each stage, we look at both the impact on learning outcomes and the results of surveys and focus groups with faculty and students.
Beyond that feedback, how do you know a solution is effective? SRI recently published a study of 14 colleges using adaptive courseware, and the impact on learning outcomes seemed lukewarm to me. Yet those colleges are staying the course with these products. How does an institution decide whether to keep going or try a different approach?
One reason people might be willing to stay with a courseware offering is that the theoretical foundation for an adaptive approach to learning is strong. It has a history in education dating back to the 1950s and programmed instruction, and there is a real sense that tools that can adapt to the individual needs of students are a great idea. That’s why I think higher education overall will pursue courseware solutions for the long haul. So even if a specific solution hasn’t shown measurable progress on learning outcomes, the idea behind it is solid, and we’re willing to experiment longer.
This interview was edited for length and clarity.