What is right-sized research?

I was a bit of a rebel in graduate school. My time was equally divided between volunteering in local schools; protesting corporate education reform, both locally and nationally; and — of course — engaging with research. I thought that if I learned as much as I could about education, policy, and statistics, and earned the prestige of a doctorate, I could wage war against the “data-driven” decisions that I felt were ruining public education.

Then, this wild thing happened — I attended one of the first researcher-practitioner partnership meetings between the University of Pennsylvania’s Graduate School of Education and the School District of Philadelphia. I noted to my colleagues and the District’s Deputy of Research and Evaluation that this was the first time I had been inside the building, collaborating with District leaders rather than outside the building, yelling at them. (True story).

That meeting changed my life — the mutual respect, the commitment to doing research in service of schools (and school leaders and teachers and families and students), and the incredible ideas to make this new collaboration work blew my mind.

Over the next two years, I served as project manager for the partnership and conducted my dissertation as part of its flagship research initiative. And we partnered, I mean really partnered — working hand-in-hand at all stages of the research process. The results were incredible. We were collecting data that was meaningful to the district, to teachers, to students, and to families AND gaining insights that contributed to the education research community and broader lessons about what worked (and didn’t) in school improvement.

I was hooked.

Research is often thought of as this lofty ideal trapped in an ivory tower and reserved for “experts” — the traditional kind. It isn’t always accessible, and it tends to be on a timeline that doesn’t jibe with people who need to make decisions quickly. As a consequence, there is a disconnect — we have people at universities or big firms generating “traditional research” hoping that one day (perhaps after publication) it will have a practical impact, and we have entrepreneurs and business owners trying to MacGyver together something in the “real world.”

Getting involved in the research-practice space taught me that research can look completely different than I ever imagined.

Researcher-practitioner partnerships, which have gained popularity in the past decade or so, are a game-changer. These collaborations seek to close the gap between research and practice by having researchers work with organizations to generate research that is both rigorous and meaningful.

Both “traditional experts” and those on the ground have unique expertise that can be leveraged to create measures that strike a balance between practicality and rigor, and redefine educational success in creative and exciting ways.

I first started thinking about “right-sizing” research when I was working with 4.0, supporting their entrepreneurs in evaluating their ideas to improve schooling.

Entrepreneurs in 4.0 programming test their ideas on a super short timeframe with a small number of people. We’re talking tests that may take place over a matter of hours or days with 5-10 people. The point of this piloting process is to fail fast, to be nimble and iterate, and to collect information you can use to decide whether or not you should keep pursuing your idea (i.e., does it work?) and, if it isn’t working, how you might change it.

This type of research lowers the stakes for entrepreneurs and participants, but the pressure is on for researchers. It presents an interesting challenge — how do you confidently evaluate the effectiveness of an idea when it is only being tested with 5 people for 30 minutes? Or with 3 people over three months?

Looking at this through the lens of traditional research, these sample sizes and time horizons are laughable. But I don’t think that means we say screw it and decide to evaluate effectiveness based on a gut feeling about whether or not things are working. It isn’t impossible to measure change at this stage. I refuse to believe that.
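To make that concrete, here’s a minimal sketch (in Python, with hypothetical pre/post scores I made up purely for illustration) of the kind of quick, descriptive look at change you can take with only five pilot participants. Nothing fancy; just “did each person move, and by how much?”

```python
# Hypothetical pre/post scores for a 5-person pilot.
# Made-up numbers purely for illustration, not real data from any venture.
pre = [12, 15, 9, 14, 11]
post = [16, 15, 13, 17, 12]

# Each participant's change score: post minus pre.
changes = [after - before for before, after in zip(pre, post)]

# Two simple, honest summaries for a tiny sample:
# how many people improved, and by how much on average.
improved = sum(1 for c in changes if c > 0)
mean_change = sum(changes) / len(changes)

print(f"Change per participant: {changes}")
print(f"Improved: {improved} of {len(changes)}")
print(f"Average change: {mean_change:.1f} points")
```

With a sample this small, the goal isn’t statistical significance. It’s a structured, honest way to see whether the idea seems to be moving people in the right direction, and for whom, so you can decide whether to iterate or walk away.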

If you are building a computer program — I don’t want to say it is easy because lord knows I couldn’t do it — but it is more straightforward. The code runs or it doesn’t run (right? Comp Sci people please correct me!). In the social impact space, when you are working with humans on things like learning and development, figuring out what works is trickier because you won’t get an error message or red flashing light or spinning rainbow of death telling you something isn’t right.

It is a challenging but fun problem to solve. And, one that all entrepreneurs are faced with: what does success look like for my idea? And, how can I measure it? Oftentimes we think that what we are doing is so special and unique that we couldn’t possibly quantify it. But we need to. In the words of Dr. Andy Porter, former dean of Penn GSE:

Anything that’s important can be measured.

And, as @EduChangemakers wisely stated back at the 2016 Deeper Learning Conference:

If you do not find a way to measure the things you value, you will be forced to value the things that are measured.

The measurement and evaluation status quo can change, and in order to truly advance innovative solutions to the problems facing society, it must. We have to stop being afraid of research or thinking it isn’t “for us.”

I don’t think you need lots of money, time, people, or the university stamp of approval to do good research.⁠

Yes, there’s value in rigor and structure, and experts are great, but I genuinely believe that you can work on a small scale, in a quick time frame, with measures of success that reflect your community's values, and STILL have quality research.

👉🏻 I believe that cheap and quick doesn’t imply a lack of quality. ⁠

👉🏻 I believe that good research can look many different ways and that different levels of rigor are appropriate for different stages of ideas. ⁠

👉🏻 I believe in what I like to call the “aluminum standard” for research: be practical, flexible, and accessible.

I consider “right-size” research a remix of researcher-practitioner partnerships. It has all the major tenets of a researcher-practitioner partnership, but on a much smaller scale. Right-size research can be adapted for ventures of any size but is particularly valuable for ideas that are in their beginning stages.

Right-sizing research is about maintaining as much rigor as possible while being super practical and responsive to the environment you are in. It’s designed around a smaller budget, a smaller sample (fewer people), and a much shorter time frame. It aims to minimize risk by testing ideas sooner, at a smaller scale, for less money.

Complicating the narrative about what it takes to do good research, shrinking research down to a size that many will find uncomfortable, and beginning the process of validating and right-sizing measures with entrepreneurs and their communities… I can’t think of anything more exciting than that.

No randomized controlled trial necessary.

⁠And, that's why I say I'm not a regular researcher... I'm a cool researcher. (Well, that and my super cute outfits + affinity for pop culture). I believe in doing research that works for you - not research that makes you jump through meaningless hoops, confuses the hell out of you, or takes so long to do that you can't even use the information. Say goodbye 👋🏻 to the status quo of research, and hello 👋🏻 to couture (i.e., made for you!) research.

