Open and Sustainable Innovation Systems (OASIS) Lab working notes



Title: Scholars Before Researchers: On the Centrality of the Dissertation Literature Review in Research Preparation




The level of synthesis in regular manuscripts is frequently subpar (p. 4)

Developed a rubric for scoring literature reviews, including a category for synthesis

Referenced in

January 31st, 2021

These axes of quality have emerged from conceptual and critical reflection on the nature of evidence/conceptual synthesis (@strike1983types), as well as from efforts in doctoral education to create rubrics for assessing the level of synthesis in dissertation literature reviews (@granelloPromotingCognitiveComplexity2001; @booteScholarsResearchersCentrality2005) and empirical analyses of dissertation examiners' comments on dissertation literature reviews (@lovittsMakingImplicitExplicit2007; @holbrookInvestigatingPhDThesis2004).

May 1st, 2020

Resurfacing @booteScholarsResearchersCentrality2005, which actually includes an intriguing data point. Not sure how this answers my original question, though: it gives $$p(synthesis\,issue \mid issue)$$, not $$p(synthesis\,issue \mid paper)$$. Also, need to look into it: examiners usually give suggestions anyway, even if the paper is overall OK. We want to know the proportion of times a lack of synthesis is serious enough that it undermines the paper. Maybe $$p(reject \mid synthesis\,issue)$$ will give us that?
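A sketch of how these quantities might relate, under the assumption that a synthesis issue is by definition one kind of issue (so the event "has a synthesis issue" implies "has an issue"); the event names here are my own shorthand, not from the paper:

```latex
% S: paper has a synthesis issue; I: paper has at least one issue;
% R: paper is rejected / fails.
% Assuming S is a subset of I (every synthesis issue counts as an issue):
\begin{align*}
  p(S) &= p(S \mid I)\, p(I) \\
  % and the quantity of interest -- a paper having a synthesis issue
  % serious enough to sink it -- expands by the chain rule as
  p(R \wedge S) &= p(R \mid S)\, p(S \mid I)\, p(I)
\end{align*}
```

So the examiner-comment data would supply $$p(S \mid I)$$, but estimating the per-paper rate still requires $$p(I)$$, and the severity question requires $$p(R \mid S)$$ on top of that.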


Ineffective synthesis is not a reason to fail a dissertation! Qualities associated with literature reviews in "acceptable" (or even "very good") dissertations fall well short of implicit and explicit criteria for effective synthesis (e.g., @strike1983types; @booteScholarsResearchersCentrality2005).