Open and Sustainable Innovation Systems (OASIS) Lab working notes


February 3rd, 2021


{{TODO}} Draft Figure X to discuss in this section to illustrate *in context* the hypothesized benefits of epistemic models

Draft section on "we don't have this stuff in practice and don't know how much it really works and under what conditions"

Tools generally are "iTunes for papers" @qianITunesPapersRedefining2019a - I think this is very fair

@bosmanInnovationsScholarlyCommunication2016 - we can see that by far the most popular tools are "iTunes for papers":

Reference management tools like sys/EndNote, sys/Mendeley, and sys/Zotero were the most frequently mentioned "cite" tools by researchers worldwide responding to an online survey in 2015. These top 3 tools collectively accounted for ~60% of responses.

The sample is European-leaning, but we get some confirmation/corroboration in smaller, more focused samples

Fragmentation/variation - no single tool

@qianOpeningBlackBox2020

Scholars varied substantially in the ecology of tools used for synthesis: no single tool was used to encompass every (sub)task, and few tools were common across all scholars.

Tool choice constrained by "extrinsic" factors like whether it's free or not, or overlap with collaborators' practices

@hoangOpportunitiesComputerSupport2018

Researchers basically never used specialized software for their systematic reviews: instead, the most common tools were sys/Microsoft Excel and sys/EndNote

"one-off", little to no reuse

if there's info on this, should be in scholarly workflows lit...

Yet there are some hints of optimal paths that, on closer analysis, seem to align with the epistemic models @qianOpeningBlackBox2020 @chanWhereRubberMeets2020

What we see now is not infrastructure for synthesis. Instead, we see people either resort to all sorts of "hacks" and workarounds, or put in a substantial amount of work to "mine" publications for what they need (for an evocative example of this, see @knightEnslavedTrappedData2019). A whole cottage industry is dedicated to fueling workarounds of this sort for systematic reviewing. While these hacks often work well enough for the task at hand, they are rarely transferred in systematic ways across projects and people, violating the dimensions of "reach or scope" and "embodiment of standards" laid out by @star1996steps.

Most academic studies aren't set up to evaluate these, because evaluation requires serious usage; on the flip side, industry folks either aren't sharing data, or aren't collecting it

So we lack a good understanding of the value of these design patterns, beyond weak circumstantial/anecdotal evidence

Comment (Joel Chan): I think within the three objectives/RQs subsections (e.g., here) is where the theory stuff fits in most naturally. We give the intuition for the SOTA, and then we dive into the hypothesized benefits in more detail, discussing the theoretical backing and lack of empirical evidence.

Integrate stuff from Q: What is synthesis? into this section to help build intuition for what is really needed to enable synthesis
