Open and Sustainable Innovation Systems (OASIS) Lab working notes


Q: Can deep learning really discover analogical representations?


current belief: probably not?

need to clarify/sharpen this question: it's too vague at the moment. i think right now it's a high-level proxy for the question "if we just use SOTA language models, will that enable us to find analogical ideas/papers across domains?"

so basically, the reasoning for being cautious about deep learning per se goes as follows:

to reason about similarity between relations, we need to have good representations of relations (e.g., relational categories, as in @gentnerRelationalCategories2005).

relations are typically expressed by verbs (and other relational words, e.g., prepositions); therefore, computational models of semantics that do poorly with verbs should also do poorly with relations, and thus struggle with analogy
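to make this step concrete, here is a toy sketch (the 3-d vectors, avg_embed, and cosine are all invented for illustration, not from any real model): an order-insensitive composition like averaging word vectors assigns the exact same representation to "dog bites man" and "man bites dog", so similarity over those representations can't see who did what to whom, which is precisely the relational structure analogy needs.

```python
import numpy as np

# hypothetical toy word vectors, invented for illustration only
emb = {
    "dog":   np.array([0.9, 0.1, 0.0]),
    "bites": np.array([0.1, 0.8, 0.3]),
    "man":   np.array([0.2, 0.1, 0.9]),
}

def avg_embed(sentence):
    # bag-of-words sentence embedding: mean of the word vectors
    return np.mean([emb[w] for w in sentence.split()], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# same words, opposite relational roles -> identical averaged vectors
print(cosine(avg_embed("dog bites man"), avg_embed("man bites dog")))  # 1.0
```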

one strong representative of deep learning representations of semantics is word embeddings, such as sys/word2vec (the first model to make waves for apparently being able to do analogy via vector arithmetic, e.g., king - man + woman ≈ queen)
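for reference, the classic demonstration is vector arithmetic over pretrained embeddings; a minimal sketch using gensim's downloader and its pretrained "word2vec-google-news-300" model (a large download, roughly 1.5 GB):

```python
import gensim.downloader as api

# load pretrained 300-d word2vec vectors (returns a KeyedVectors object)
model = api.load("word2vec-google-news-300")

# the parallelogram query: king - man + woman ≈ ?
# most_similar sums the `positive` vectors, subtracts the `negative` ones,
# then ranks the vocabulary by cosine similarity to the result
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# 'queen' is typically the top hit
```

this is the sense in which word2vec "does analogy"; whether it generalizes beyond such curated word-level analogies to relational analogy between ideas/papers is exactly what's in question here.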

therefore, we are skeptical that deep learning-based semantics (without explicit searching / tuning for relational structure) will succeed at giving us computational analogy
