Paper list

This post keeps track of the papers I have read recently and those I plan to read.

  1. The papers that I haven’t finished reading:
    • Rogers, Anna, Olga Kovaleva, and Anna Rumshisky. “A primer in BERTology: What we know about how BERT works.” Transactions of the Association for Computational Linguistics 8 (2020): 842-866.
  2. The papers that I am going to read:
    • Bouraoui, Zied, Jose Camacho-Collados, and Steven Schockaert. “Inducing relational knowledge from BERT.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 05. 2020.

    • Yao, Huihan, et al. “Refining Neural Networks with Compositional Explanations.” arXiv preprint arXiv:2103.10415 (2021).

    • Jiang, Zhengbao, et al. “How can we know what language models know?” Transactions of the Association for Computational Linguistics 8 (2020): 423-438.
      • generates or selects better prompts to retrieve knowledge from LMs (see the sketch at the end of this post)
    • Talmor, Alon, et al. “oLMpics–On what Language Model Pre-training Captures.” arXiv preprint arXiv:1912.13283 (2019).
  3. The papers that I have read:
    • Yin, Da, Tao Meng, and Kai-Wei Chang. “SentiBERT: A transferable transformer-based architecture for compositional sentiment semantics.” arXiv preprint arXiv:2005.04114 (2020).

    • Petroni, Fabio, et al. “Language models as knowledge bases?” arXiv preprint arXiv:1909.01066 (2019). - it is non-trivial to extract a knowledge base from text that performs on par with directly using pretrained BERT-large (a cloze-probing sketch follows below).
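
Since two of the notes above involve probing factual knowledge in LMs with cloze prompts, here is a minimal sketch of that setup using the Hugging Face fill-mask pipeline. The model name and the paraphrased templates are my own illustrative choices, not taken from the papers’ released code; only the “Dante was born in [MASK].” query is the running example from Petroni et al.

```python
# Minimal sketch of cloze-style knowledge probing with BERT, in the spirit of
# Petroni et al. (LAMA) and the prompt-selection idea in Jiang et al.
# Assumption: the model choice and the paraphrased templates below are mine,
# not from the papers' code.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-large-uncased")

# Paraphrased prompts for the same relation (place of birth). Jiang et al.'s
# point is that some phrasings retrieve the fact much better than others.
templates = [
    "Dante was born in [MASK].",  # the running example in Petroni et al.
    "The birthplace of Dante is [MASK].",
    "Dante is a native of [MASK].",
]

for template in templates:
    best = fill_mask(template, top_k=1)[0]  # highest-probability filler token
    print(f"{template!r} -> {best['token_str']} (p={best['score']:.3f})")

# Selecting (or ensembling) the template that assigns the highest probability
# to the gold answer, per relation, is the core of Jiang et al.'s approach.
```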