A curated collection of research papers on Large Language Models (LLMs), transformers, prompting, fine-tuning, and multi-modal systems. Maintained by Sean Moran.
While humans sometimes show the ability to correct their own erroneous guesses through self-critique, there seems to be no basis for assuming that LLMs can do the same.