We use large language models to examine the informational value of context in financial statements’ narrative text for explaining the mapping from income to taxes. Conceptually, tax expense is a function of taxable income, where the function reflects the applicable statutory tax rate. In practice, however, this function is distorted by accounting rules and heterogeneous tax codes. We aim to understand to what extent corporate narrative disclosures explain these distortions. To do so, we quantify textual information disclosed in annual reports and train deep neural networks that use this information as contextual input to explain deviations between book income and tax outcomes. We show that context has significant explanatory power, improving the mapping between book income and tax expense by about 12%. This improvement does not extend to cash taxes paid, consistent with a closer alignment of narrative disclosures with accruals-based numbers than with cash-based numbers. We further compare the contextual value of MD&A disclosures with that of the narrative information included in the income tax footnote, finding that MD&As are more informative in explaining tax expense, whereas income tax footnotes are more informative in explaining cash taxes paid. We also provide insights into the relationship between context informativeness and the narratives’ underlying disclosure topics, as well as into the sensitivity of our results to firm characteristics. Collectively, our findings demonstrate the value of contextual information in understanding distortions between book and tax numbers.
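To illustrate the general setup described above, the sketch below shows one plausible way a deep network could map pre-tax book income plus a narrative-text embedding to tax expense. This is a minimal, hypothetical example: the architecture, dimensions (embed_dim, hidden_dim), synthetic data, and training loop are our assumptions for exposition, not the authors’ actual model or data.

```python
# Hypothetical sketch (not the paper's implementation): predict tax expense
# from book income plus a document embedding of narrative disclosures.
import torch
import torch.nn as nn

class TaxMappingModel(nn.Module):
    def __init__(self, embed_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        # Input: pre-tax book income (1 scalar) concatenated with a
        # narrative-text embedding (embed_dim features).
        self.net = nn.Sequential(
            nn.Linear(1 + embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # predicted tax expense
        )

    def forward(self, book_income: torch.Tensor, text_embedding: torch.Tensor) -> torch.Tensor:
        x = torch.cat([book_income, text_embedding], dim=-1)
        return self.net(x)

# Synthetic example: 32 firm-years with 128-dimensional text embeddings.
model = TaxMappingModel()
income = torch.randn(32, 1)
context = torch.randn(32, 128)
tax_expense = torch.randn(32, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(5):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(income, context), tax_expense)
    loss.backward()
    optimizer.step()
```

Under this setup, the contextual value of narrative text could be assessed by comparing out-of-sample explanatory power with and without the text embedding as an input.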