Research Note

Examining different artificial intelligence models’ ability to pass Certificate of Theory in Accountancy-level tax questions

Asheer J. Ram, Wayne van Zijl
South African Journal of Economic and Management Sciences | Vol 29, No 1 | a6348 | DOI: https://doi.org/10.4102/sajems.v29i1.6348 | © 2026 Asheer J. Ram, Wayne van Zijl | This work is licensed under CC Attribution 4.0
Submitted: 12 June 2025 | Published: 23 January 2026

About the author(s)

Asheer J. Ram, Margo Steele School of Accountancy, Faculty of Commerce, Law and Management, University of the Witwatersrand, Johannesburg, South Africa
Wayne van Zijl, Margo Steele School of Accountancy, Faculty of Commerce, Law and Management, University of the Witwatersrand, Johannesburg, South Africa

Abstract

As artificial intelligence (AI) models become more sophisticated and entrenched in the accountancy profession, questions arise about their ability to outperform humans. This article is one of the first to examine the ability of five different AI models to pass professional tax examinations.
Contribution: This article provides evidence about AI's current ability to support or replace tax practitioners and establishes a baseline for tracking the progress of different AI models as they evolve. Only Grok passed, while ChatGPT, Claude, Copilot, and Gemini failed. Notably, the AI models provided persuasive answers despite being incorrect, undermining their suitability to replace tax practitioners.


Keywords

artificial intelligence; education; ChatGPT; Claude; Copilot; Gemini; Grok; taxation

Sustainable Development Goal

Goal 8: Decent work and economic growth

