
ABN Written Exam Development

The ABN Written Examination gives candidates the opportunity to demonstrate their knowledge of important topics related to the practice of clinical neuropsychology in a standardized multiple-choice format. The passing score is 70% correct. Examination development included specific attention to content validity, with reference to broadly recognized domains of didactic and experiential training in neuropsychology and the neurosciences, including those outlined in the training model of the Houston Conference (Hannay et al., 1998).

Item Selection

The ABN written examination is intended to assess general knowledge in clinical neuropsychology. Development of the examination began conceptually in 2003, with data collection beginning in 2004. Test questions were submitted in multiple-choice format by ABN board certified neuropsychologists who were also ABN board examiners. The question authors provided citations, which were checked for accuracy. A subgroup of five board certified examiners was also trained in question writing based on Measurement and Assessment in Education (Reynolds et al., 2006). The questions were then examined, discussed, debated, reworded, and clarified. Some questions were dropped as inappropriate to the purpose of the assessment.

Initial Standardization of Question Pool

The body of questions was then administered to a group of 30 ABN diplomates. Questions demonstrating floor or ceiling effects were eliminated. For example, if all individuals answered a given question correctly, it was eliminated as too easy; if all individuals answered a question incorrectly, it was eliminated as too difficult. The remaining questions were then reworked using the item-writing principles from Measurement and Assessment in Education noted above. A final pool of 317 questions was retained.
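The floor/ceiling screening rule described above can be sketched as follows. The function and the pilot data are hypothetical illustrations of the rule, not the actual procedure or responses used in the standardization.

```python
# Sketch of the floor/ceiling screening described above: drop any item
# that every pilot examinee answered correctly (ceiling effect) or that
# no examinee answered correctly (floor effect). Data are hypothetical.

def screen_items(responses):
    """responses: dict mapping item id -> list of 0/1 scores (1 = correct).
    Returns ids of items showing neither a floor nor a ceiling effect."""
    retained = []
    for item, scores in responses.items():
        p = sum(scores) / len(scores)  # proportion answering correctly
        if 0.0 < p < 1.0:              # keep items with some spread
            retained.append(item)
    return retained

pilot = {
    "q1": [1, 1, 1, 1],  # everyone correct -> too easy, dropped
    "q2": [0, 0, 0, 0],  # everyone wrong  -> too hard, dropped
    "q3": [1, 0, 1, 1],  # mixed performance -> retained
}
print(screen_items(pilot))  # -> ['q3']
```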

Survey of Important Training and Clinical Knowledge Areas

The ABN membership was then surveyed about the areas of clinical importance identified in the Houston Conference training guidelines.

Respondents were asked to rate the importance of particular knowledge areas in the practice of clinical neuropsychology, the frequency with which these areas are used, and the potential harm possible if these knowledge areas were absent. From these ratings, the relative weighting of the number of questions needed in each area was determined. Ratings indicated that each domain should represent the following percentage of questions on the examination:

  1. Brain Behavior Relationships  15%
  2. Psychometrics                 13%
  3. Neurological Disorders        12%
  4. Psychiatric Disorders         12%
  5. Neuroanatomy                  11%
  6. Neuropathology                11%
  7. Neuroscience                  10%
  8. Neuroimaging                   9%
  9. Neurochemistry                 7%

Each question was then assigned to the most appropriate knowledge domain. Questions that did not fit clearly into any of the assessment domains were not assigned. The domain weightings were used to choose the number of questions needed from each domain (e.g., 15 questions representing brain-behavior relations for each form). From this, a final pool of 200 items was drawn.
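The translation from survey-derived domain weights into item counts for a 100-item form can be sketched as follows, using the percentages listed above. This is an illustration of the arithmetic, not the actual selection procedure.

```python
# Converting the survey-derived domain weights (percentages from the
# list above) into item counts for a single 100-item form.
weights = {
    "Brain Behavior Relationships": 15,
    "Psychometrics": 13,
    "Neurological Disorders": 12,
    "Psychiatric Disorders": 12,
    "Neuroanatomy": 11,
    "Neuropathology": 11,
    "Neuroscience": 10,
    "Neuroimaging": 9,
    "Neurochemistry": 7,
}
form_length = 100
counts = {d: round(form_length * w / 100) for d, w in weights.items()}
assert sum(counts.values()) == form_length  # weights sum to 100%
print(counts["Brain Behavior Relationships"])  # -> 15
```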

Assignment to Form A or Form B

The 200 items were then administered to 25 practicing ABN diplomates, each of whom completed all 200 questions. The passing frequency for each question was calculated. This information was then used to divide the items into two 100-item forms (A and B) such that each knowledge domain was approximately equally weighted across both forms and the overall pass rate was generally equivalent between the two forms. A paired-samples t test revealed that none of the individual knowledge areas differed significantly (p > .05) between Form A and Form B. Overall performance on Form A versus Form B showed a correlation of .99 (p < .001) between the two forms. The two forms were therefore judged to assess comparable domain-specific knowledge in a sufficiently comparable manner to support their use as parallel forms. The overall mean score for Forms A and B combined was 70%. Below is a representation of the percentage of items for each knowledge domain, and summary data for the equivalence of Forms A and B.

Knowledge Domain         % of Item Pool   Form A % Correct   Form B % Correct   Mean A&B
Brain-Beh Relations            15%              68.8               67.7            68.3
Psychometrics                  13%              73.5               72.9            73.2
Neurological Disorders         12%              61.7               63.7            62.7
Psychiatric Disorders          12%              70.0               71.0            70.5
Neuroanatomy                   11%              73.1               70.5            71.8
Neuropathology                 11%              60.0               61.5            60.7
Neuroscience                   10%              80.0               82.4            81.2
Neuroimaging                    9%              74.7               79.6            77.1
Neurochemistry                  7%              64.6               65.1            64.9
All Domains                   100%              69.6               70.5            70.0
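The equivalence checks described above can be sketched with the published per-domain percent-correct values from the table. Note that the original analyses used examinee-level scores, which are not available here, so the statistics below are illustrative and will differ numerically from those reported in the text.

```python
# Paired-samples t statistic and Pearson correlation for Forms A and B,
# computed on the domain-level percent-correct values from the table above.
# Illustrative only: the published analyses used examinee-level data.
import math

form_a = [68.8, 73.5, 61.7, 70.0, 73.1, 60.0, 80.0, 74.7, 64.6]
form_b = [67.7, 72.9, 63.7, 71.0, 70.5, 61.5, 82.4, 79.6, 65.1]
n = len(form_a)

# Paired-samples t statistic on the per-domain differences.
diffs = [a - b for a, b in zip(form_a, form_b)]
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))

# Pearson correlation between the two forms' domain scores.
ma, mb = sum(form_a) / n, sum(form_b) / n
num = sum((a - ma) * (b - mb) for a, b in zip(form_a, form_b))
den = math.sqrt(sum((a - ma) ** 2 for a in form_a) *
               sum((b - mb) ** 2 for b in form_b))
r = num / den

print(f"t = {t:.2f}, r = {r:.2f}")
# |t| falls well below the two-tailed .05 critical value of 2.306 (df = 8),
# and r is high, consistent with the reported equivalence of the two forms.
```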



Cross Validation

Following this step, both forms of the multiple-choice exam were administered to an additional 50 ABN diplomates. Form A and Form B were administered in counterbalanced fashion. The performance of each question was carefully scrutinized to ensure that it was representative of basic neuropsychological knowledge, consistent with the level generally expected at board certification.

Lastly, 20 board certified neuropsychologists were administered the revised 100-item versions of Form A and Form B in counterbalanced fashion. For these additional 20 subjects, the correlation between the two forms was r = .81. The mean of Form A was 71.1 and the mean of Form B was 68.9. The overall mean score for the combined Form A and Form B groups was 70.0, in agreement with the value for the original standardization group of 25. Thus, 70% was established as the passing score for the examination.

Examination Updates

The ABN written examination underwent an analysis of data collected from 2009 to 2014. This analysis was performed by an outside consultant, who proposed that minor changes in the wording of some questions would further improve the examination. These wording changes were made. Additional data were collected in the interim, including samples of individuals with varying levels of training in psychology, with and without specialized neuropsychology training. In early 2016, the collected data were again analyzed by the consultant in comparison to the 2014 data set. This analysis included an item analysis and a signal detection analysis, which showed significant improvement in the statistical properties of the examination. Signal detection analysis with an assumed competence rate of 90% revealed that the revised exam shows an improved classification rate, with a sensitivity of .84 and a specificity of 1.00. The examination exhibits improved overall item quality as well; the effect size for improvement in item quality was 1.50, which was statistically significant (p < .0001).
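The sensitivity and specificity figures quoted above follow the standard signal-detection definitions. The confusion counts in the sketch below are hypothetical, chosen only to reproduce the reported rates; they are not the consultant's actual data.

```python
# Standard signal-detection classification rates. The counts used in the
# example calls are hypothetical and merely reproduce the reported values
# (sensitivity .84, specificity 1.00).

def sensitivity(true_pos, false_neg):
    """Proportion of competent candidates correctly classified as passing."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of non-competent candidates correctly classified as failing."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts: 84 of 100 competent candidates pass the exam,
# and none of 10 non-competent candidates pass.
print(sensitivity(84, 16))  # -> 0.84
print(specificity(10, 0))   # -> 1.0
```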

The consultant’s 2016 report will be used as the basis for ongoing monitoring and revision of the examination moving forward.  ABN sees the written examination as a dynamic entity, requiring continual review and revision in order to optimize the reliability and validity of the examination, and in order to maintain relevance of content as the field of neuropsychology continues to grow.


Hannay, H. J. (1998). Proceedings of the Houston conference on specialty education and training in clinical neuropsychology, September 3–7, 1997, University of Houston Hilton and Conference Center. Archives of Clinical Neuropsychology, 13(2), 157-158.

Reynolds, C. R., Livingston, R. B., & Willson, V. (2006). Measurement and Assessment in Education (1st ed.). Boston, MA: Allyn & Bacon/Merrill.

*Updated 05/01/16, based on 2016 review and analyses of archival and current data, with slight revisions in the item table. Derivation of the passing score is clarified.