Assessment of clinical competence and of communication skills with patients has
always been a central concern for teachers and faculty members when evaluating
the expected learning outcomes of medical education (Troncon, 2004; Conigliaro
and Stratton, 2010). Several assessment tools for evaluating the clinical skills
of medical and dental students were proposed in the guideline released by the
Association for Medical Education in Europe (AMEE) in 2003 (Shumway and Harden, 2003).
Traditional formats of clinical examination lack objectivity because students
are assessed on different patients by different examiners, each of whom marks
candidates subjectively. The Objective Structured Clinical Examination (OSCE)
has been shown to be a valid and reliable instrument for assessing history
taking, physical examination, treatment planning and management in a comprehensive,
consistent and structured manner (Crossley et al., 2002; McLaughlin et al., 2006;
Ruesseler et al., 2010). It is a multi-station, performance-based examination
format for objectively assessing clinical competence in a structured,
standardized manner (Agarwal et al., 2010). The number and length of stations
can vary depending on the format in use; however, a typical OSCE is said to
consist of 15-20 stations through which students rotate individually
(Alinier, 2003). Each station presents part of a case or problem using
simulated/standardized patients, slides, audio tapes, photographs or laboratory
reports and requires examinees to perform a specific procedure, solve a problem
or record requested findings (Harden, 1988; Agarwal et al., 2010).
Since its introduction, the OSCE has been widely applied at different levels of
training (undergraduate, postgraduate and national board examinations) in many
disciplines of healthcare education, and health professionals have begun to debate
many aspects of this process (Rushforth, 2007). Although it provides an objective
evaluation of a wide range of clinical competencies with a reduced risk of examiner
bias, developing a reliable OSCE requires extensive resources such as personnel
and facilities, funding and support from the faculty, the administrative authority
and students (Hodges et al., 2002a, b; Watson et al., 2002; Schuwirth et al., 2003;
Shumway and Harden, 2003; Bartfay et al., 2004). Dental education has undergone
drastic changes in recent years and authorities have been incorporating innovative
concepts of education and evaluation into dental curricula and educational programs.
The OSCE has been regarded as a successful method of evaluation in different
academic settings; however, there is uncertainty regarding the validity,
reliability and efficacy of the current format implemented in Iranian dental
schools. This study aimed to assess dental faculty members' attitudes toward the
OSCE at Shahid Beheshti University of Medical Sciences in 2010.
MATERIALS AND METHODS
A total of 103 dental faculty members from Shahid Beheshti University of Medical Sciences, Tehran, Iran were invited to participate in this descriptive cross-sectional survey. Those who were not familiar with the OSCE were excluded. To avoid influencing the respondents' answers, the questionnaires were anonymous.
The questionnaires were distributed and collected on the day of the winter
OSCE in 2010. The OSCE took place simultaneously in 10 departments (i.e., Oral
medicine, Periodontics, Oral and maxillofacial surgery, Oral radiology, Prosthodontics,
Operative dentistry, Endodontics, Pedodontics, Orthodontics and Oral pathology).
The format of the exam in each department was determined according to the requirements
and curriculum materials of the specific program. The questionnaire was developed
based on a comprehensive literature review and modified from a previously validated
instrument used to evaluate a similar group of students (Noohi and Motesaddi, 2009).
To ensure content and construct validity, two senior faculty members with experience
in dental education, curriculum design and evaluation reviewed the instrument.
Modifications were made to better fit the study objectives and ambiguous items
were clarified. The instrument was then reviewed again and further refined for
use in the study. It consisted of eight closed-ended questions and the faculty
members were asked to rate each item on a 5-point Likert scale from strongly
agree to strongly disagree. The relative frequency of responses in each department
was calculated and subjected to the Kruskal-Wallis test to determine whether
responses differed significantly between departments.
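As an illustrative sketch only (the ratings below are invented placeholders, not the study data), the departmental comparison described above could be performed with the Kruskal-Wallis H test as follows:

```python
# Illustrative sketch of the analysis step: comparing 5-point Likert
# responses across departments with the Kruskal-Wallis H test.
# All ratings below are invented placeholders, not the study data.
from scipy.stats import kruskal

# 1 = strongly disagree ... 5 = strongly agree, grouped by department
oral_medicine = [4, 5, 3, 4, 4, 5]
periodontics = [3, 4, 4, 5, 3, 4]
orthodontics = [5, 4, 4, 3, 4, 4]

h_stat, p_value = kruskal(oral_medicine, periodontics, orthodontics)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
# p > 0.05 (as with the study's p = 0.163) would indicate no significant
# difference in attitude between departments.
```

In practice the test would be run in this fashion across all 10 participating departments.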
RESULTS AND DISCUSSION
The response rate among faculty members was 67.1% (N = 68). Table 1 shows the frequency (%) of responses to each question. Overall, 37.9% of faculty members found the OSCE an effective tool for assessing communication skills and patient management. A considerable proportion of the examiners (84.4%) believed that the OSCE is a time-consuming method of evaluation, yet 71.3% agreed that bias is significantly reduced in the OSCE compared to other means of evaluation. The majority of faculty members (92.4%) confirmed that standardization among faculty members is necessary for designing and conducting a reliable OSCE.
The overall attitude of the lecturers toward the OSCE was positive and did not
differ significantly between departments (p = 0.163). OSCEs play a central role
in providing a formative evaluation of clinical competence and patient management
skills in different clinical disciplines, including dentistry (Hodges et al., 2002a;
Mavis and Henry, 2002). In order to develop and expand the use of the OSCE, it is
necessary to identify the limitations and weaknesses associated with its application
(Hodges et al., 2002b). Thus, the researchers sought to evaluate dental faculty
members' perceptions of and attitudes toward this method at Shahid Beheshti
University of Medical Sciences.
Troncon (2004) proposed that the absence of a favorable attitude toward innovative
educational programs among students and faculty members in Brazil may account for
the difficulties of implementing the OSCE for clinical evaluation. However, the
present findings demonstrated a positive attitude among the faculty members of the
Shahid Beheshti School of Dentistry in Iran, consistent with the findings of Noohi
and Motesaddi (2009) at Kerman University of Medical Sciences, Kerman, Iran. This
reflects a positive trend toward educational development strategies among Iranian
dental schools. Other studies have further confirmed the success of the OSCE
relative to other assessment methods (Fields et al., 2007).
Table 1: OSCE questions and frequency of responses
Regarding the efficacy of the OSCE as a reliable means of evaluating communication
and patient management skills, the majority of faculty members stated that, in
the presence of a Standardized Patient (SP), this aspect of clinical competence
could be assessed effectively. SPs may be lay persons trained to portray a scripted
patient presentation in a standardized and consistent fashion, or healthcare
professionals such as senior residents or faculty members (Wang et al., 2004;
Iramaneerat et al., 2008).
In the modified OSCE carried out in 2009 in Pakistan, inclusion of a standardized
patient as one of the exam components was shown to be effective in evaluating
different aspects of clinical competence, including patient management (Iqbal
et al., 2009). Cannick et al. (2007) performed an OSCE to evaluate the communication
skills and tobacco cessation counseling technique of dental students and reported
that the OSCE is a unique evaluation tool that can provide a standardized assessment
of students' competency in several areas of dental education, including communication
skills, medical history taking and treatment planning. Other studies have also shown
the use of standardized patients to be effective in evaluating patient management
skills in pediatric dentistry and geriatric practice (Fabiny et al., 1998;
Zartman et al., 2002; Tung and Thomas, 2009). Although training proctors (dentists
or non-dentists) to act as standardized patients may entail specific costs, the
benefits accrued can easily repay the effort and resource expenditure (Kay
et al., 1994; Wallace et al., 2002; Amano et al., 2004).
In the present study, most faculty members believed that the OSCE is less prone
to bias than other tools for evaluating clinical competencies (Table 1).
Developing a standard checklist for the examiners/raters that includes all the
tasks the student is expected to demonstrate renders the scoring process fair
and enables faculty members to judge the student's clinical knowledge and
performance (Doig et al., 2000; Zartman et al., 2002).
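As a minimal sketch of such checklist-based scoring (the task names and weights below are hypothetical assumptions, not items from the study instrument), a station score can be computed as the weighted fraction of tasks the rater marks as demonstrated:

```python
# Hypothetical sketch of checklist-based OSCE station scoring; the task
# names and weights are illustrative assumptions, not the study's checklist.

CHECKLIST = {
    "greets patient and confirms identity": 1,
    "takes a focused medical history": 2,
    "explains the proposed treatment plan": 2,
    "obtains informed consent": 1,
}

def station_score(observed: dict) -> float:
    """Return the weighted fraction of checklist tasks the candidate completed."""
    total = sum(CHECKLIST.values())
    earned = sum(weight for task, weight in CHECKLIST.items() if observed.get(task))
    return earned / total

# Example: one rater's marks for one candidate at one station.
marks = {
    "greets patient and confirms identity": True,
    "takes a focused medical history": True,
    "explains the proposed treatment plan": False,
    "obtains informed consent": True,
}
print(f"Station score: {station_score(marks):.0%}")  # -> 67%
```

An overall OSCE grade would then aggregate such station scores, with the shared checklist keeping the process uniform across raters.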
Numerous studies have documented that examination bias is significantly reduced
in a well-designed OSCE (El-Nemer and Kandeel, 2009; Iqbal et al., 2009). However,
some researchers believe that OSCEs are vulnerable to systematic biases due to
rater fatigue. Incorporating frequent rest periods for the examiners and involving
multiple raters who evaluate the examinee's performance against a standard checklist
could reduce the possibility of such errors (Humphris and Kaney, 2001; McLaughlin
et al., 2009). Regarding the number of stations, the researchers found limited data
on whether it affects the quality of the OSCE. Evidence suggests that the number of
stations should be determined by the curricular material and the time available
to perform the OSCE (Zartman et al., 2002; Faryabi and Farzad, 2010). In the present
study, however, the majority of the faculty members surveyed believed that a greater
number of stations increases the internal validity of the OSCE (Table 1). An OSCE
in dentistry may include written or practical stations. Thus, a professional team
of specialists in each specific program of dentistry, together with experts in
dental education, should be responsible for designing and implementing the OSCE
and training the standardized patients. The majority of the faculty members in this
study believed that designing a robust, reliable and standard OSCE requires more
time, specific training and standardization of the organizing team.
This finding was consistent with other reports which documented the high costs
associated with implementing a successful OSCE (Holmboe and Hawkins, 1998;
Barman, 2005; Humani and Foroughi, 2008; Iqbal et al., 2009). However, despite
the extensive resources and expenditure required, the OSCE has been shown to be
the most reliable means of evaluating practical and clinical skills in different
disciplines of healthcare education and should therefore be developed further to
meet all the requirements of a successful assessment tool (Furlong et al., 2005;
Saboury and Dastjerdie, 2010).
The results indicate that although the faculty members found the OSCE an expensive and time-consuming method, if designed and implemented by a professional team it can render valuable results in evaluating students' clinical knowledge and performance.