JMIR Formative Research (JFR), a sister journal of the Journal of Medical Internet Research (JMIR) and JMIR mHealth and uHealth (the leading eHealth and mHealth journals by impact factor), publishes peer-reviewed, openly accessible papers reporting results from process evaluations, feasibility/pilot studies, and other kinds of formative research. While the original focus was on the design of medical and health-related research and technology innovations, JFR now publishes studies from all areas of medical and health research.
Formative research is research that occurs before a program is designed and implemented, or while a program is being conducted. Formative research can help refine and improve a program's design and activities.
Many funding agencies expect some form of pilot, feasibility, or process evaluation before funding a larger study such as a randomized controlled trial (RCT).
Formative research should be an integral part of developing or adapting programs, and it should be used while a program is ongoing to help refine and improve program activities. Thus, formative evaluation can and should also occur in the form of a process evaluation alongside a summative evaluation such as an RCT.
This journal fills an important gap in the STEM journal landscape: it publishes sound, peer-reviewed formative research that is critical for investigators applying for further funding but is usually not published in outcomes-focused medical journals.
Background: Multiple-choice questions (MCQs) are among the most common methods of assessment in medical education. They are believed to be reliable and efficient, and their quality depends on good item construction. Item analysis assesses their quality by computing the difficulty index, discrimination index, distractor efficiency, and test reliability.

Objective: The aim of this study was to evaluate the quality of MCQs used in the College of Medicine, King Khalid University, Saudi Arabia.

Methods: A cross-sectional study design was used. Item analysis data from 21 MCQ exams were collected. Values for the difficulty index, discrimination index, distractor efficiency, and reliability coefficient were entered into MS Excel 2010, and descriptive statistical parameters were computed.

Results: Twenty-one tests were analyzed. Overall, 7% of the items across all tests were difficult, 35% were easy, and 58% were acceptable. The mean difficulty of all tests fell within the acceptable range of 0.3-0.85. The proportion of items with an acceptable discrimination index ranged from 39% to 98% across tests. Negatively discriminating items were identified in all tests except one. All distractors were functioning in 5%-48% of items. The mean number of functioning distractors per item ranged from 0.77 to 2.25. KR-20 scores lay between 0.47 and 0.97.

Conclusions: Overall, the quality of the items and tests was found to be acceptable. Some items were identified as problematic and need to be revised. The quality of a few tests of specific courses was questionable; these tests need to be revised and steps taken to improve the situation.
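For readers unfamiliar with the indices named in this abstract, the following is a minimal sketch of the standard textbook formulas for dichotomously scored items. It is illustrative only and does not reproduce the authors' spreadsheet; the 27% upper/lower split and the 5% functioning-distractor cutoff are common conventions assumed here, not details taken from the study.

```python
# Standard item-analysis formulas for dichotomously scored MCQs.
# scores[i][j] = 1 if examinee i answered item j correctly, else 0.

def difficulty_index(item_scores):
    """Proportion of examinees answering the item correctly (p-value)."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(scores, item, frac=0.27):
    """Difference in item difficulty between the top and bottom score
    groups (here the conventional 27% of examinees by total score)."""
    ranked = sorted(scores, key=sum, reverse=True)
    n = max(1, round(frac * len(ranked)))
    upper = sum(row[item] for row in ranked[:n])
    lower = sum(row[item] for row in ranked[-n:])
    return (upper - lower) / n

def functioning_distractors(choice_counts, n_examinees, threshold=0.05):
    """Count distractors chosen by more than `threshold` of examinees,
    a common cutoff for calling a distractor 'functioning'."""
    return sum(1 for c in choice_counts if c / n_examinees > threshold)

def kr20(scores):
    """Kuder-Richardson formula 20 reliability coefficient."""
    k = len(scores[0])                    # number of items
    n = len(scores)                       # number of examinees
    totals = [sum(row) for row in scores]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in scores) / n       # item difficulty
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var)
```

For example, with four examinees and three items, `scores = [[1,1,1],[1,1,0],[1,0,0],[0,0,0]]` gives a difficulty index of 0.75 for the first item and a KR-20 of 0.75 for the test as a whole.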