Assessment Solutions
Table of Contents
Executive Summary
Introduction
Item Authoring
    Exam Objectives
    Blueprinting
    Test Creation
    Reviewing
    Field Testing
Hosted Online Assessment Engine
    Authoring and Question Bank Maintenance
        Practice and Certification Tests
        Review / Workflow
    Test Configuration & Delivery
        Test Generation and Delivery
        Test Instruction
        Question Pages
        Review Questions
        Feedback and Hints
        Marks Configuration
        Auto Grading
        Importing and Exporting Tests
        Proctoring
    Reporting
        Individual Student Performance Chart
        Comparative Analysis
        Individual Student Activity Details
        Performance
    Scheduling and Administration
        User Management
        Test Schedule Management
        Extensibility
        Multilingual
        High Availability
        Open API Based Architecture
        Ease of Customization
Hosting
Physical Test Delivery
    Online Delivery
    Testing Centers
    Other Unique Assessment Types
    Proctoring
Psychometric Analysis
    Measurement Theories and Models
    Classical Test Theory
        Item Analysis
        Internal Consistency Analysis
        Test Reliability Analysis
        Test Validity Analysis
        Standardization and Norm Setting
    Item Response Theory (IRT)
        One-Parameter Logistic Model
        Two-Parameter Logistic Model
        Three-Parameter Logistic Model
    Tools and Technology
Other Services
    Customization and Integration
    Technical Support
    Helpdesk
    Mentoring
    Test-Process Auditing
    Pre-Test Screening
    Scheduling and Administration
Executive Summary
Strategy: strategy for diagnostic assessment, formative tests, and summative certification based on program needs
Assessment Design: planning for difficulty levels, discrimination requirements, and the statistical plan
Test Item Development & Management: item bank creation based on the overall strategy, audience, and exposure size; item sunsetting strategies
Psychometric Services: field testing for item characteristics, test characteristics, and reliability and validity analyses
Technology Integration & Management: IMS QTI compliant test bank; flexible workflows and advanced statistical analysis tools
Assessment Administration: over 2,500 physical locations for test delivery; hosted test engine and data management services
NIIT's Assessment Solutions practice provides a complete range of offerings, from strategy and design to implementation and administration, to customers in the Corporate, Education, and Government segments. NIIT provides full solutions as well as individual components of its offerings, for both low-stakes and high-stakes tests. The products and services include:
- Assessment Strategy Design & Blueprinting
- Item Authoring
- Assessment Engine Hosting
- Delivery
- Reporting and Psychometric Analysis
- Scheduling, Administration, Helpdesk, Proctoring, and Technical Support
Introduction
[Figure: components of the assessment solution — Item Authoring (design, web-based authoring), Hosted Engine (randomization, multiple question types), Delivery/Admin (online delivery, testing centers, proctoring), Security (standards compliance), Psychometric Analysis (item analysis), Other Services (helpdesk, pre-test screening)]
Item Authoring
Item authoring is a specialized skill that requires both formal training and supporting systems and processes if items are to be scientific and effective. For any test on any subject, item authoring follows a standard process. On an ongoing basis, over-exposed and poor-performing items must be retired and new ones added. Test creation follows the process outlined below:
Blueprint Creation → Test Creation → NIIT Test Review & Fixes → Review by Client → Completed Test
Exam Objectives
In this phase, the client provides objectives identified according to the skill level to be tested. The objectives are further split into specific outcomes of assessment (SOAs), which specify the cognitive ability to be assessed.
Blueprinting
Next, the NIIT Test Design Team creates the blueprint. A test designer creates the blueprint with the help of the instructors and an analyst, and the program chair reviews it. The blueprint is a specification table that determines the configuration of a test; it lays down rules for the composition of the test. The blueprint ensures test reliability and defines the:
- Exam objectives
- Difficulty level for each test item
- Types of questions, such as multiple choice, sequencing, or match the following
- Percentage distribution across ability levels for each objective
The blueprint also enables the analyst and designer to decide the weight assigned to a topic or an SOA, which in turn defines the number of test items to be created for that topic or SOA. The weight assigned to a topic is decided according to the:
- Importance of the topic for measuring the particular ability
- Importance of the topic in the context of the overall assessment
The weight assigned to a topic decides its relative importance and helps define the marks to be allocated to each test item. The blueprint is developed to ensure that:
- The weight given to each topic in each test is appropriate, so that important topics are not neglected. This contributes to the validity of the test.
- The abilities tested are appropriate; for example, there are sufficient questions requiring application and understanding of logical reasoning.
- The weight of topics and abilities is consistent from test to test; this contributes to the reliability of the test.
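The weight-to-item-count step described above can be sketched as a simple proportional allocation. This is an illustrative sketch, not NIIT's actual procedure; the rounding adjustment is an assumption.

```python
# Illustrative sketch: split a test's item count across topics in
# proportion to their blueprint weights (rounding scheme is assumed).

def allocate_items(weights, total_items):
    """Map each topic to its share of `total_items`, proportional to weight."""
    total_weight = sum(weights.values())
    counts = {t: round(total_items * w / total_weight) for t, w in weights.items()}
    # Adjust for rounding drift so the counts sum exactly to total_items.
    drift = total_items - sum(counts.values())
    if drift:
        top = max(counts, key=counts.get)
        counts[top] += drift
    return counts
```

For example, weights of 50/30/20 over a 40-item test yield 20, 12, and 8 items per topic.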
Final Blueprint
From the blueprint, a Test Configuration Table is derived by applying the testing conditions. Testing conditions such as the number of items in a test, the time allowed, the maximum marks, and the marks assigned to each test item should be determined after careful consideration. The table consists of the:
- Randomization strategy (randomization can be based on item difficulty, exam objectives, or a combination of both)
- Item scoring details
- Negative scoring (Y/N)
- Time allocation
- Number of questions
- Cut score
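The contents of such a configuration table can be sketched as a simple structure; the field names below are hypothetical, not an actual schema.

```python
# Illustrative sketch of a Test Configuration Table entry.
# Field names are hypothetical, not an actual NIIT/CLiKS schema.

test_configuration = {
    "randomization": "difficulty",   # "difficulty", "objective", or "both"
    "item_scoring": {"correct": 1.0, "incorrect": 0.0},
    "negative_scoring": False,       # the Y/N flag from the blueprint
    "time_allowed_minutes": 90,
    "number_of_questions": 60,
    "cut_score_percent": 70,
}

def passes(raw_score, config):
    """Apply the configured cut score to a candidate's raw score."""
    max_marks = config["number_of_questions"] * config["item_scoring"]["correct"]
    return (raw_score / max_marks) * 100 >= config["cut_score_percent"]
```

With the configuration above, a raw score of 45 out of 60 (75%) clears the 70% cut score, while 40 (about 67%) does not.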
Test Creation
Test creation follows blueprint creation. It involves the actual writing of the test items and is done by the NIIT Test Creation Team. To do this, the team:
- Identifies the difficulty level for each identified SOA based on Bloom's Taxonomy of cognitive ability
- Identifies the item type for each SOA based on the analysis and the difficulty level
- Creates each test item based on the NIIT Test Item Creation Standards and Guidelines, which are founded on sound Instructional Design principles and correct use of language
Reviewing
After the items have been authored, each item must go through a series of rigorous reviews to eliminate errors, ambiguity, and biases of any kind.

NIIT Review
After item creation, items are reviewed and fixed in the NIIT Test Review and Fixes phase. Each test item is checked against various parameters to ensure that the right ability is tested with the right test item; only items that clear the review process are used in a test. Reviews are of the following types:
- ID Review: ensures items are in accordance with Instructional Design principles
- Language Review: ensures clarity of language
- Technical Review: ensures items are technically correct

Review by Client
The review phase is followed by Review by Client. In this step, the Program Chair reviews the items and identifies any changes. Fixes (if any) suggested by the client are made by the NIIT Test Creation Team. The test is then ready for delivery.
Field Testing
Once the items are ready for deployment, they are put through a field test in which a statistically significant number of test-takers, representative of the final test-taker audience, respond to all the items in a controlled environment. The results from this round of testing are subjected to statistical analysis to assess difficulty, discrimination, and the performance of the distractors. Poorly performing items are modified or dropped.

Item Maintenance
Over time, based on exposure and on how well each item has performed in tests, some items need to be retired periodically and replaced with new items. This is an ongoing activity.
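The difficulty and discrimination statistics computed from field-test data can be sketched as follows. This is a minimal illustration of the classical p-value and the extreme-group discrimination index; the data format and the 27% group fraction are assumptions.

```python
# Sketch of classical item analysis on field-test responses.
# Assumed data format: per test-taker, 1 = correct / 0 = incorrect on one
# item, optionally paired with that test-taker's total test score.

def item_difficulty(responses):
    """p-value: proportion of test-takers answering the item correctly."""
    return sum(responses) / len(responses)

def discrimination_index(scored, frac=0.27):
    """Extreme-group discrimination: p(upper group) - p(lower group).

    `scored` is a list of (item_correct, total_score) pairs; the upper and
    lower groups are the top and bottom `frac` of test-takers by total score.
    """
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    k = max(1, int(len(ranked) * frac))
    upper = [correct for correct, _ in ranked[:k]]
    lower = [correct for correct, _ in ranked[-k:]]
    return item_difficulty(upper) - item_difficulty(lower)
```

An item answered correctly by the strongest test-takers and missed by the weakest ones scores a discrimination index near 1.0; an index near zero (or negative) flags the item for modification or retirement.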
Hosted Online Assessment Engine

The hosted assessment engine comprises the following components:
- Test Item Authoring: provides workflow-based question authoring functionality with a built-in multi-level review mechanism.
- Test Configuration: allows for generating exams based on specifications. Interfaces with Authoring for the specification and sends the generated item list to Authoring.
- Test Delivery: launches an individual candidate's test using the test pack allocated for that candidate. Interfaces with Authoring and sends the results in a standard format.
- Performance Reporting: interfaces with all the components above and generates various standard and custom reports for analysis.
- Test and Candidate Administration: allows for scheduling exams for individuals or batches, along with proctor keys.
The supported question types can be presented to the user in different presentation styles. The assessment engine maintains two sets of information for each question: the type of question it represents and the corresponding presentation format. Additional question types can easily be incorporated into the engine depending on the requirements of the organization or university.

Practice and Certification Tests
A test can be defined as a Practice Test or a Certification Test. Practice tests enable users to check their understanding of the subject and pursue remediation depending on the feedback; feedback is displayed for each question in a practice test. Certification is an acknowledgment of the skills possessed by an individual: it involves evaluating the skills a person possesses and providing a result along with feedback.

Tests can be configured in both static and randomized modes. In a static test, all candidates taking the test get the same set of questions; in a randomized test, the question set presented to each candidate is unique. The assessment engine can pick questions for a test from a large question bank based on the test configuration. The system allows sections to be created within a test. Sequences within a section can be predetermined, along with specifications for the distribution of questions and online analysis based on statistics. Sections can also be configured so that a user is allowed, or not allowed, to move on to the next section without successfully completing the current one.

Review / Workflow
Content is always reviewed before it is allowed for publication. CLiKS provides the Workflow module to facilitate the online review and correction of content before it is published. When a user creates a content piece such as an item or a test, the engine requires it to be reviewed and approved through a defined, configurable workflow process before it is published.
After the initial creation, the content follows a specified path through the levels of review and edits. At each level, other roles have specific jobs to perform on the item. Reviewers can send the item back to the creator for changes; alternatively, items can be forwarded to the next-level reviewer until final approval. The Workflow module supports the 3Rs: Routes, Rules, and Roles. A Route is the path that content takes while undergoing review; the path has levels, which are assigned to appropriate roles. Roles are the system roles assigned to act upon the content. Rules are the conditions specified while setting up a workflow cycle, defining the decisions and actions that a reviewer can take on the content at different levels.
Together, these three Rs facilitate the functioning of the workflow process in CLiKS.
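The Routes/Rules/Roles idea above can be sketched as a small state machine. The route, role names, and action names below are illustrative, not the actual CLiKS workflow API.

```python
# Minimal sketch of a review workflow: a route of levels, each assigned to
# a role, with rules mapping a reviewer's action to the next level.
# (Route and action names are hypothetical, not CLiKS's actual API.)

route = ["author", "id_reviewer", "language_reviewer", "technical_reviewer"]

def act(level, action):
    """Apply a reviewer's action at a route level; return the next level.

    Rules: 'approve' forwards the item one level (past the last level
    means published); 'send_back' returns the item to the creator.
    """
    if action == "approve":
        return min(level + 1, len(route))   # len(route) == published
    if action == "send_back":
        return 0                            # back to the creator (level 0)
    raise ValueError(f"unknown action: {action}")
```

An item created at level 0 must be approved at every level of the route before it reaches the published state; a send-back at any level restarts the cycle.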
Test Generation and Delivery
When a candidate starts a test, a test configuration from the appropriate test pack is randomly picked and assigned to the candidate; thereafter, the test is delivered using that same set of questions. The system searches for questions in the Question Pool according to the configuration of the test assigned to the candidate. It processes questions in batches from the generated question pool and displays them on screen. The first set of questions is displayed as soon as the Testing Engine selects them; while the test-taker attempts these questions, the engine keeps selecting further questions and caching them, and subsequent questions are displayed from this cache. This improves performance: the test-taker need not wait until the engine has selected all the questions for the test, and the system need not wait for the candidate to attempt the displayed question before fetching the next one from the question pool.
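The batched, cache-ahead delivery described above can be sketched as follows. This single-threaded simulation is illustrative only; the real engine fills the cache asynchronously while the candidate works.

```python
# Sketch of batched question delivery: serve the first batch immediately
# and keep topping up a cache so the test-taker never waits for selection.
# (Single-threaded simulation; class and method names are hypothetical.)

from collections import deque

class QuestionCache:
    def __init__(self, question_pool, batch_size=5):
        self.pool = iter(question_pool)
        self.batch_size = batch_size
        self.cache = deque()
        self._fill()            # the first batch is available right away

    def _fill(self):
        for _ in range(self.batch_size):
            try:
                self.cache.append(next(self.pool))
            except StopIteration:
                break

    def next_question(self):
        """Serve from the cache, then top it up in the 'background'."""
        question = self.cache.popleft()
        self._fill()
        return question
```

Because each serve triggers a refill, the cache stays ahead of the test-taker for the whole test while preserving question order.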
Test Instruction
The test starts with a set of instructions. The figure below displays the attributes and parameters of the test as they appear on the instructions screen.
Question Pages
Each question in the assessment is displayed on its own page, and each question page includes the following buttons:
- Review: to review and revise all questions in the section. Make sure that all questions are attempted before pressing the End Assessment button.
- Section List: to move to the next section, once you have attempted all questions in the current section.
- Instructions: to return to the initial instructions page.
- Previous Question: to go back to the previous question.
- Submit Answer: to submit your answer and go to the next question.
- Skip Question: to skip the current question and go on to the next one. Un-attempted questions will be marked incorrect, so make sure that you attempt all the questions before clicking End Assessment. You can review and revise all questions in the section by clicking Review Questions.
- End Assessment: to be used when you have reviewed, answered, and submitted an answer for every question of the assessment. You can confirm that all the questions have been attempted by clicking the Review Section button.
Question Types
The Assessment Engine can incorporate a diverse set of questions that meets the requirements of most Instructional Design experts. The following question types are supported through readily available templates:
- Multiple Choice Single Select
- Multiple Choice Multiple Select
- True and False
- Fill in the Blanks (text and numeric)
- Match the Following
- Free Text / Essay Type (subjective response)
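The engine's separation of question type from presentation format, noted earlier, can be sketched with a small item model. All field names here are hypothetical, not the engine's actual data model, and the grading rule is a simplification.

```python
# Illustrative item model separating question *type* from *presentation*
# format. Field names are hypothetical, not the engine's actual schema.

from dataclasses import dataclass, field

@dataclass
class Item:
    stem: str
    item_type: str                 # e.g. "mcq_single", "true_false", "match"
    presentation: str = "default"  # rendering style, independent of the type
    options: list = field(default_factory=list)
    answer_key: object = None

def auto_grade(item, response):
    """Objective item types can be graded automatically; essays cannot."""
    if item.item_type == "essay":
        raise ValueError("subjective items need manual grading")
    return response == item.answer_key
```

The same item can then be rendered in different presentation styles (radio buttons, drop-down, drag targets) without touching its type or answer key.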
Review Questions
The Review Questions page, as shown in the following figure, displays all the questions and indicates whether or not an answer has been submitted for each. This page displays the status of all attempts made or not made by the learner, and allows the learner to keep track of progress during the assessment.
Feedback and Hints
The assessment engine enables a test creator to provide feedback on the learners' responses; the feedback corrects the learner and reinforces the concept. The engine also enables a test creator to provide hints while the learner is taking a test. In a certification test, it is possible to penalize learners for using hints.

Marks Configuration
This enables a test creator to configure marks at the time of test creation. Learners can be marked either in grades or in percentages. The test creator can configure negative marks for each wrong response, and full marks or no marks for a particular question; the marks can be based on the rendering type of the question or on its difficulty level. This gives the marking system flexibility and the test creator discretionary power.

Auto Grading
A test creator can select auto grading for a test. In this case, the test creator need not configure the marking at the time of creating the test; the learner is graded automatically according to the default marks set in the system.

Importing and Exporting Tests
The assessment engine is IMS QTI compliant, so tests and questions can easily be imported from and exported to other standards-compliant systems, promoting reuse of existing content.

Proctoring
In a certification test, candidates need to take the test in a controlled environment where their identity is co-signed by the invigilating authority assigned to the test. The delivery engine has an interface for the proctor to co-sign candidates, and proctors are assigned to tests when the test is scheduled. CLiKS provides the following proctoring mechanisms:

Individual Co-signing
In this form of proctoring, the assessment engine launches a screen to accept the proctor's login credentials as the candidate starts a test. This applies only to tests for which proctoring has been enabled during configuration.
In another individual co-signing mechanism, the invigilator is provided a unique random key for every examination. The invigilator can write up or announce the key to the students in the examination hall.

Mass Co-signing or Group Proctoring
Using this mechanism, a proctor can co-sign a group or batch of students attempting a test from a central location, using an interface provided by CLiKS. This interface displays the students taking an assessment; after the proctor has verified the students, all of them can be selected and co-signed.
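The per-examination proctor-key mechanism can be sketched as follows. The function names and key format are illustrative assumptions, not the actual CLiKS interface.

```python
# Sketch of the per-examination proctor-key idea: a unique random key is
# issued per exam, announced in the hall, and checked at test launch.
# (Function names and key format are hypothetical, not CLiKS's API.)

import secrets

def issue_proctor_key(length=6):
    """Generate the random key the invigilator announces in the hall."""
    alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"  # avoids look-alike glyphs
    return "".join(secrets.choice(alphabet) for _ in range(length))

def co_sign(entered_key, session_key):
    """A candidate's test launches only when the announced key matches."""
    return secrets.compare_digest(entered_key.strip().upper(), session_key)
```

Normalizing the candidate's input (trimming whitespace, uppercasing) keeps the check forgiving of typing style while the constant-time comparison avoids leaking key contents.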
Reporting
The purpose of the Performance Tracker is to generate reports that enable analysis of performance and evaluation of the outcomes of students' learning and assessments. It is also possible to generate administration reports for monitoring usage and the details of key information set up in the system.

Individual Student Performance Chart
This report shows the performance of a single student in a graphical format: the student's score in various tests. It enables viewing and comparing the performance of a student across the tests in a module. The intended end users are the teaching staff, who can review the performance of students in their own batches.

Comparative Analysis
This report shows the trend of a batch for a test in a graphical format: how the students are performing in a particular test. It enables viewing and judging the average scoring ability of candidates. The report presents scores in blocks of 10 along the x-axis and the percentage of candidates who fall in each block along the y-axis.

Individual Student Activity Details
This report lists the details of activities for a candidate in a batch in a tabular format. The intended end users are the teaching staff, who can review the details of online tests taken by students. The report allows the teaching staff to choose a student from their batches and view that student's progress.

Performance
CLiKS supports a large number of concurrent users, enables streaming of digital content, supports concurrent streams, and manages bandwidth using defined policies and priorities.
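The Comparative Analysis binning described above (blocks of 10 on the x-axis, percentage of candidates on the y-axis) can be sketched as follows; the function name is illustrative.

```python
# Sketch of the Comparative Analysis binning: scores grouped in blocks of
# ten, with the percentage of candidates falling in each block.

def score_distribution(scores, block=10):
    """Map each block's lower bound (0, 10, 20, ...) to % of candidates."""
    counts = {}
    for s in scores:
        lower = min(s // block * block, 100 - block)  # a score of 100 joins 90-100
        counts[lower] = counts.get(lower, 0) + 1
    total = len(scores)
    return {lower: 100 * n / total for lower, n in sorted(counts.items())}
```

Plotting the returned percentages against the block lower bounds reproduces the batch-trend chart the report describes.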
At the scheduled time for the test, a link is displayed for the candidate to start the test. Tests can be rescheduled for candidates who are unable to take them at the allotted time. After being configured and published, tests are delivered to candidates using the delivery engine. The delivery engine receives the candidates' responses and evaluates the results, which are stored as part of the student's assessment records.

An important feature of the assessment engine is its ability to keep track of individual question papers. A learner can abandon a test midway and resume it at a later date; the engine keeps track of the questions given to the learner in the incomplete test and the options the learner had marked.

Caching is another important feature of the Online Testing Engine. Questions and their attributes are downloaded and cached on the local machine while the person is answering the first question. This is done in the background without affecting the display, and it improves system performance because the learner need not wait for questions to be downloaded. Time for the test is calculated in real time and does not include download time, so low or high bandwidth does not affect the time allowed for the test.

Extensibility
CLiKS is a component-based system, which allows a CLiKS component to be replaced with any external system available in the market. The system is open to learning, collaboration, and knowledge-base tools developed by other vendors. This is achieved through the implementation of open APIs: any external system can interface with the existing components using the APIs provided for each component.

Multilingual
CLiKS supports the multibyte UTF-8 character set, which allows it to be used for most of the world's languages, including Asian languages such as Japanese, Chinese, and Korean. The data stored in the database, field labels, and error messages can all be multilingual.
High Availability
The scalable architecture of CLiKS allows configurations with more than one application server and web server. In such a configuration, the whole application remains available to users even when one of the servers goes down.

Open API Based Architecture
CLiKS follows an open API based architecture, which allows easy integration with external systems in an enterprise. The design enforces that programs in a module access only the data of their own module; for accessing the data of any other module, the published APIs are used. This allows any other application to get data from and pass data to CLiKS modules by calling the published APIs, enabling rapid integration with existing applications in the organization. The system is also compatible with LDAP standards, allowing it to be integrated with any other standard application to achieve Single Sign-On through an enterprise portal.

Ease of Customization
CLiKS allows easy customization based on organizational needs. All static text in the system (screen titles, field labels, error messages, etc.) is retrieved from a multilingual file. This allows
for rapid modifications to screen names, field names, and other UI elements, eliminating the need for any re-programming. CLiKS is a rule-based system in which many rules can be defined for each function at configuration time. This again offers tremendous flexibility during implementation without any re-programming; the look and feel, and other changes in organizational policy, can be effected in an extremely short time.

Technical overview of the CLiKS Assessment Engine:
- J2EE application server (Pramati)
- Oracle RDBMS
- Scalable, load-balancing-ready, multilingual architecture
- Fully Internet browser based
- Portable to all OSes, including Linux (Red Hat)
- Allows a 3-tiered implementation in which the web server, application server, and database server can be separated by firewalls
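The static-text-from-a-multilingual-file approach described above can be sketched as a simple label lookup with an English fallback. The resource structure and sample strings here are illustrative, not the actual CLiKS file format.

```python
# Sketch of retrieving all static UI text from a per-language resource,
# with English as the fallback. (Structure and keys are hypothetical,
# not the actual CLiKS multilingual file format.)

LABELS = {
    "en": {"start_test": "Start Test", "submit": "Submit Answer"},
    "ja": {"start_test": "テスト開始", "submit": "解答を送信"},
}

def label(key, lang="en"):
    """Look up a screen label, falling back to English if untranslated."""
    return LABELS.get(lang, {}).get(key) or LABELS["en"][key]
```

Because every screen title, field label, and error message goes through this lookup, renaming a field or adding a language means editing the resource, not the code.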
Hosting
NIIT offers a fully hosted and managed eLearning services environment, providing the production facility and infrastructure appropriate to your technical requirements and strategic business goals. NIIT offers the scalability, security, and transparency of a hosted eLearning infrastructure without requiring investment in hardware and software evaluation, deployment, and maintenance. Features:
- 99.5% server uptime guarantee
- 24x7 application and systems monitoring
- System, database, and network administration
- Back-ups
- Redundant, highly resilient Internet connectivity
- Security
- Fully managed staging environment
- Rapid application deployment
- Scalable storage

Application Setup
NIIT takes care of the installation of the hardware, the software, and the application, removing the need to hire and train additional IT staff to set up the software or configure the hardware.

Availability
NIIT provides 99.5% uptime for your eLearning application, with the ability to continue services uninterrupted.
Test Delivery
Online Delivery
NIIT delivers online tests to a number of its education and corporate customers around the world. These tests are scheduled and delivered at locations of the customer's choice, and are used for a variety of purposes:

Education
1. Entrance into a program
2. As part of courses, to aid the learning process
3. Ongoing evaluation in various courses
4. Final exams/graduation

Corporate
1. Hiring new employees
2. Assessing the impact of training programs
3. Pre-promotion assessment of abilities and behavioral aspects
Testing Centers
NIIT is setting up a network of dedicated testing centers across India to conduct tests at scale for its customers. Besides providing large facilities in various cities, each capable of conducting over 1,000 tests per day, these centers also support pre-assessment screening and post-test interviewing. They are also equipped to conduct new types of assessment, including automatic evaluation of language, voice and accent skills through patent-pending applications developed by NIIT.
Proctoring
NIIT provides a proctored environment for conducting high-stakes assessments in its dedicated testing centers. These centers are equipped with closed-circuit cameras and staffed by trained proctors.
Psychometric Analysis
Psychometric Analysis is an integral part of NIIT's assessment solution. It is performed at two levels:

After field testing: to evaluate item performance and to (re)calibrate items for difficulty. This forms an integral part of the item-authoring process.
On an ongoing basis: to continuously evaluate the performance of tests and of individual items within them.
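By way of illustration, the classical difficulty index used when calibrating items is simply the proportion of test-takers answering the item correctly. A minimal sketch in Python (the function name and data are hypothetical, not NIIT's implementation):

```python
def difficulty_index(responses):
    """Classical item difficulty: the proportion of correct responses.

    `responses` is a list of 0/1 scores (1 = correct) for one item
    across all test-takers. Values near 0.5 discriminate best; values
    near 0 or 1 suggest the item is too hard or too easy.
    """
    return sum(responses) / len(responses)

# Example: 7 of 10 candidates answered correctly -> difficulty 0.7
p = difficulty_index([1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
```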
Point-Biserial and Extreme Group methods are commonly used to arrive at the Discrimination Index for individual items. Besides detailed reports on the discriminative power of individual items, interpretative comments on the psychometric implications of each item's discrimination index are shared with test developers to facilitate the review process.

Distractor Efficiency Analysis
In tests that use multiple-choice items, the incorrect answer choices to a question (distractors) play an important role. A good distractor, which is unambiguously incorrect yet can confuse the less knowledgeable test-taker, adds to the discrimination value of the question. The effectiveness of distractors is analyzed by measuring the distribution of responses across all the distractors. Distractors that are never, or rarely, selected by test-takers may be discarded, modified or replaced to improve the efficiency of the item. The Point-Biserial method is used to compare the group that chooses a distractor with the group that chooses the correct option. A distractor-efficiency matrix is prepared for all items in a test, and recommendations are provided to the item developers.

Internal Consistency Analysis
Items in a test must be internally consistent in measuring the proposed construct or variable. That is, items chosen for a test designed to measure a particular ability or trait must assess only that ability or trait, and therefore correlate highly with one another and with the test as a whole. Information from this analysis is essential for understanding the internal structure of the test and for deciding whether its quality needs further enhancement. With high internal-consistency indices, the test developer can rely confidently on the test's ability to assess the proposed ability or trait. The internal consistency of a test is measured with several statistical tools; common ones are Cronbach's Alpha, the KR-20 coefficient and the Spearman-Brown formula.

Test Reliability Analysis
A good test needs to be consistent in its performance: the same test given to a candidate today should produce closely similar results when the candidate takes it again a few weeks or months later. Similarly, two or more parallel tests that assess the same ability or trait must show similar results. In the psychometric analysis of ability, personality and skills tests, two methods are used to evaluate the reliability of tests: Test-Retest Reliability Analysis and Alternate-Form (or Split-Half) Reliability Analysis.

Test Validity Analysis
It is essential that a test measure what it was originally intended to measure. At various stages of its development, a test is evaluated for validity in several ways: face validity, content validity, concurrent validity, predictive validity and construct validity.

Standardization and Norm Setting
The process of standardizing a test ensures that it is representative of its target audiences. It enables the test to be administered and scored under uniform conditions, so that the test produces comparable results across different situations and target audiences. As part of the process, norms and benchmarks for different groups and situations are set to allow unbiased comparison of individual scores on the test. According to the customer's specific requirements, the team of psychometricians creates standard rules for the administration, scoring and interpretation of the test. Standardized scores, such as percentiles, T-scores and STEN scores, together with group-specific comparison norms or benchmarks, are developed for accurate interpretation of individual test scores.
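The statistics described in this section can be made concrete with a short sketch. The Python below computes a point-biserial discrimination index, Cronbach's Alpha, and T/STEN standardized scores for dichotomously scored items; it is a textbook-formula illustration under that assumption, not NIIT's implementation, and all names are hypothetical:

```python
import math

def _mean(xs):
    return sum(xs) / len(xs)

def _pvar(xs):
    # Population variance, as used in the classical formulas below.
    m = _mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def point_biserial(item, totals):
    """Discrimination index for one dichotomous item (0/1 scores),
    correlating the item with candidates' total test scores:
    r = (M1 - M0) / s * sqrt(p * (1 - p))."""
    correct = [t for x, t in zip(item, totals) if x == 1]
    wrong = [t for x, t in zip(item, totals) if x == 0]
    p = len(correct) / len(item)
    s = math.sqrt(_pvar(totals))
    return (_mean(correct) - _mean(wrong)) / s * math.sqrt(p * (1 - p))

def cronbach_alpha(items):
    """Internal consistency; `items` is a list of per-item score lists,
    one inner list per item, aligned by candidate."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(_pvar(i) for i in items) / _pvar(totals))

def t_score(raw, mean, sd):
    """Standardized T-score: mean 50, SD 10."""
    return 50 + 10 * (raw - mean) / sd

def sten(raw, mean, sd):
    """STEN (standard ten) score: mean 5.5, SD 2, clipped to 1..10."""
    return max(1, min(10, round(5.5 + 2 * (raw - mean) / sd)))
```

A positive point-biserial value means candidates who answer the item correctly also tend to score higher overall, which is exactly the discrimination property the review process looks for.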
Other Services
NIIT provides a range of other services related to assessments to its customers around the world.
Technical Support
NIIT provides technical support to its customers and test-delivery partners to help resolve any technical queries.
Helpdesk
Since many test-takers are taking an online test for the first time, they may need some hand-holding. NIIT runs 24x7 toll-free helpdesks that provide this service for our test sponsors.
Mentoring
When tests are used as a practice or preparation mechanism, test-takers need to discuss their performance with someone who is familiar with both the test items and the subject area. NIIT provides this service to several of its customers through phone, email and chat.
Test-Process Auditing
Some of our customers use our technology to deliver certification tests through a network of their partners/franchisees. In such cases, NIIT audits the test-delivery process through scheduled visits as well as mystery shoppers at test-delivery locations, helping customers maintain a high level of integrity in their testing process.
Pre-test Screening
For certain kinds of tests, e.g., pre-hire assessments, test sponsors may have a set of criteria for screening candidates for eligibility to take the test. NIIT provides both manual and automated screening options.
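An automated screen of this kind can be thought of as evaluating each candidate's profile against the sponsor's eligibility rules. A minimal sketch (the criteria and field names are illustrative, not an actual NIIT schema):

```python
def is_eligible(candidate, criteria):
    """Return True only if the candidate meets every screening criterion.

    `candidate` is a profile dict; `criteria` maps a profile field to a
    predicate, e.g. a minimum qualification or years of experience set
    by the test sponsor. A missing field fails its predicate.
    """
    return all(check(candidate.get(field)) for field, check in criteria.items())

# Hypothetical sponsor criteria for a pre-hire test.
criteria = {
    "degree": lambda d: d in {"BE", "BTech", "MCA"},
    "experience_years": lambda y: y is not None and y >= 2,
}
candidate = {"degree": "BTech", "experience_years": 3}
```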