The International Test Commission (ITC) holds its recurring conference every other year, this year in San Sebastián, Spain. Around 600-700 people with an interest in testing gather in San Sebastián to discuss matters of the field. Previous ITC conferences have been interesting, but this year I think the programme looks even more promising. Most participants are researchers or representatives of test publishers and/or consultancies. The latter two categories are also represented in the very interesting exhibition. As per tradition, the programme includes both symposia and posters. I will be giving three presentations myself, summarised below. They sum up what I have been working on lately.
A good practice example using ISO 10667 to implement mechanical data combination in an assessment center.
Anders Sjöberg & Eva Bergvall
ISO 10667, Assessment service delivery: procedures and methods to assess people in work and organizational settings, is an international standard covering assessment procedures and methods in workplace settings. The Assessment Center (AC) is one example of a method whose process description falls within ISO 10667. An AC is designed to measure multiple dimensions (e.g., problem solving and interpersonal skills) and predict future performance on the job. Although AC dimensions show incremental validity over and above psychometric tests such as cognitive ability and personality, the decision-making process is often based on discussion among raters, which the research literature calls clinical combination of data (Meehl, 1954). For prediction purposes, mechanical data combination (e.g., statistical combination of data) has instead proven superior to clinical combination for a wide range of criteria (Sarbin, 1943; Gough, 1962; Meehl, 1965, 1967; Sawyer, 1966; Goldberg, 1968; Sines, 1970; Dawes et al., 1989; Grove & Meehl, 1996; Grove, Zald, Lebow, Snitz, & Nelson, 2000), including job performance (Kuncel, Klieger, Connelly, & Ones, 2013), which is the criterion for an AC (Feltham, 1988). However, clinical combination of data is still the predominant approach in practice, and ACs are no exception (Dilchert & Ones, 2009). Based on the research and on the ISO standard's formulation about decision making based on multiple dimensions in an AC, this study shows how to implement an evidence-based AC (EAC), using mechanically combined AC ratings to predict managerial performance.
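To make the contrast with clinical combination concrete, here is a minimal sketch of what mechanical combination of AC ratings can look like: standardized dimension scores combined with fixed weights applied identically to every candidate. The dimension names, weights, and ratings are illustrative assumptions, not the actual EAC scoring model from the study.

```python
# Hypothetical sketch of mechanical (statistical) data combination for
# AC ratings. All data and weights below are made up for illustration.

def standardize(scores):
    """Convert raw ratings to z-scores (population formula)."""
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5
    return [(s - mean) / sd for s in scores]

def mechanical_composite(ratings, weights):
    """Combine standardized dimension ratings with fixed weights,
    applied the same way to every candidate -- no rater discussion."""
    z = {dim: standardize(vals) for dim, vals in ratings.items()}
    n_candidates = len(next(iter(ratings.values())))
    return [sum(weights[d] * z[d][i] for d in ratings)
            for i in range(n_candidates)]

# Three candidates rated on two AC dimensions (illustrative data):
ratings = {
    "problem_solving":      [3.0, 4.5, 2.5],
    "interpersonal_skills": [4.0, 3.5, 3.0],
}
weights = {"problem_solving": 0.6, "interpersonal_skills": 0.4}

composites = mechanical_composite(ratings, weights)
best = max(range(len(composites)), key=composites.__getitem__)
```

The point of the sketch is not the particular weights (unit weights work well in this literature) but that the combination rule is explicit and reproducible, which is what distinguishes mechanical from clinical combination.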
Anders Sjöberg & Gerhard Wolgers
Military pilot selection has historically been an area of great research effort due to the enormous cost of each candidate's failure in pilot training (Hunter & Burke, 2002). Research supports the use of different types of cognitive tests to predict the future performance of pilots (Martinussen & Torjussen, 1998). In practice, however, the cognitive test is not used in isolation; instead, an interview is often conducted with each candidate before the selection decision is made about who is best suited to begin pilot training. Few studies have examined the incremental validity of using an interview in addition to cognitive tests in pilot selection. The purpose of this study is to evaluate a cognitive test battery along with the scoring of a quasi-structured interview, currently used for pilot selection to the Swedish Air Force. First, to estimate the correlation between the interview and the sum score of the cognitive battery, data from the Swedish Pilot Selection Database (SPSD, N=899) were used. Second, a small-scale meta-analysis (k=2; N=699) of primary validation studies was conducted to estimate the correlation between the predictor scores (i.e., cognitive ability and interview) and a criterion of pilot performance collected during the basic training period. Finally, stepwise regression analyses were conducted to answer the question of incremental validity. Results show that the interview score adds 14% incremental validity over the cognitive ability score, while the cognitive ability score adds 10% incremental validity over the interview score. The utility of using both cognitive tests and interviews in future pilot selection is discussed.
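The incremental-validity logic in the abstract can be sketched with the standard two-predictor formula for the squared multiple correlation: the gain from adding a predictor is the difference between R² for both predictors and r² for the one already in the model. The correlations below are illustrative assumptions, not the study's actual SPSD estimates.

```python
# Sketch of incremental validity (delta R-squared) for two predictors,
# using the closed-form R^2 for a two-predictor regression.
# The correlations are invented for illustration only.

def r2_two_predictors(r_y1, r_y2, r_12):
    """Squared multiple correlation of criterion y on predictors 1 and 2."""
    return (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)

r_cog_crit = 0.40   # cognitive battery vs pilot performance (assumed)
r_int_crit = 0.35   # interview vs pilot performance (assumed)
r_cog_int  = 0.20   # cognitive battery vs interview (assumed)

r2_both = r2_two_predictors(r_cog_crit, r_int_crit, r_cog_int)
inc_interview = r2_both - r_cog_crit**2   # gain from adding the interview
inc_cognitive = r2_both - r_int_crit**2   # gain from adding cognitive tests
```

Note how a low predictor intercorrelation (here 0.20) is what leaves room for both predictors to add validity over the other, which is the pattern the study reports.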
Advances in Computer and Internet Testing: Implications for revising the ITC Guidelines
Panel Chair: Dave Bartram, CEB, UK
Panellists: Iain Coyne, University of Nottingham, UK; Ben Hawkes, IBM-Kenexa, UK; Annalisa Rolandi, UTilia, Italy; Anders Sjoberg, University of Stockholm, Sweden; Nancy Tippins, CEB, USA
In July 2005, the ITC launched their International Guidelines for Computer and Internet-based testing. Aimed at test publishers, developers and users, the guidelines have become internationally recognised in highlighting good practice issues in computer-based and Internet-delivered testing and have raised awareness among all stakeholders in the testing process of what constitutes good practice. Although the Guidelines have been well received and are making an impact in both research and practice, there is recognition that a rapidly developing area such as Internet testing requires regular updating of the Guidelines. For example, advances in the use of mobile devices, video game techniques, avatars and online monitoring or proctoring are not fully reflected within the current Guidelines. In addition, since they were published in 2005 we have seen the publication of the ISO Standard (ISO 10667) on Assessment Service Delivery: “Procedures and Methods to Assess People in Work and Organizational settings”. This provides a potential overarching framework within which to locate guidelines focused on more specific assessment issues.
This panel discussion is the starting point for the revision of the Guidelines and will consider issues the revised guidelines need to address. The panellists include the original guidelines’ authors and others, all of whom have expertise in the science and practice of computer-based and Internet delivered testing. They will provide a brief statement of points they see as important and will debate current issues in Internet testing and those likely to emerge in the future. The session will encourage interaction and comment from the audience.
Ultimately, by understanding the issues which need to be incorporated into a set of revised guidelines, the ITC can ensure the guidelines continue to be an internationally recognised resource on best practice.
The International Test Commission (ITC) is an “Association of national psychological associations, test commissions, publishers and other organizations committed to promoting effective testing and assessment policies and to the proper development, evaluation and uses of educational and psychological instruments” (ITC Directory, 2001). The architect of the ITC was Jean Cardinet, who worked on its formation from the mid-1960s until 1972. The ITC was formally established in 1978 under its first president, Ype Poortinga. Currently, the ITC has 20 Full Members (national professional psychological associations), 46 Affiliate Members (other test commissions, publishers and research organizations involved in testing), and over 150 Individual Members (individuals working or with an interest in tests and testing). Its current membership covers most of the Western and Eastern European countries and North America, as well as some countries in the Middle and Far East, South America and Africa. It is registered as a not-for-profit organization and is affiliated with the IAAP and the IUPsyS.