
Explain the technical criteria that assessment techniques must meet before they are considered scientific.

 Assessment techniques must meet four technical criteria before they can be considered scientifically acceptable measures of individual differences in people’s enduring qualities. These criteria are standardisation, norms, reliability, and validity.

1. Standardisation

A key concept in the measurement of personality dimensions is standardisation. This concept refers to the uniform procedures that are followed in the administration and scoring of an assessment tool. For instance, with a self-report scale, the examiner must make every effort to ensure that subjects read and understand the printed instructions, respond to the same questions, and stay within any stated time limits. Standardisation also involves information (in the manual) about the conditions under which the assessment should or should not be given, who should or should not take the test (the sample group), specific procedures for scoring the test, and the interpretative significance of the scores.

2. Norms

The standardisation of a personality assessment test includes information concerning whether a particular “raw score” ranks low, high, or average relative to other “raw scores” on the test. Such information, called test norms, provides standards with which the scores of individuals who take the test later can be compared. Usually, the raw scores on a test are converted into percentile scores, which indicate the percentage of people who score at or below a particular score. Thus, test norms permit the comparison of individual scores to a representative group so as to quantify an individual’s standing relative to others.
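
As a rough sketch of how a norm table is used, the snippet below converts a raw score into a percentile rank against an invented norm group; all scores are illustrative only and not taken from any actual test.

```python
# Illustrative sketch: converting a raw score to a percentile rank.
# The norm-group scores below are hypothetical.

norm_group_scores = [12, 15, 15, 18, 20, 22, 22, 25, 27, 30]

def percentile_rank(raw_score, norm_scores):
    """Percentage of the norm group scoring at or below raw_score."""
    at_or_below = sum(1 for s in norm_scores if s <= raw_score)
    return 100.0 * at_or_below / len(norm_scores)

# A raw score of 22 equals or exceeds 7 of the 10 norm-group scores.
print(percentile_rank(22, norm_group_scores))  # 70.0 -> 70th percentile
```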

3. Reliability

Any test, whether of personality, intelligence, or aptitude, should have demonstrated reliability. Reliability means that repeated administrations of the same test, or of another form of the test, should yield reasonably similar results or scores. Thus, reliability refers to the consistency or stability of an assessment technique when it is given to the same group of people on two different occasions. This kind of reliability is termed test-retest reliability (Anastasi, 1968).

To determine test-retest reliability, the scores from the first administration are correlated with those from the second by a simple correlation procedure. The magnitude of the resulting correlation coefficient gives an estimate of the test’s consistency over time. Although there are no fixed guidelines about acceptable levels of reliability, the reliability coefficients for most psychological tests are above +.70. The closer this statistic approaches +1.00, the more reliable the test is. In other words, when retested, people’s scores should match their first scores quite closely.
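
A minimal sketch of this procedure, using invented scores for eight examinees tested on two occasions, might look like the following; the Pearson correlation is computed with Python’s standard statistics module (Python 3.10+).

```python
# Sketch: test-retest reliability as the Pearson correlation between
# two administrations of the same test (scores below are hypothetical).
from statistics import correlation  # requires Python 3.10+

first_administration  = [34, 28, 45, 39, 30, 42, 25, 37]
second_administration = [36, 27, 44, 41, 29, 43, 27, 36]

r = correlation(first_administration, second_administration)
print(f"test-retest reliability: r = {r:.2f}")  # values above +.70 are usually taken as acceptable
```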

A second kind of reliability is determined by splitting the test into two sets of items (e.g., odd-numbered items versus even-numbered items), summing people’s scores for each set, and correlating the two sets of summed scores with each other. The correlation between these sets is termed split-half reliability and reflects the test’s internal consistency. If the composite set of test items consistently measures the same underlying personality dimension, then people who score high on the odd items should also score high on the even items, and people who score low on the odd items should also score low on the even items (again reflected in a high positive correlation).
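
The same idea can be sketched for split-half reliability: odd- and even-item totals are summed per person and then correlated. The response matrix and item split below are purely illustrative.

```python
# Sketch: split-half reliability from odd- versus even-numbered items.
# item_responses is a hypothetical matrix: one row per person,
# one column per item (1 = keyed response, 0 = not keyed).
from statistics import correlation  # requires Python 3.10+

item_responses = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 0, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 0, 1, 1, 1],
]

odd_totals  = [sum(row[0::2]) for row in item_responses]  # items 1, 3, 5, 7
even_totals = [sum(row[1::2]) for row in item_responses]  # items 2, 4, 6, 8

split_half_r = correlation(odd_totals, even_totals)
print(f"split-half reliability: r = {split_half_r:.2f}")
```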

A third type of reliability is based on the correlation between two versions of the same test (made up of similar items) administered to the same group of individuals. If the scores on these different forms are about the same, the test shows parallel-forms reliability. In such a case, the correlation between the two parallel forms indicates that the items on both tests measure the same thing.

Lastly, reliability also applies to the degree of agreement between two or more judges in scoring the same assessment test. This is called inter-scorer reliability, and it must be demonstrated whenever scoring involves subjective interpretations, such as those made by personologists examining projective data. Inter-scorer reliability tends to be especially low with qualitative data in general, such as interview conversations, dream reports, and other open-ended response formats that are not objectively quantified. However, agreement increases when judges use manuals with explicit scoring rules and instructions for analysing such data (Yin, 1984).
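
One simple, illustrative way to index inter-scorer agreement is to correlate two judges’ numeric ratings of the same protocols; the ratings below are invented. (For categorical codes, an agreement statistic such as Cohen’s kappa is often preferred, though that index is not discussed in the text above.)

```python
# Sketch: inter-scorer reliability as the correlation between two judges'
# ratings of the same set of protocols (ratings below are hypothetical).
from statistics import correlation  # requires Python 3.10+

judge_a = [4, 2, 5, 3, 1, 4, 2, 5]
judge_b = [4, 3, 5, 2, 1, 4, 2, 4]

print(f"inter-scorer agreement: r = {correlation(judge_a, judge_b):.2f}")
```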

4. Validity

Validity, another significant concept in personality assessment, refers to whether a test measures what it is intended to measure or predicts what it is supposed to predict. There are three main types of validity: (1) content validity, (2) criterion-related validity, and (3) construct validity.

To be content valid, an assessment tool must include items whose contents are representative of the entire domain or dimension it is supposed to measure. For instance, a personality test measuring shyness should reflect the personal (“Is your shyness a major source of personal discomfort?”), social (“Do you get embarrassed when speaking in front of a large group?”), and cognitive (“Do you believe that others are always judging you?”) aspects of shyness. A content-valid test would assess each of these components defining the construct of shyness. Content validity is almost entirely determined by agreement among experts that each item does in fact represent aspects of the variable or attribute being measured.
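
As a loose illustration of such expert agreement, the sketch below tallies how many members of a hypothetical expert panel judged each item to be representative of the construct; the ratings are invented and no formal content-validity index is implied.

```python
# Sketch: summarising expert agreement on item relevance for content validity.
# Each row is one expert's judgement of whether each item represents the
# construct (1 = representative, 0 = not); all judgements are hypothetical.

expert_ratings = [
    [1, 1, 1, 0, 1],  # expert 1
    [1, 1, 0, 0, 1],  # expert 2
    [1, 1, 1, 1, 1],  # expert 3
    [1, 0, 1, 0, 1],  # expert 4
]

n_experts = len(expert_ratings)
for item in range(len(expert_ratings[0])):
    endorsed = sum(ratings[item] for ratings in expert_ratings)
    print(f"item {item + 1}: {endorsed}/{n_experts} experts judge it representative")
```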

Criterion-related validity concerns the common use of personality assessment to make predictions about specific aspects of an individual’s behaviour. For example, the behavioural criterion being predicted may be academic performance in management school or occupational success. The extent to which a test accurately forecasts some agreed-upon criterion is determined by correlating subjects’ scores on the test with their scores on an independently measured criterion. For instance, if the criterion is success in management school as measured by management-school grade point average (GPA), the Common Aptitude Test would be validated if it accurately predicted that criterion (management-school GPA).
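
A minimal sketch of this validation step, using invented aptitude scores and GPAs, is to correlate the two sets of numbers and read off the resulting validity coefficient.

```python
# Sketch: criterion-related (predictive) validity, estimated by correlating
# aptitude-test scores with the later criterion (management-school GPA).
# All scores and GPAs below are hypothetical.
from statistics import correlation  # requires Python 3.10+

aptitude_scores = [610, 540, 700, 580, 650, 520, 690, 600]
school_gpa      = [3.2, 2.8, 3.8, 3.0, 3.5, 2.6, 3.6, 3.1]

validity_coefficient = correlation(aptitude_scores, school_gpa)
print(f"validity coefficient: r = {validity_coefficient:.2f}")
```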
