Assessments should be fair, flexible, valid and reliable, as follows:


Fairness: Fairness requires consideration of the individual candidate’s needs and characteristics, and of any reasonable adjustments that need to be applied to take account of them. It requires clear communication between the assessor and the candidate to ensure that the candidate is fully informed about, understands and is able to participate in the assessment process, and agrees that the process is appropriate.


Fairness also includes an opportunity for the person being assessed to challenge the result of the assessment and to be reassessed if necessary.

 

Flexibility: To be flexible, assessment should reflect the candidate’s needs; provide for recognition of competencies no matter how, where or when they were acquired; draw on a range of methods appropriate to the context, the competency and the candidate; and support continuous competency development.

 

Validity: There are five major types of validity: face, content, criterion (i.e. predictive and concurrent), construct and consequential. In general, validity is concerned with the appropriateness of the inferences, use and consequences that result from the assessment. In simple terms, it is concerned with the extent to which an assessment decision about a candidate (e.g. competent/not yet competent, a grade and/or a mark), based on the candidate’s evidence of performance, is justified. It requires determining the conditions that weaken the truthfulness of the decision, exploring alternative explanations for good or poor performance, and feeding these back into the assessment process to reduce errors when making inferences about competence.

 

Unlike reliability, validity is not simply a property of the assessment tool. Consequently, an assessment tool designed for a particular purpose and target group will not necessarily lead to valid interpretations of performance and valid assessment decisions if it is used for a different purpose and/or target group.

 

Reliability: There are five types of reliability: internal consistency; parallel forms; split-half; inter-rater; and intra-rater. In general, reliability is an estimate of how accurate or precise the task is as a measurement instrument. Reliability is concerned with how much error is included in the evidence.

 

The following is a guide to what assessment tools should include to meet the “Principles of Assessment”:

  • Elements addressed (to levels as defined in performance criteria).
  • Knowledge evidence/required knowledge addressed.
  • Performance evidence/required skills addressed.
  • Assessment conditions/critical aspects of evidence addressed.
  • Context and consistency of assessment addressed to appropriate AQF level.
  • Assessment of knowledge and skills is integrated with their practical application.
  • Assessment uses a range of assessment methods.
  • Criteria defining acceptable performance are outlined for all instruments.
  • Clear information about assessment requirements is provided (for assessors and students).
  • Allows for reasonable adjustment and provides for objective feedback.
  • Considers dimensions of competency and transferability.

 

Rules of evidence are closely related to the principles of assessment and provide guidance on the collection of evidence to ensure that it is valid, sufficient, authentic and current, as follows:

 

Validity: Assessment evidence considered is directly relevant to the unit or module’s specifications.


Sufficiency: Sufficiency relates to the quality and quantity of evidence assessed. It requires collection of enough appropriate evidence to ensure that all aspects of competency have been satisfied and that competency can be demonstrated repeatedly. Supplementary sources of evidence may be necessary. The specific evidence requirements of each unit of competency provide advice on sufficiency.


Authenticity: To accept evidence as authentic, an assessor must be assured that the evidence presented for assessment is the candidate’s own work.


Currency: Currency relates to the age of the evidence presented by candidates to demonstrate that they are still competent. Competency requires demonstration of current performance, so the evidence must be from either the present
or the very recent past.

 

The following is a guide to what assessment tools should include to meet the “rules of evidence”:


• Assessment evidence considered has direct relevance to the unit or module’s specifications.
• Sufficient assessment evidence is considered to substantiate a competency judgement.
• Assessment evidence gathered is the learner’s own work.
• Competency judgements include consideration of evidence from the present or the very recent past.


To ensure that assessment activities/tasks meet the principles of assessment and the rules of evidence, including workplace requirements, and to ensure the reliability and flexibility of assessment, all assessment activities/tasks must be validated.

 
