
Competency-based training requires learners to demonstrate the skills and knowledge necessary to perform specific tasks. Each unit of competency must be assessed in a way that captures all elements, performance criteria, performance evidence, and knowledge evidence in a real or simulated environment, according to the conditions of assessment.
Assessment evaluates three dimensions: skills, knowledge, and attitude. Capturing all three becomes significantly more challenging when training moves online.
Face-to-face instruction allows facilitators to manage classrooms using visual cues. A furrowed brow, a confident nod, or a hesitant posture all communicate learner comprehension without a word being spoken. Online environments diminish these signals considerably.
Even with video conferencing platforms where all participants are visible, facilitators juggle multiple demands simultaneously: delivering content, sharing screens, monitoring the chat, and responding to questions. This divided attention makes gauging learner attitude and comprehension more difficult. Subtle cues indicating whether candidates feel comfortable performing required tasks—or whether they need additional direction—often go unnoticed.
The situation worsens when learners turn off their cameras. Without visual confirmation, facilitators cannot be certain participants remain present and engaged.
No single assessment task captures skills, knowledge, and attitude simultaneously. Effective online assessment requires understanding the difference between formative and summative assessment, selecting appropriate tools for each, and implementing clear processes that ensure evidence collection meets compliance requirements.
Understanding the Difference Between Formative and Summative Assessment
Online assessment relies on two complementary approaches: formative and summative assessment. Understanding the difference between formative and summative assessment is fundamental to effective assessment design.
Formative assessment occurs throughout the training process. Its purpose is diagnostic—identifying learner strengths, weaknesses, and areas requiring improvement. Formative assessment enhances learning by providing ongoing feedback that guides both learner and facilitator.
Examples of formative assessment include knowledge questions after short learning segments, quizzes, verbal questioning, critical incident evaluations, and discussion forums. These activities check understanding incrementally rather than waiting until training concludes.
Summative assessment occurs at the end of learning. Its purpose is evaluative—determining whether candidates have met the requirements of the unit of competency. Evidence gathered through summative assessment enables assessors to make final judgements about learner competence.
The difference between formative and summative assessment lies in timing and purpose, and their functions are complementary. Formative assessment identifies problems early enough for correction. Summative assessment confirms that learning objectives have been achieved. Effective online assessment programmes incorporate both, using formative methods to keep learners on track and summative methods to verify competency.
This distinction matters particularly for online delivery, where the physical separation between facilitator and learner makes ongoing progress monitoring more difficult. Without deliberate formative assessment, problems may not surface until summative assessment reveals gaps too late for easy remediation.
Types of Evidence in Online Assessment
Assessment evidence falls into three categories, each presenting distinct challenges for online collection.
Direct evidence can be observed or witnessed by the assessor. This includes observation of workplace performance, oral questioning, demonstration, and challenge tests. Direct evidence provides the strongest confirmation of competency because assessors witness performance firsthand.
Collecting direct evidence online requires creative solutions. Video conferencing enables real-time observation and questioning. Learner-created videos can document demonstrations. However, the assessor’s ability to control conditions and verify authenticity diminishes compared to face-to-face observation.
Indirect evidence consists of candidate work that assessors can review or examine. This includes finished products, written assignments or tests, and portfolios of previous work. Indirect evidence demonstrates outcomes without requiring assessor presence during task completion.
Online environments handle indirect evidence relatively well. Learning management systems facilitate submission, storage, and review of written work, images, and documents. The primary challenge involves verifying that submitted work represents the learner’s own capability.
Supplementary evidence provides additional support for competency claims. This includes supervisor reports, colleague or client feedback, employer testimonials, work diaries, and training records. Supplementary evidence corroborates direct and indirect evidence rather than standing alone.
Gathering supplementary evidence online requires establishing communication channels with third parties and implementing verification processes to confirm testimony authenticity.

Tools and Methods for Formative Assessment Online
Formative assessment keeps learners engaged and identifies comprehension gaps before they become obstacles to competency. Recognising the difference between formative and summative assessment guides appropriate tool selection. Several tools support effective formative assessment in online environments.
Knowledge questions after learning bursts reinforce content in manageable segments. H5P enables embedding interactive questions directly within learning management systems. Google Forms provides a simple alternative for creating quick comprehension checks. Both approaches give immediate feedback, allowing learners to identify and address misunderstandings promptly.
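As a concrete illustration, H5P-enabled platforms can generate an embed snippet for each interactive question that can be pasted into an LMS page. A minimal sketch, with a placeholder content URL and dimensions that will vary by platform:

    <iframe src="https://h5p.org/h5p/embed/CONTENT-ID" width="800" height="450"
            frameborder="0" allowfullscreen="allowfullscreen"
            title="Knowledge check"></iframe>

Learners answer inside the embedded frame and see feedback immediately, without leaving the learning materials.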
Quizzes create engaging, interactive experiences while providing facilitators with immediate insight into learner progress. Kahoot gamifies the quiz experience, generating energy and competition that maintain engagement. Poll Everywhere offers similar functionality with additional polling features. Google Forms and H5P also support quiz creation with varying degrees of interactivity.
Verbal questioning requires synchronous communication platforms. Zoom, Microsoft Teams, Skype, and Google Meet all enable real-time conversation where facilitators can pose questions and evaluate responses. Verbal questioning becomes particularly valuable when cameras are off—calling on individual learners to respond confirms presence and engagement while assessing understanding.
Critical incident evaluations ask learners to analyse specific situations and explain their responses. Google Docs supports collaborative evaluation where multiple learners contribute to analysis. Tools like iAuditor provide structured frameworks for systematic incident review.
Discussion forums demonstrate engagement from both learners and facilitators while creating documented evidence of participation. Forums can be embedded within learning management systems, established as Facebook Groups, or created using tools like Padlet or Google Spaces.
Effective discussion forums require active facilitation. Without regular facilitator participation, learners rarely return to check replies, and discussions stagnate. Clear netiquette guidelines help learners understand that their responses are visible to all participants.
Collaborative activities build engagement through shared work. Learning management system wikis allow multiple contributors to build collective knowledge resources. Padlet provides visual collaboration spaces where participants can add text, links, videos, files, and images to shared boards.
Tools and Methods for Summative Assessment Online
Summative assessment determines competency achievement and requires evidence that meets compliance standards. Multiple methods support summative assessment in online environments.
Knowledge assessments verify understanding of required content. Question banks can be created in H5P or within learning management system quiz functions. Kahoot supports assessment in challenge mode, giving learners extended time to complete independently. Google Forms offers straightforward quiz creation. Word documents remain viable for assessments requiring extended written responses, with learners submitting directly through learning management systems.
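As one illustration of a question bank, Moodle's quiz function can import questions written in its plain-text GIFT format, so questions can be drafted offline and loaded in bulk; other platforms offer comparable import formats. A minimal sketch, with hypothetical question content:

    // Multiple choice: "=" marks the correct answer, "~" marks distractors
    ::Safe lifting::Which technique reduces the risk of injury when lifting? {
    =Bend at the knees and keep the load close to the body
    ~Bend at the waist with straight legs
    ~Twist the torso while lifting
    }

    // True/false
    ::PPE check::Personal protective equipment must be inspected before each use. {T}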
Open-response questions built into learning platforms require manual marking against assessor guides. Self-correcting formats like multiple choice provide efficiency but offer limited insight into deeper understanding.
Portfolios, journals, and blogs collect evidence over time, demonstrating development and reflection. Learning management systems accommodate portfolio uploads. Platforms like Moodle include blog functions accessible through learner profiles. Edublogs provides privacy controls appropriate for educational contexts, allowing teachers to manage access and create classroom spaces for shared reflection.
Portfolio and blog tools require explicit instruction. Allocating class time for setup ensures learners can ask questions and receive support before working independently.
Product-based assessment evaluates tangible outputs. Photographs documenting both process and final product can be uploaded or submitted, provided authenticity can be verified. Learners must sign declarations confirming work as their own.
Supporting materials strengthen product-based assessment. Planning documentation, process explanations, and reflective commentary demonstrate understanding beyond the finished item. Video blogs and podcasts offer alternative formats for learners to explain their work.
Third-party evidence becomes valuable when assessing task planning, task management, and job environment skills. Supervisors, colleagues, or clients can provide testimony that learners complete tasks across different contexts and navigate workplace requirements effectively.
Third-party evidence requires verification. Assessors should confirm that learners were safe and properly supervised while completing the tasks described, and following up with third parties by phone helps confirm that testimony is authentic.
Observation can be conducted through learner-created videos. Clear instructions must specify required duration, necessary details, and supervision requirements. Task instructions should identify key criteria the video must capture.
Video quality concerns should be addressed explicitly. Learners need to understand that evidence gathering matters more than production values. Point-of-view glasses offer an innovative approach for capturing task performance from the learner’s perspective.
Scenarios and case studies assess contextual knowledge through analysis of realistic situations. Written case studies or video scenarios present situations for learners to dissect and respond to. Complexity should reflect actual workplace demands rather than oversimplified situations.
Providing Clear Instructions for Online Assessment
Clear instructions determine whether learners can successfully complete and submit assessment tasks. Instructions must address three areas: completing the task, collecting evidence, and submitting evidence. Communicating the difference between formative and summative assessment helps learners understand what is expected at each stage.
Digital literacy cannot be assumed. Learners possess varying levels of comfort with online tools, and unfamiliarity creates barriers to demonstrating competency. Assessment design must account for this variation.
The most effective approach: show, explain, show again.
Creating instructional videos provides reusable resources for assessment guidance. Recording these videos in real time during live sessions allows learners to ask questions while facilitators demonstrate processes. Screen sharing shows exactly how to access interactive tasks, navigate submission processes, and locate required resources.
Live instruction sessions create opportunities for immediate clarification. Recording these sessions produces resources learners can revisit when questions arise later. Over time, assessment submission becomes familiar, but initial guidance prevents frustration and failed attempts.
Submission instructions require specificity. If evidence must be submitted online, instructions should identify exactly where and how submission occurs. Vague directions create confusion and increase support requests.
According to guidance from the Australian Skills Quality Authority (ASQA), assessment tools should provide clear information enabling learners to understand what is expected and how to demonstrate their competency.
The Rules of Evidence: VACS
Online assessment must satisfy the same evidentiary requirements as face-to-face assessment. The four rules of evidence—Valid, Authentic, Current, and Sufficient (VACS)—apply regardless of delivery mode. Both formative and summative assessment must adhere to these standards, though their application differs.
Valid evidence demonstrates that learners have addressed all elements and performance criteria. Assessment tasks must reflect the skills, knowledge, and conditions of assessment described in competency standards. Evidence must demonstrate application in real or simulated workplace situations.
Validity requires careful assessment design. Each task must map to specific competency requirements, and evidence collection must capture all required components.
Authentic evidence confirms that submitted work represents the learner’s own capability. Authenticity presents the most frequently discussed challenge in online assessment. Without physical presence, facilitators cannot directly observe task completion.
Multiple strategies support authenticity verification. Learning management system submissions typically include declarations that learners have submitted their own work. Product-based assessments should require photographic evidence showing the learner completing all aspects of the work—not just the finished product. Written reflections explaining how tasks were performed provide additional verification. Video presentations demonstrating process knowledge confirm understanding beyond what copying could produce.
Current evidence demonstrates that learners possess skills and knowledge reflecting present capability and current industry practice. Evidence must not be outdated or represent previous competency that may have degraded.
Currency requires attention to both learner development and industry standards. Assessment tasks should reflect current workplace expectations, and evidence should demonstrate recent performance.
Sufficient evidence proves competence over time and across required repetitions. If competency standards specify that candidates must perform tasks a certain number of times, evidence must document each instance. Learner statements claiming completion without documentation do not satisfy sufficiency requirements.
Sufficiency requires explicit evidence collection. Assessment instructions must specify documentation requirements for repeated tasks, and submission processes must accommodate multiple evidence items.
Providing Effective Feedback Online
Timely, constructive feedback maintains learner engagement and supports improvement. The difference between formative and summative assessment shapes how feedback is delivered—formative feedback guides ongoing learning, while summative feedback explains final judgements. Delayed feedback leaves learners uncertain about their progress and limits opportunities for correction before final assessment.
Self-correcting quizzes provide immediate feedback for formative assessment. True/false and multiple-choice formats offer instant results. However, these formats have limitations—answers may be guessed or obtained through multiple attempts. Self-correcting formats work well for formative purposes but provide limited summative value.
Assessment information should specify when assessment occurs, how feedback will be provided, and when learners can expect results. Setting clear expectations prevents anxiety and establishes accountability.
Written feedback works well for document-based submissions. Learning management systems typically include annotation functions for commenting directly on submitted work. Alternatively, facilitators can download submissions, add comments, and return marked documents. Word documents accept tracked changes and comments at the specific locations requiring attention.
Screencast and video feedback suits learners who benefit from verbal explanation. Tools like Loom or Screencast-O-Matic enable facilitators to record their screen while providing audio commentary. This approach works particularly well for reviewing video submissions, allowing facilitators to pause, point, and explain as they view learner work.
Synchronous feedback provides real-time discussion through video conferencing. Individual meetings address specific learner needs. Group sessions replicate face-to-face feedback discussions, allowing broader conversation about common issues.
One week represents an acceptable maximum timeframe for feedback delivery. Longer delays diminish feedback value and leave learners waiting for results they need to progress.
Each assessment tool should require assessors to provide feedback regardless of outcome. Feedback explains why results were awarded, demonstrates fairness in the assessment process, and guides learners toward successful resubmission when required.
Keeping Learners Safe and Engaged Online
Online learning environments require explicit guidelines for behaviour and interaction. Clear rules protect learner wellbeing and create conditions for productive engagement.
Netiquette—the conventions of online interaction—should be explained early and reinforced consistently. Learners need to understand that their contributions are visible to all participants and that professional communication standards apply.
Involving learners in establishing rules builds ownership and commitment. Word cloud generators create collaborative activities where learners contribute suggested guidelines. The resulting visual can be posted to news forums or class blogs as a reference document.
Tool demonstrations reduce participation barriers. Learners who feel uncertain about technology may avoid contributing rather than risk embarrassment. Showing how tools work before requiring their use builds confidence.
Engagement monitoring requires deliberate strategies when visual cues are unavailable. Mandatory camera policies provide one approach, though they may not suit all contexts. Alternative strategies include calling on individual learners to respond verbally, requiring chat participation, or building interactive elements that require active response.
The goal is maintaining connection despite physical separation. Regular check-ins, varied interaction methods, and responsive facilitation help learners feel present and accountable.
Considerations Before Moving Assessment Online
Not all assessment translates effectively to online delivery. Careful evaluation should precede any conversion of face-to-face assessment tasks. Understanding the difference between formative and summative assessment helps determine which activities can move online and which require adaptation.
Some units cannot be assessed entirely online. The unit Provide CPR, for example, requires access to adult and infant resuscitation manikins and training defibrillators, as specified in its assessment conditions. Expecting learners to access this equipment independently—even if they could create video evidence—presents significant practical barriers.
Current practice should be evaluated honestly. Converting existing face-to-face assessments to online formats may not produce optimal results. Some tasks benefit from redesign rather than direct translation.
Learning management system capabilities affect assessment options. Questions to consider include: How frequently will the LMS be accessed? How much time is allocated for online components? Does the LMS include embedded formative assessment within learning materials?
Organisations not using learning management systems need alternative evidence gathering and storage solutions. Whatever systems are employed, they must support the evidence types required and maintain appropriate security and accessibility.
Evidence collection methods, assessment approaches, and instruction clarity all require planning before implementation. Reactive problem-solving during delivery creates inconsistency and frustration for both facilitators and learners.
Conclusion: Making Online Assessment Work
Online assessment presents genuine challenges. The loss of face-to-face interaction removes familiar cues and comfortable processes. Evidence collection requires new approaches. Authenticity verification demands deliberate strategies.
However, online assessment also presents opportunities. Digital tools enable immediate feedback, flexible timing, and diverse evidence formats. Learners can demonstrate competency through methods that may suit them better than traditional approaches.
Success requires intentional choices. Selecting appropriate tools for formative and summative purposes, providing clear instructions that account for varying digital literacy, and delivering timely feedback that guides improvement all contribute to effective online assessment.
Transparency builds trust. When learners understand expectations—what they must demonstrate, how they will be assessed, when they will receive feedback—they can focus on learning rather than navigating uncertainty.
The distinction between formative and summative assessment provides a foundation for assessment design. Formative methods keep learners engaged and identify problems early. Summative methods verify competency achievement. Both are essential, and online environments support both when implemented thoughtfully.
Online assessment need not be a compromise. With appropriate planning and execution, it becomes an opportunity to engage learners in meaningful ways that complement and sometimes enhance traditional approaches.