This document introduces a global online learning program implemented over two years for more than 400 staff members of an international organization. The program focused on updating participants' understanding of new assessment methods and used a blended learning approach: an initial 8-14 week online phase followed by in-person sessions. The online phase was delivered through a learning management system and involved self-study materials, quizzes, and required participation in asynchronous discussion forums. Evaluations found the program positively received and effective, with over 75% of participants passing each year, though facilitators could have taken a more active role in the discussions.
2. Online Discussion Groups and Tasks: This part constituted the backbone of the entire e-Learning phase and
was subdivided into a public discussion forum and several private discussion forums. The public forum
facilitated a general exchange of knowledge across all participants; participation in it was voluntary.
The private forums belonged to separate “Learning Communities”, each consisting of about 15 randomly
assigned participants. These communities also contained asynchronous discussion forums in which
participants could openly discuss the content of the modules. In both cases, two different types of
forum were available. One focused specifically on group-building processes: participants
introduced themselves and engaged in informal conversation. The other type was content-driven,
providing a platform for collaborative work on practical, real-life tasks drawn from the actual
working environments of the participants. To facilitate the discussions, a team of academic staff was
assigned to each Learning Community to guide the discussion where necessary and to act as a kind of
‘sparring partner’.
Final Assessment: The e-Learning phase was evaluated in equal parts on the basis of the participants’
contributions to the discussion forums and a final exam. In 2006 the exam took the form of an extensive
online multiple-choice test. Taking into account the limitations of such a summative assessment, the format
was changed to essay questions in 2007.
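The grading scheme described above can be sketched in a few lines. The equal 50/50 weighting of the forum-contribution score and the exam score is an assumption read off from "evaluated in equal parts"; the report does not spell out the exact formula, and the 5.5 pass threshold is taken from the performance-indicator section below.

```python
# Sketch of the assumed grading scheme: the final grade is a simple
# average of the forum-contribution score and the final-exam score
# (both on a 1-10 scale), with 5.5 required to pass. The 0.5/0.5
# weighting is an assumption, not confirmed by the report.

def final_grade(forum_score: float, exam_score: float) -> float:
    """Average the two equally weighted components (1-10 scale)."""
    return 0.5 * forum_score + 0.5 * exam_score

def passed(grade: float, threshold: float = 5.5) -> bool:
    """A participant passes the e-Learning phase at 5.5 or above."""
    return grade >= threshold

if __name__ == "__main__":
    grade = final_grade(forum_score=6.0, exam_score=5.5)
    print(grade, passed(grade))  # 5.75 True
```

Under this reading, a strong forum contribution can compensate for a weaker exam score, and vice versa, as long as the average stays at or above 5.5.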
End Evaluation
At the end of the e-Learning phase, an evaluation was conducted to assess whether the
participants’ expectations and goals had been met, as well as to measure the overall success of the phase.
Overall, the e-Learning phase was evaluated very positively. More specifically, on a scale from 1 (very
bad) to 10 (very good), the overall quality received a 6.64 in 2006 and a 7.07 in 2007. Similarly, the
supporting staff received a 6.27 in 2006 and a 7.11 in 2007. Looking more closely at the
evaluations, based on a Likert scale from 1 (strongly disagree) to 7 (strongly agree),
participants perceived the program to be a valuable learning experience (2006: 5.82 & 2007: 6.16) and
considered the structure to be good (2006: 5.23 & 2007: 5.44). Furthermore, participants appreciated
the collaborative nature of the phase (2006: 4.28 & 2007: 4.66). Potential weaknesses were mainly
identified around the usage of the discussion forums: although the facilitators were
evaluated quite positively (2006: 4.16 & 2007: 4.95), participants clearly indicated that they would have
liked them to take a more active role in the discussions (2006: 4.19 & 2007: 4.59). Another perceived
drawback of the phase was the workload, as the average number of hours spent on the e-Learning
phase was higher than expected (2006: 8.01 hrs & 2007: 8.20 hrs).
Overview of performance indicators
Overall, 75.80% of all participants in 2006 and 83.90% in 2007 successfully completed the e-Learning
phase, both very acceptable passing rates. The grades were determined on a scale from 1 (very
bad) to 10 (very good), with at least 5.5 required to pass the phase. Looking at the phase’s final exam
scores and grades, there is a noticeable difference between the average scores of 2006 and
2007. As a first approximation, it seems likely that this is related to the nature of the final exam, which
in 2007 was more practical and based solely on open questions. To roughly estimate possible
learning effects, the scores on the pre-knowledge test were compared with the final grades.
A paired-sample t-test was employed to measure possible effects, which yielded mixed results. For
2006, no significant difference between the two scores was found. In contrast, for 2007 the test showed
a strong increase in the average scores, highly significant at the 0.01 level.
This suggests that the changes made between 2006 and 2007 had a considerable, positive impact on
the overall outcomes.
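The paired-sample t-test used above pairs each participant's pre-knowledge test score with their final grade and tests whether the mean difference is zero. A minimal sketch follows; the scores are invented for illustration and are not the actual program data.

```python
# Illustrative paired-sample t-test, mirroring the comparison of
# pre-knowledge test scores with final grades. The values below are
# made-up example scores, NOT the actual program data.
from scipy import stats

pre_test = [5.0, 5.5, 6.0, 4.5, 5.2, 5.8, 6.1, 5.0]  # hypothetical pre-knowledge scores
final    = [6.5, 7.0, 7.2, 6.0, 6.8, 7.1, 7.5, 6.4]  # hypothetical final grades (same participants)

# ttest_rel pairs the i-th entries of both lists (same participant).
t_stat, p_value = stats.ttest_rel(final, pre_test)

# A positive t statistic indicates an increase from pre-test to final grade;
# in the report's terms the increase is highly significant when p < 0.01.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.01: {p_value < 0.01}")
```

Note that this design only licenses a cautious "possible learning effects" reading, as in the report: without a control group, a pre/post gain cannot be attributed to the program alone.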