Educational innovation, development and discussion at ETH

How meaningful are clicker data?

Contributors: Meike Akveld (D-MATH), Menny Aka (D-MATH), Alexander Caspar (D-MATH), Marinka Valkering-Sijsling (LET), Gerd Kortemeyer (LET)

Among other things, ETH Zurich’s EduApp allows instructors to pose clicker questions during lectures. Instructors can interrupt lectures to put questions to the students and to give and receive feedback on learning progress. Lecturers can also trigger phases of peer-instruction, where students discuss their initial answers to a question with one another and then answer the question again – in effect, the students are teaching each other during those phases, hence “peer instruction”. By asking students to answer a question twice, lecturers gather data on student understanding. But how meaningful are these feedback data, particularly when answering is voluntary and ungraded?

A group of mathematics instructors at ETH’s D-MATH worked with LET to analyze EduApp data using Item Response Theory (IRT), Classical Test Theory (CTT) and clustering methods. Over the course of the semester, 44 clicker problems were posed – 12 of them twice, as the instructor decided to insert a phase of peer-instruction. The following figure shows an example of the kind of problem being analyzed:

Fig. 1 Example of a clicker problem

The problem shown was used in conjunction with peer-instruction; the gray bars indicate the initial student responses, the black bars those after the discussion. A simple, unsurprising observation is that after peer-instruction, more students arrived at the correct answer. What can we learn from these responses? CTT and IRT can provide psychometrics that help understand this instructional scenario.

When it comes to being “meaningful,” the “discrimination” parameter of a problem is of particular interest: how well does correctly or incorrectly answering a problem distinguish (“discriminate”) between students who have or have not understood the underlying concepts?

CTT simply uses the total score as a measure of “ability”, but also has a measure of discrimination (“biserial coefficient”). IRT estimates the probability of a student arriving at the correct answer for a particular problem (“item”) based on a hidden (“latent”) trait of the student called “ability” – typically, higher-ability students would have a higher chance of getting a problem correct. How exactly this probability increases depends on problem characteristics (“item parameters”).

In IRT, the ability-trait is determined in a multistep, multidimensional optimization process, where the difficulty and discrimination parameters of particular problems (“items”) feed back on how much correctly answering that problem says about the “ability” of the student; “high-ability” students are likely to get correct answers even on high-difficulty, high-discrimination problems.
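
To make this concrete: a common formulation is the two-parameter logistic (2PL) model – shown here only as an illustration, not necessarily the exact model fitted in this study – in which the probability of a correct answer is

P(correct | θ) = 1 / (1 + exp(−a(θ − b)))

where θ is the student’s ability, b the difficulty of the problem and a its discrimination. The larger a, the more sharply answering the problem correctly separates students whose ability lies above the difficulty b from those whose ability lies below it.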

The results of the study were extremely encouraging: using both CTT and IRT, almost all of the 44 problems under investigation exhibited strong positive discrimination in the initial vote. This means that the better a student understood the underlying concepts, the more likely they were to give the right answer – and vice versa. A low discrimination, on the other hand, means a problem provides less meaningful feedback. For the handful of problems with lower (yet still meaningful!) discrimination, this could be explained by other problem characteristics, for example that, at the time they were posed, they were still too hard or already too easy – but even that feedback is meaningful to the instructor for future semesters.

The truly surprising result of the study was that in all cases of peer-instruction, the problem had even stronger discrimination afterwards! Yes, unsurprisingly more students answer correctly after discussion with their neighbors (the problem becomes “easier”), but: peer-instruction does not simply allow weaker students to enter the correct answer, it apparently helps them to perform at their true potential.

For the purposes of the study, the clicker data had to be exported manually, but the next version of EduApp, slated to be released in December 2020, will allow export of data for learning analytics purposes directly from the interface – the following figure shows a sneak preview of that new functionality.

Fig. 2 The new “Learning Analytics” function in EduApp

The exported data format is compatible with input for the statistics software R, and a variety of guides are available on how to analyze these data (https://aapt.scitation.org/doi/abs/10.1119/1.5135788, accessible through the ETH Library, provides a “quick-and-dirty” guide).
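
For readers who want to experiment with such an analysis themselves, the following minimal sketch – in Python rather than R, and only an illustration, not the script used in the study – computes a CTT-style discrimination index (the corrected point-biserial correlation between each item and the total score of the remaining items). The file name and the one-row-per-student, one-0/1-column-per-question layout are assumptions about the export, not a documented EduApp format.

import pandas as pd
from scipy.stats import pointbiserialr

# Hypothetical export: one row per student, one 0/1 column per clicker question.
responses = pd.read_csv("clicker_export.csv")

# CTT uses the total score as the measure of "ability".
total_score = responses.sum(axis=1)

for item in responses.columns:
    # Correlate each item with the total score of the remaining items
    # ("corrected" discrimination), so the item is not correlated with itself.
    rest_score = total_score - responses[item]
    r, _ = pointbiserialr(responses[item], rest_score)
    print(f"{item}: discrimination = {r:.2f}")

Values close to +1 indicate a strongly discriminating problem; values near zero indicate that answering the problem correctly says little about overall performance.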

The full study, including results from Classical Test Theory and clustering methods, as well as an outlook on new EduApp functionality, is available open-access in Issue 13 of e-learning and education (eleed) at https://eleed.campussource.de/archive/13/5122.

Virtual labs with Labster – Practical experience in food chemistry

Interview with Dr. Melanie Erzinger, responsible for the food chemistry practical course at D-HEST.

The food chemistry practical course at D-HEST uses the virtual laboratory simulations of Labster. Virtual labs allow students to complete laboratory experiments online and explore concepts and theories without stepping into a physical science lab. Especially now in times of Corona, this is a valuable alternative. There are many kinds of virtual lab simulations, from simple video animations to immersive 3D interactive learning environments. Other courses and departments might benefit from these options as well, mainly biology and chemistry.

Click to view demonstration video

What effects did Labster have on the practical course in food chemistry?

The use of virtual laboratories with Labster has had positive effects on the practical course in food chemistry. They are now an integral part of the practical course and are appreciated by teachers and students alike. The fact that they are only available in English was accepted because of the clear added value. They have been in use since 2017 and, in 2020, were already in their fourth run. All 60 students received their own fee-based semester licence.

The Labster activities enabled students to prepare for real laboratory experiments in a better and more exciting way than if they had only read an introductory document. Now they had to actively deal with a problem. This proved to be an excellent way of introducing them to methods.

What is the relationship between virtual and real world labs?

The virtual labs were never intended to replace the real labs; they are meant as an addition – to understand, practice and repeat more deeply. A three-stage setting was established, which was successful because it formed a common thread: (1) preparation with Labster, (2) dealing with the topics from the Labster labs in the real-life practical training, and (3) follow-up and repetition with Labster. During the corona crisis, the practical middle part was replaced by virtual laboratories, so it was still possible to give the students good insights and knowledge.

What other advantages does Labster bring?

Another advantage is that Labster allows additional experiments to be carried out that would not be possible in real life, for three reasons: the experiments would take too long, the devices are very expensive and in use by researchers, or the experiments would be too dangerous.

The technology has improved greatly over the years and today there are hardly any technical problems, as long as the students’ computers meet common minimum technical standards and the technical support provided is used. With a detailed briefing at the beginning of the semester, the practical use of virtual labs works well, and the Labster helpdesk also works reliably.

How could the use of Labster be expanded at ETH?

Other courses and departments would also benefit from virtual labs in certain areas (biology and chemistry, e.g. toxicology or molecular biology), and Labster has many topics and laboratories that would be a good addition. However, in courses with a low proportion of practical training, selective use of virtual labs would not really justify the expenditure of time and money, since – unlike in pure practical training – they would not be essential for the learning objectives.

More information about the use of Labster at ETH can also be found here under “virtual labs”.

Labster-Website – to see the library of available simulations

ETH Moodle App

We are proud to announce that the brand-new ETH Moodle App for Android and iOS is available today! This app has been developed by the core developers of Moodle and is a specially branded version of the official Moodle App.

Please click on the link below to download the app:

iOS: https://apps.apple.com/app/id1521806822

Android: https://play.google.com/store/apps/details?id=ch.ethz.ethmoodle

Easy access, work offline and much more

Students and lecturers can access all their courses directly from their smartphone or tablet. This access has several advantages:

  • You only have to log in once for days and weeks at a time.
  • You can download courses and access them offline.
  • If you post an answer in a forum or solve a quiz while offline, the course will be synchronised when you are online again.
  • You can include audio, video and pictures from your phone easily into your forum answers, messages and even assignment responses.
  • The app uses GDPR-compliant push notifications for important dates (yes, the Moodle calendar is placed directly on the start screen), messages and forum posts.

Similar but not equal

Although your course looks similar in the ETH Moodle App, there are some important differences (especially important for lecturers to know):

  • Courses cannot be edited via the app. If you want to edit your course, please use the web browser.
  • The app doesn’t display any blocks (which you can add to your course individually).
  • Some activities are not Moodle App ready yet (or not meant to work with the app). In those cases students (and lecturers) are forwarded to a web browser. At the moment the following activities are not app ready:
    • Interactive Video Suite
    • Student Quiz
    • OU Blog
    • Fair Allocation
    • Scheduler
    • Collaborative Folder

If you have any feedback on the ETH Moodle App, please contact us at moodle@let.ethz.ch.

Drawing by hand made easy in Moodle

Do you already use Moodle and have you ever wanted better options for capturing simple digital drawings as part of a quiz? An improved freehand drawing question type is now available for ETH lecturers.

This question type is called “Freehand Drawing (ETH)” and works on any computing device. Generally, touch devices with hardware pens (styluses) work best, although drawing with a mouse or a touchpad works as well.

Creating a quiz with a Freehand Drawing question

Start by creating a quiz in Moodle. Quizzes can include any number of any question types.

Insert your new question into the quiz, using “+ a new question.”

Choose the question type “Freehand drawing (ETH)”.

Insert a title and write a task or a question for the students to solve. This is the standard text editor used all over Moodle. In this example, we have asked a question about charging a capacitor where we want the students to sketch the electric current as a function of time.

Note that a lot of students do not own large-screen touch-devices, so we cannot expect detailed or precise drawings. Use of the question type thus should be limited to “sketching” or “drawing,” not precise activities like “graphing.”

By default, students get a white background to draw on. You can also create and upload a background image using any drawing tool. In the following example, a coordinate system was created using PowerPoint. Such background images literally frame your students’ answers and, by standardising aspects of the drawing, make grading easier for you.

Upload your background image to your question. You can simply drag and drop your file into the provided field.

If it is a big image, make sure to reduce the size so it fits on the student’s screen without scrolling. While the tool allows for horizontal or vertical scrolling, this can be awkward. Also, keep in mind that Moodle itself will take up some space for its navigational elements – a width of 500 pixels is reasonable.

Save and preview your question, using the “preview” button:


Answering a Freehand Drawing question

Students can then answer the question as shown in the example below:

This answer drawing was made with a mouse. The precision of this drawing is at the limit of what should be expected, but one can clearly see what’s going on: exponential drop-off, starting at U/R. For the axis label, students could also have used the typing tool. Other tools include a simple line tool and an eraser.

Students would then submit their drawings as their answer. Answers need to be graded manually, just like essay answers.

Practice makes perfect

Although more and more students possess convertible or tablet devices with touch screens and pens, drawings are still more often made using paper and pencil. If you intend to use Freehand Drawing on exams, it is important to give your students time and opportunity to practice using it. If you already use Freehand Drawing questions in your lecture during the semester, the students have had time to practice at their individual pace and can get acquainted with the question type.

This question type will be piloted for use in future examinations. The devices used for mobile exams at ETH have touch screens and hardware pens, which work well with this question type. If you are interested in using the Freehand Drawing question type in an online exam, please contact online-pruefungen@let.ethz.ch.

Learning Autonomy with Self-Driving Cars: Duckietown goes MOOC.

Figure 1 Tani (left) and Censi (right) in the Duckietown Lab
(Picture: ETH, Alessandro della Bella)

Jacopo Tani and Andrea Censi are senior assistants in the research group headed by Emilio Frazzoli (D-MAVT), an internationally renowned specialist in autonomous systems. Together with Prof. Liam Paull of the University of Montreal, they lead the Duckietown project, which was conceived at the Massachusetts Institute of Technology (MIT) in 2015. The goal was to build a platform that was small-scale and cute yet still preserved the real scientific challenges inherent in a full-scale autonomous robot platform. Duckietown is now a worldwide initiative to realize a new vision for AI/robotics education. It teaches participants to programme autonomous vehicles to navigate a structured environment, with rubber ducks as the passengers of the vehicles, and has now been used by over 80 universities in 23 countries worldwide. Their next endeavor is to create a series of massive open online courses (MOOCs) focused on the science and technology of autonomy through the lens of self-driving cars. In this multi-institution project, ETH will take the lead and develop the first course of the MOOC series.

What will the MOOC be about and what do you seek to achieve for participants?

The Duckietown MOOC series will be about autonomy, or how to make machines take their own decisions to accomplish broadly defined tasks. This topic is both intellectually fascinating and very timely given the rapid progress of robotics and AI technologies in our daily lives. Autonomy will be studied through self-driving cars, an application with disruptive social potential.

Participants will engage in a sequence of software and hardware hands-on learning experiences whose particular focus is on overcoming the challenges of deploying robots in the real world. Our hope is that participants will gain useful skills and come to appreciate and understand the challenges of this technology, while at the same time having lots of fun!

What motivated you personally to make a MOOC?

The Duckietown project was developed to make the science and technology of autonomy accessible to the broadest possible audience, not only to those learners lucky enough to have access to premier educational institutions where these topics are addressed. Building a Duckietown MOOC experience was a logical step towards achieving the mission of the project. We are grateful to ETH-Innovedum for supporting our efforts and extremely excited to bring our vision for learning autonomy to the world.

What are the unique didactic challenges?

Teaching autonomy requires a fundamentally different approach compared to many computer science and engineering disciplines. Extensive and diverse prior knowledge is needed to really comprehend autonomy, from “pure” mathematics and physics to “modern” machine-learning-based approaches. Moreover, robots are real-world machines, and theory and practice do not always play well together. To see the theory work in the real world, it is necessary to translate the knowledge into software architectures and deploy them on hardware platforms. Finally, there is a proliferation of hardware platforms and software tools out there, each with its own peculiarities, strengths and shortcomings. It is not always clear which tools are worth investing time in mastering, and how this competence will translate to different platforms.

How will you overcome these challenges?

To address these barriers of entry to learning autonomy, the MOOC “Self-Driving Cars with Duckietown” will have several distinguishing features, namely:

  • Competency-based topic progression
    The sequence of topics in the courses is determined by asking the question: “what is the most we can make our robot do, with the least amount of prior knowledge?” instead of “what is the best order to explain things?”. As learners progress through behaviors of increasing complexity to reach the final objective, it becomes naturally necessary to introduce new concepts and tools to address the limitations of previous behaviors. This approach allows students to jump right into “the middle of things” (getting Duckiebots to do things!) and gradually revisit concepts through the various technical frameworks and implementation solutions that are so important to align theory with practice, leading to a stronger comprehension of how and why things happen.
  • Hardware-based hands-on learning on a standardized platform (the Duckiebot) with open-source industry-widespread software tools
    This is a robotics and AI MOOC where every participant will have the opportunity to follow along by doing real world experiments with their own robot at home. The Duckietown framework was designed, from the software stack (i.e., Python, ROS, Docker) to the Duckiebot and Duckietown city, to make the course accessible for all learners, both pedagogically and economically.
  • Remote evaluations of hardware assignments
    The last, but not least, distinguishing factor of this MOOC is the use of remote facilities (the Duckietown Autolabs) where reproducible performance assessment of hardware assignments is conducted in controlled environments. This feature enables remote grading of hardware assignments, which, to the best of our knowledge, is a first for a robotics MOOC.

Would you like to know more about autonomy with self-driving cars? The course starts on January 15, 2021 and will be published on the edX platform.

Inspired to start your own MOOC project? Please have a look at our website and contact Marinka Valkering to discuss possibilities!

Safe Exam Browser 3.0 released

Computer-based exams (online exams) offer many advantages over traditional written exams. They can often be designed more authentically, since students can work during the exam with programs and resources that they also use in the exercises or later in their everyday work (e.g. through an embedded programming environment). Depending on how the exam is structured, (partially) automated grading is also possible, which greatly reduces the examiners’ workload in large courses, or indeed makes assessment feasible in the first place (e.g. in MOOCs).

A prerequisite for online exams is a locked-down environment on the exam computer that permits access only to allowed resources during the exam. This is where the open-source software Safe Exam Browser comes into play: it has existed for more than 10 years and is now used internationally by numerous educational institutions. Safe Exam Browser not only locks down the exam computer in a controlled manner, but also allows the use of selected local programs or access to specific web resources.

For version 3, Safe Exam Browser (abbreviated to SEB below) underwent a comprehensive refactoring. SEB was reprogrammed from the ground up according to current standards, while its basic mode of operation was retained. The major changes were made “under the hood” and are primarily of a technical nature.

For example, the Chromium engine is now used as the browser component instead of the Mozilla Gecko engine. The detection of virtual machines as the runtime environment has been improved. There are extended functions that allow the exam system to control the browser session (for the SEB-Moodle deeper integration). And compatibility with antivirus programs has been further improved.

The user interface has also been modernised. SEB now features an “Action Center”, similar to the side menu known from Windows 10. When switching between open application windows with ALT-TAB, small preview windows of the open applications are now shown, as is customary on Windows. In addition, working with tablet PCs is better supported.

User interface of SEB 3 with the Action Center shown on the left

SEB continues to be offered in versions for Windows, macOS and iOS. For Windows, a native 64-bit variant is now provided in addition to the 32-bit version.

The full range of functions of SEB 2.4.x will be available with version 3.2. This concerns, for example, access to the webcam by web applications (more details in the release notes).

In addition, the roadmap for the further development of SEB foresees revising the bundled configuration tool to bring it into line with the configuration tool of SEB Server, as well as improving the integration with further learning management systems.

Engaging students through technology-enhanced feedback

Teachers’ written commentary on student assignments is a fundamental element of instruction in almost any discipline. However, it is unclear what impact the feedback has on students. Consequently, teachers face fundamental questions for which no ready answers are available: Which components of commentary are most helpful, and how are they most effectively delivered? How can students’ uptake of commentary be optimised, and how can teachers be most efficient when providing commentary?

Giving and receiving feedback with Edword

Edword is an assignment feedback tool that provides answers to these questions; it quantifiably improves the quality and efficiency of commentary and its uptake and measures previously unobserved aspects of learning. Edword is an online platform to which students can upload written assignments of any kind set by teachers.

Teachers’ view of sample comments for a microbiology lab.

Teachers can then add commentary to the text. These comments can be written individually as done in many traditional teaching settings, but Edword also enables the rapid application of prewritten comments from comment sets. These comment sets can address any aspects of written work in any discipline. They can be prepared by teachers working individually or shared between colleagues in teams.

Students’ view of feedback through a comment with additional video material.
Sample comment for a common mistake in lab reports.

Because they can be adjusted and augmented, a comment set can evolve as individual comments are added and improved. The quality and level of detail that can be delivered within the time available for commenting on assignments is thus substantially increased.

When the student opens the commentary, the most important comments, selected by the teacher, are presented first and repeated comments bundled so that the student sees every instance of the same comment in the assignment. This allows the teacher to optimise individual students’ uptake of their commentary. Edword enables further optimisation by measuring two aspects of students’ engagement with the comments: the time the student spends with each individual comment is automatically recorded, and the student gives one of three responses—helpful, neutral, or unhelpful—to each comment. These data points are automatically collated by comment and assignment to provide a fine-grained evidence base for further adaptation of comment sets and commenting practice to the specific requirements of programs and disciplines.

Successful pilot project

Edword’s suitability for use with UZH and ETH students was tested in a pilot project between February and May 2020. Writing instructors from the English unit of the Language Center of UZH and ETH Zurich (LC) attended a LET Refresh Teaching event at ETH on 4 September 2019 where selected EdTech startups from the Kickstart Accelerator program presented their tools; here, they were introduced to the Edword online writing assessment tool. Seeing its potential, four writing instructors collaborated with LET and ran a pilot project to test Edword in their courses comprising 167 students in all. The instructors created and shared comment sets containing a total of over 350 specialized comments. The participating students were surveyed online about their experience with Edword at the end of their courses (response rate 32%). Some 87% said they preferred commentary via Edword over traditionally delivered comments.

Potential for broader application at ETH

The feedback processes in Edword can be used to provide highly nuanced and sophisticated commentary for any kind of written assignment, and comment sets can be adapted to the demands of any discipline. The comment sets can be written centrally or developed collaboratively or individually, and the uptake of commentary is monitored in detail. Further test groups can demonstrate the range of academic contexts in which Edword is applicable. Please contact Melanie Walter if you are interested in trying out Edword in your ETH course.

Online exams reloaded

SEB Server version 1.0 released

Demand for computer-based exams is rising steadily, and with it the demands on the technical infrastructure. Safe Exam Browser, which provides a locked-down exam environment on a wide range of devices, has already proven itself over many years. Complementing it, the server application “SEB Server” is currently being developed at ETH Zurich in the LET unit (LINK) as part of the Swiss MOOC Service project of swissuniversities, in order to simplify the technical preparation and execution of online exams. Key functions of SEB Server are the central configuration of the SEB clients for individual exams and simple exam administration through a direct connection to the relevant learning management system (currently for Open edX within the Swiss MOOC Service). In addition, SEB Server enables central control and monitoring of the SEB clients while an online exam is in progress.

Using SEB Server makes sense in principle in all online exam scenarios. It is particularly beneficial for remote exams (e.g. in MOOC courses), but also for exams held on the institution’s premises on students’ own devices (“bring your own device”, BYOD). Here, SEB Server offers functionality that would otherwise be difficult or impossible to achieve, for example the automatic distribution of the SEB client configuration or the monitoring of the connected SEB clients while the exam is running.

Monitoring of the connected SEB clients by SEB Server during a running exam

SEB Server is an open-source project based on current Java Enterprise and Docker technology. It is multi-tenant capable, i.e. several institutions can be managed within the same server instance. Further useful features are template management for SEB configurations and multi-level user and permission management. Details can be found on the GitHub project page and in the online documentation.

Version 1.0 of SEB Server was released in June 2020. In the future, SEB Server will be extended with interfaces to further learning management systems. A Moodle integration is the main focus here; beyond that, integrations with other systems such as ILIAS or OpenOLAT are possible.

We will report on the new developments around Safe Exam Browser in an upcoming post on the LET blog.

Deeper integration. Moodle and Safe Exam Browser take their relationship to the next level

For many years, ETH has relied on two open-source software projects: Safe Exam Browser (SEB) and Moodle form the foundation of online assessment at ETH Zürich. They work together seamlessly, but the management of SEB configurations has been somewhat complicated. With the brand-new release of Moodle 3.9 in early June 2020, the integration was improved significantly to support a number of different online exam scenarios. For example, in a bring-your-own-device (BYOD) scenario, admins now have the possibility of enabling teachers to configure SEB settings directly in a quiz. Admins can manage templates of SEB settings that are provided to teachers via the quiz settings in Moodle.

A development project between two open source communities

Two popular open-source software projects close ranks.

How did this come about?

In 2017, a member of LET was on sabbatical in Australia and visited Moodle’s headquarters. During this visit, the idea arose that these two popular open-source solutions should work together on a deeper level. Together with the Moodle team at Bern University of Applied Sciences, the Moodle team at ETH Zürich planned a development project and wrote user stories. During this phase, it became clear that the estimated amount of work could not be handled by these two institutions alone.

A crowdsourced project – eight universities and two companies acted in concert

With the user stories in our hands, we reached out to communities like SAMoo, MoodleDACH, SIG E-Assessment and the Moodle-Forum. After dozens of calls, meetings and discussions, a further six universities (Zurich University of Applied Sciences, Ruhr-Universität Bochum, Berlin School of Economics and Law, University of Applied Sciences Neubrandenburg, University of Applied Sciences Upper Austria, University of Applied Sciences and Arts Hannover) agreed to contribute to the project. The code was developed by Catalyst IT, a Moodle Partner in Australia, with advice and support from Moodle HQ and the SEB development team. A diverse project team then worked together closely to find solutions for very different exam scenarios. Within only six months, the new functionality was submitted for integration into Moodle core version 3.9, and new SEB versions were published.

Configure Safe Exam Browser directly in Moodle

Screenshot of the new settings in the quiz activity.

Usually, Safe Exam Browser has to be configured via its Config Tool. In Moodle 3.9, teachers can do this directly in Moodle, for each separate Quiz activity. This is extremely helpful for e-assessments in a BYOD scenario. There is no longer any need to provide a separate SEB settings file to students before the exam. As students access the quiz, Moodle hands out the SEB configuration file and SEB on the student’s device is reconfigured as required. This means teachers can set a different SEB quit password for every attempt, configure different additional software for different quizzes, and – because it is that easy to configure – even consider using SEB in formative quizzes to help students focus.

Project learnings

This project was challenging in many ways. There were many different stakeholders, tight deadlines, two big open source communities, and different locations and time zones. Here are the most important learnings.

  • Joint projects are possible but need an intense discussion in the community. Only about 30% of all interested institutions were able to invest in the project.
  • Scenarios and user stories are excellent methods for creating a shared understanding of the requirements and objectives between all project partners.
  • To prevent endless discussion, identify one or two strong partners, create provisional scenarios and user stories and then reach out to the wider community.
  • Communication over different time zones is challenging (especially if you have weekly sprints) but manageable. Be mindful to share recordings and keep an active chat channel alive.
  • When faced with obstacles (such as lack of funds), think outside the box and search for allies that are willing to pursue the same goals as you. There is almost always a solution.

Conclusion

Despite the challenging nature of this project and the different needs of the stakeholders, the objectives were met. It was a pleasure to have been part of this joint community project together with Moodle HQ and the SEB core team. We are sure the outcome of this project will bring online assessment to a new level.

The new Moodle version can be downloaded here.  The Safe Exam Browser is available here.

Being more human online together

Many useful lists provide technical recommendations for optimising virtual meetings (here is one from the IT Services Team at ETH Zurich). In this short post, though, I will go one step further. In video conferencing we are, quite literally, “hosts”. This makes us responsible for managing the behaviour of others, especially if it is disruptive. In this context it is important to communicate our expectations clearly. This requires us to consider our own requirements and the needs and circumstances of those participating in our virtual sessions, and to find a balance between them.

Teaching involves relationships, firstly between lecturers and students, but also among students themselves. Therefore, effective facilitation addresses the needs inherent in human relationships and how we can respect these in the virtual environment.

Let’s look at some key aspects to be considered:

Eye contact

Eye contact conveys attention and interest. (Keep in mind here that some cultures prefer to avoid direct eye contact.) If you wish to transmit a sense of eye contact, you can do this by looking directly into the camera and not at the face of the person you are talking to, even if this means that you yourself lose eye-contact. One tip is to minimize the facial images and move them to the top of your screen, near your camera. Your gaze is then close to the camera, but focused on faces.

Names

Many video conferencing tools allow participants to change the name displayed alongside their image. Consider asking students to adjust this to their preferred name. “SmiJo” is a lot less personal than “John Smith” or even just “Johnny”.

Rapport

To build rapport, take the time to make people feel acknowledged and welcome at the beginning of a virtual session. Create space for “warming up” with smalltalk before launching into the reason for the session. Depending on the size of the group you may wish to greet individuals by name when they appear, even if they are late. If the group is large, you can still acknowledge latecomers en masse. Trusting that their reasons for being late are legitimate will help to create an atmosphere which is conducive to learning.

Sound

Think about how you want participants to manage sound. Is it important to minimise background noise? The more participants there are, the more distracting background noise can be. However, in smaller informal settings, ambient noise can help people feel connected – an important consideration the longer we are in physical isolation. Agree on how the mute button should be used.

Video

A common belief is that all participants should switch on their cameras when joining a Zoom meeting. However, this may be difficult for various reasons: attendees may not have a camera; there may be other people around; or their bandwidths may not be able to cope with video. Some people are also profoundly uncomfortable with displaying themselves on camera for long periods.

Lecturers should therefore consider why they want students to turn their cameras on. Then they should articulate their expectations, and consider equally acceptable alternatives. Do cameras really need to be switched on? If so, is that for the whole meeting? For example, if the meeting is long, but not particularly interactive, the lecturer might ask the students to turn their cameras on at the beginning to “establish contact”, but say that it’s OK to switch them off later. This might be especially relevant if everyone is viewing slides, for example. Using “hide self view” can also minimise the cognitive fatigue we are all experiencing due to the increased frequency of online meetings and the length of time spent in video conferences.

Remember that not everyone thinks about how they appear on screen: it may be useful to give people feedback and guidance in this area. Their lighting may make their images too dark to see, or if the video is flickering it can be hard on the eye after a time.

Background

The background displayed on the screen can be both informative and distracting. Students may choose virtual backgrounds to obscure a messy room or one that reveals things they prefer to keep private (such as family photos or an extensive wine collection). If their choice of background is too distracting you should let them know. Conversely, you can use the virtual background function as a way to connect. Ask people to share a photo of a place meaningful to them, or an image that provides comic relief!

Movement

As the host, when you view a gallery of many faces your eye will naturally be drawn to movement. If people join via mobile phone or tablet, they are likely to be more mobile and may move around in their spaces. This is sometimes unavoidable, but it can be very distracting. Make participants aware of this and ask them to deactivate their video if they change position or are moving around a lot.

Chat

Think about the best way to use the chat function. Will you be monitoring it actively, or not at all? Would you like people to use the chat to announce their departure from the session? Most video conferencing tools offer multiple ways to communicate. Tell your students how you want them to use them.

The intention of this post was to encourage you to think broadly about how you run a virtual meeting or lecture as well as how you manage your own on-screen behaviour. Our available technology provides us with so many options, but these sometimes generate divergent behaviour. Here establishing fair expectations will go a long way towards facilitating a successful virtual event.
