The Wizard of Oz (WOz) experiment

Performing the Wizards of Oz – Written by Martin Porcheron

The Wizard of Oz experiment (WOz) is a research approach in which a seemingly intelligent system is presented to users, typically as part of a research study. Unbeknownst to the user, the presented intelligence is a mirage: the gubbins of the supposedly intelligent system are run by a human operator pulling metaphorical levers. In other words, the intelligence is a fiction. In an article presented at ACM CSCW 2020, and due to be published in Proceedings of the ACM on Human-Computer Interaction, we take a look at our use of the method and unpack the interactional work that goes into pulling it off. In other words, we pull back the curtain on the method. This blog post is a bit of a teaser, focusing solely on some of the elements of collaboration that we identified in the article.

Instead of (or in addition to) reading this blog post, you can watch the presentation on YouTube (it was a virtual conference in 2020 for obvious reasons). The presentation includes a short video clip from the data we collected, if you want to get a feel for how the study unfolded.

https://youtu.be/Ja8xwxV0he0

As you can probably guess, the method’s name comes from the L. Frank Baum novel The Wonderful Wizard of Oz. Early uses of the method in HCI took less exciting names like ‘experimenter in the loop’ [1]. A WOz approach offers the ability to prototype and potentially validate—or not—design concepts through experimentation, without the costly development time that a full system may require [2]. Approaches have included simulating things such as a ‘Listening Typewriter’ [3] and public service information lookup for a telephone line [4]. In WOz, different elements may be simulated, ranging from database lookup through to mobile geolocation tracking [5]. Due to the recent commercialisation of voice recognition technologies, there is a plethora of literature using the approach for studies in voice interface design, with natural language processing being the simulated component. I’d guess that’s because building natural language interfaces is a costly endeavour (monetarily and timewise).

In our paper, we look at the use of a voice-controlled mobile robot for cleaning, where we simulated the natural language processing of the voice instruction and its conversion into an instruction to a robot (i.e. the Wizard listened to requests and controlled the robot). We were running RoboClean as part of a language elicitation study, although that’s not really the focus of the paper. Crucially, our study required two researchers to run the proceedings: one scaffolded the participant interaction and the other performed the work of the ‘Wizard’, responding to participants’ requests and controlling the vacuum.
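To make that division of labour concrete, here is a minimal sketch in Python of what the Wizard’s side of such a setup could look like: the Wizard hears a request and manually selects a robot action, which is logged for later analysis. All names here are hypothetical, for illustration of the general technique only; this is not the actual RoboClean implementation.

```python
# Hypothetical sketch of a Wizard's control loop (not the actual RoboClean
# code): the Wizard hears the participant's request and manually picks the
# robot action that fits the fiction.
import time

ACTIONS = {
    "1": "move_forward",
    "2": "turn_left",
    "3": "turn_right",
    "4": "start_vacuum",
    "5": "stop",
}

def send_to_robot(action: str) -> None:
    """Stand-in for whatever transport a real robot would use (e.g. a Wi-Fi API)."""
    print(f"[robot] executing: {action}")

def wizard_console() -> None:
    log = []  # timestamped actions, kept for later analysis
    while True:
        choice = input("Heard a request? Action (1-5, q to quit): ").strip()
        if choice == "q":
            break
        action = ACTIONS.get(choice)
        if action is None:
            continue  # ignore mis-keyed input rather than break the fiction
        send_to_robot(action)
        log.append((time.time(), action))

if __name__ == "__main__":
    wizard_console()
```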

Collaboration was key

In the paper we go into much more detail, focusing on the various aspects needed to pull off such a study, starting with how the ‘fiction’ of the voice-controlled robot is established and presented to users, through to how the researchers attend to a technical breakdown mid-study. We show how the fiction is progressively established as an interactional accomplishment between all three interactants (i.e. the two researchers and the participant).

The researcher, who in our study stands with the participant, introduces the scenario, shows the robot to the participant, and guides them into instructing it (i.e. they scaffold the participant’s involvement in the study). The participant ostensibly talks to and responds to the vacuum. The Wizard—who is listening in—responds to the request, in accordance with the fiction presented by the researcher and with notions of what a voice-controlled vacuum robot might reasonably respond to. It’s the Wizard whom the participant is really instructing in such a study (the voice-controlled robot is but a fiction). The researcher standing with the participant must then performatively account for the actions taken by the Wizard according to that fiction. In other words, whatever ‘the robot’ does, the researcher must attribute those actions to the robot to conceal the machinations of the Wizard.

There are other challenges, of course, that make this harder: the Wizard must respond to participants’ requests quickly, and in a way consistent with the fiction, in order to preserve the methodological validity of the study. We also discuss a situation in the article where a technical glitch with the robots required both researchers to improvise together to uphold the secrecy of the Wizard, while collaboratively resolving the issue they faced.

Given the dramatic naming of the approach, we describe this accomplishment as a triad of fiction, taking place on the ‘front stage’ (with the Wizard working ‘backstage’). Around the same time, others also referred to this as ‘front channel’ and ‘back channel’ communication [6]. See the figure for how we pictorially represent the communication between the various interactants in our study.

Practical takeaways

Above, I’ve focused on the collaboration required to pull off the study, but we also devote a fair chunk of the article to detailing the practical steps we took in implementing the study design and running the study. Here, we discuss how we pieced together various technologies to present a believable ‘voice-controlled robot’. We had a shared protocol document that both the researcher and the Wizard used to maintain awareness of each other’s actions, and an outline script that detailed the sorts of requests the robot would respond positively (or not) to; this script was progressively updated throughout the studies (a sketch of how such a script might look as data follows below). While we frame running a WOz study as a performance, we were keen to stress the methodological obligations involved too: the performance must be undertaken according to methodologically valid research practice. We argue this requires meticulous care and attention, driven by the collaboration of the researchers throughout.
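For illustration, here is one way such an outline script could be kept as data, so both researchers work from the same source and new cases can be added between sessions. The patterns and responses below are invented for this sketch and are not the study’s actual materials.

```python
# Illustrative sketch only (not the actual study materials): phrases the
# "robot" acts on, a scripted rejection for everything else, and room to
# add new cases between sessions.
import re

SCRIPT = {
    # pattern (case-insensitive) -> robot action
    r"\b(clean|vacuum) (up|here|this|over there)\b": "start_vacuum",
    r"\bstop\b": "stop_vacuum",
    r"\bcome back\b": "return_to_dock",
}

REJECTION = "Sorry, I can't help with that."  # scripted negative response

def wizard_lookup(utterance: str) -> str:
    """Return the action the Wizard should perform, or a scripted rejection."""
    for pattern, action in SCRIPT.items():
        if re.search(pattern, utterance, re.IGNORECASE):
            return action
    return REJECTION

# Progressively updating the script between sessions would here simply
# mean adding new entries to SCRIPT.
print(wizard_lookup("please clean up a bit"))  # -> start_vacuum
print(wizard_lookup("order me a pizza"))       # -> scripted rejection
```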


AI Technologies for Allergen Detection and Smart Cleaning in Food Production Workshop

In collaboration with the AI3 Science Discovery (AI3SD) and Internet of Food Things (IoFT) EPSRC Networks, the RoboClean team ran a workshop in London on the 17th of October. The focus of the workshop was to discuss how digital technologies such as AI, sensors, and robotics can be used for enhanced allergen detection and factory cleaning within food production environments. The workshop was well attended by a range of stakeholders from industry, academia, and organisations such as the Food Standards Agency.

The morning of the workshop had three speakers. Nik Watson from the University of Nottingham gave a talk on the future of factory cleaning, covering a range of research projects from the University that developed new digital technologies to monitor and improve factory cleaning processes. The second talk was from AI3SD lead Jeremy Frey from the University of Southampton. Jeremy’s talk gave an introduction to AI and covered a range of new sensors that could be used to detect the presence of allergens in a variety of food products and environments. The final talk was delivered by Martin Peacock from Zimmer and Peacock, a company that develops and manufactures electrochemical sensors. Martin gave an introduction to the company and the technologies they develop before demonstrating how their sensor could be connected to an iPhone to determine the hotness of chilli sauce. Martin’s talk finished by discussing how electrochemical sensors could be used to detect allergens within a factory environment.

The afternoon of the workshop focused on group discussions on the following four topics, all related to allergen detection and cleaning within food production:

  • Data collection, analysis and use
  • Ethical issues
  • Cleaning robots
  • Sensors

Each group had a lead; however, delegates moved between tables so they could contribute to more than one discussion. At the end of the workshop, the lead from each group reported back with the main discussion points covered by the delegates. The delegates on the ‘robotics’ table reported that robots would play a large role in the future of factory cleaning, as they would free up factory operators to spend time on more complicated tasks. The group felt that the design of the robots was essential and discussed how new factories should also be designed differently to facilitate robot cleaning more easily. The group also thought that effective communication with the robot was a key issue needing further research. The ‘sensors’ group reported that any new sensors used to detect allergens or levels of cleanliness would need to fit into existing regulations and practices, but would be welcomed by the industry, especially if they could detect allergens or bacteria in real time. The ‘data’ group reported a need for data standards relevant to industrial challenges, and also for open-access data to enable the development of suitable analysis and visualisation methods. The ‘ethics’ group discussed numerous key topics including bias, uncertainty, transparency, augmented intelligence, and the objectivity of AI.

RoboClean update 11/6/2019

The RoboClean project is investigating the work of cleaning factory floors, and the potential for robotic cleaners to work alongside—and with—human operators to ensure factories meet the strict industry hygiene guidance. These robots will also use the latest sensors to detect the presence of food allergens, allowing factory managers to avoid cross-contamination of products, especially in batch-driven processes.

The project will deliver and evaluate an interactive connected platform to enable novel human-robot collaboration and IoT smart sensor data collection in food factories. See our prior blog post for more information about the project. In this post we would like to present an update of our progress.

We are engaging with local SMEs and multinational food manufacturers to understand more about the sorts of environments in which we envisage these technologies will be deployed. Through interviews, workshops, and factory visits, we intend to explicate the requirements and challenges—both legal and socio-technical—of deploying robots in complex environments such as factories. These visits are now ongoing, and their outcomes will inform the project’s design work. This work is being led by Martin Porcheron in Computer Science.

Roberto Santos, from the University of Nottingham Digital Research Service (DRS), has joined the project and is collaborating with Carolina Fuentes from the Horizon Digital Economy Research Institute on the development of our demonstrator robot platform. This platform, when complete, will support both autonomous and manual management of robot teams as well as individual robots. We are also developing a number of elicitation studies to understand the language and sorts of commands factory workers would use to direct and coordinate robots. Our focus at this stage is to deliver a platform suitable for controlling one robot at a time, and this is already taking shape, with the elicitation studies supporting the development process. Brian Logan from the Agents Lab in Computer Science is working with the team to ensure the platform design is suited to the multi-agent collaboration goals that will be delivered in later stages of the project.

Ahmed Rady from the Faculty of Engineering has also recently joined the project and is developing the processes for the smart sensors to detect various allergens, including collecting data that will be vital for their detection. One of the biggest challenges facing manufacturers is the cross-contamination of allergens within the manufacturing environment, and cleaning is a critical step in preventing this. By deploying sensors with the robots, we will be able to detect, and potentially prevent, food safety events before products leave the factory.

Overall, the team is already working towards the project’s deliverables and is looking forward to a successful 2019.

Finally, the team will be presenting a poster at the ConnectedEverything 2019 conference in June, where we will be on hand to discuss the project’s objectives, approach, outcomes, and potential collaborations. We think this is a great opportunity to connect with potential partners in the manufacturing industry and look forward to seeing you there.

Written by Martin Porcheron


RoboClean: Human Robot Collaboration for Allergen-Aware Factory Cleaning

Human-robot collaboration is expected to increase in the coming years [Elprama, 2017]. The RoboClean project will investigate the potential of human-robot collaboration, integrated with IoT sensors, for cleaning and allergen detection on a factory floor. We will be exploring the impact of introducing a robot that can work ‘hand in hand’, or better said ‘hand to robo-hand’, with a factory worker – focusing on assisting them rather than replacing their role.

RoboClean targets the food industry – supporting cleaning and safety processes in an industry where the presence of an allergen is a safety risk, and one whose relevance is underlined by the sector’s annual contribution of £28 billion to the economy.

Early stages of the project involved a visit to a bread factory in Birmingham to learn about current work practices in food factory cleaning and to gain a better understanding of the social context of food factories. In addition, we built and evaluated a “Human-Robot Collaborative-Hub” – effectively the “brain” – combining a voice interface and a robot cleaner. This “hub” will store data on robot activity, to identify zones to be cleaned through voice interface control. This activity enabled us to identify what functions and procedures (API) were required to control the robot; a hypothetical sketch of such an API follows below.
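As a thought experiment, the kind of API surface this exercise might identify could look something like the following sketch. The class names, method names, and zone model are assumptions for illustration only, not the project’s actual interface.

```python
# Hypothetical sketch of a robot-control API surface (not the project's
# actual interface): named zones, simple movement/cleaning commands, and
# an activity log of the kind the hub would store.
from dataclasses import dataclass, field

@dataclass
class Zone:
    """A named area around a landmark, e.g. 'round table' or 'fridge'."""
    name: str
    x: float
    y: float
    radius_m: float

@dataclass
class RobotCleaner:
    robot_id: str
    activity_log: list = field(default_factory=list)  # data the hub stores

    def goto(self, zone: Zone) -> None:
        self.activity_log.append(("goto", zone.name))

    def clean_zone(self, zone: Zone) -> None:
        self.goto(zone)
        self.activity_log.append(("clean", zone.name))

    def dock(self) -> None:
        self.activity_log.append(("dock", None))

# The hub could inspect activity_log to work out which zones still need
# cleaning and direct the robot accordingly.
robot = RobotCleaner("cleaner-01")
robot.clean_zone(Zone("round table", x=2.0, y=3.5, radius_m=1.0))
print(robot.activity_log)
```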

The next stage of the project will involve designing and developing the architecture of the “Human-Robot Collaborative-Hub” as a bridge between cleaning robots and different user interfaces – exploring the control of robots in different areas through voice interfaces, for example: “Robot, clean around the round table” or “Robot, clean around the fridge”. We are also working on integrating a sensor capable of detecting allergens, with the aim of directing the robot cleaner to specific locations using data from the “HR-Collaborative-Hub”. A minimal sketch of how such a voice command might be routed appears below.
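To give a feel for the bridging role described above, here is a minimal sketch, with assumed names, of how a hub might parse such a voice command and route it to a known zone. A real deployment would use a proper natural language component rather than a regular expression.

```python
# Illustrative sketch of routing a "Robot, clean around the X" command to
# a known zone; names and zone registry are assumptions, not project code.
import re

KNOWN_ZONES = {"round table", "fridge"}  # illustrative zone registry

COMMAND = re.compile(r"robot,?\s+clean around the (?P<zone>.+)", re.IGNORECASE)

def route_command(utterance: str) -> str:
    """Map a voice command to a cleaning zone, or report why it can't."""
    match = COMMAND.search(utterance)
    if not match:
        return "unrecognised command"
    zone = match.group("zone").strip().rstrip(".").lower()
    if zone not in KNOWN_ZONES:
        return f"unknown zone: {zone}"
    return f"dispatching cleaner to: {zone}"

print(route_command("Robot, clean around the round table"))
# -> dispatching cleaner to: round table
```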