The Wizard of Oz (WOz) experiment

Performing the Wizards of Oz – Written by Martin Porcheron

The Wizard of Oz experiment (WOz) is a research approach in which an intelligent system is presented to users, typically as part of a research study. Unbeknownst to the user, the presented intelligence is a mirage, with the gubbins of the supposedly intelligent system run by a human operator pulling metaphorical levers. In other words, the intelligence is a fiction. In an article presented at ACM CSCW 2020, and due to be published in Proceedings of the ACM on Human-Computer Interaction, we take a look at our use of the method and unpack the interactional work that goes into pulling it off. In other words, we pull back the curtain on the method. This blog post is a bit of a teaser, focusing solely on some of the elements of collaboration that we identified in the article.

Instead of (or in addition to) reading this blog post, you can watch the presentation on YouTube (the 2020 conference was virtual, for obvious reasons). The presentation includes a short video clip from the data we collected, if you want to get a feel for how the study unfolded.

https://youtu.be/Ja8xwxV0he0

As you can probably guess, the method’s name comes from the L. Frank Baum novel The Wonderful Wizard of Oz. Early use of the method in HCI took less exciting names like ‘experimenter in the loop’1. A WOz approach offers the ability to prototype and potentially validate—or not—design concepts through experimentation, without the costly development time that a full system may require2. Approaches have included simulating things such as a ‘Listening Typewriter’3 and public service information lookup for a telephone line4. In WOz, different elements may be simulated, ranging from database lookup through to mobile geolocation tracking5. Due to the recent commercialisation of voice recognition technologies, there is a plethora of literature using the approach for studies in voice interface design, with natural language processing being the simulated component. I’d guess that’s because building natural language interfaces is a costly endeavour (monetarily and timewise).

In our paper, we look at the use of a voice-controlled mobile robot for cleaning, where we simulated the natural language processing of the voice instruction and its conversion into an instruction to a robot (i.e. the Wizard listened to requests and controlled the robot). We were running RoboClean as part of a language elicitation study, although that’s not really the focus of the paper. Crucially, our study required two researchers to run the proceedings: one scaffolded the participant interaction and the other performed the work of the ‘Wizard’, responding to participants’ requests and controlling the vacuum.
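To give a flavour of the Wizard’s side of this, here is a minimal hypothetical sketch (in TypeScript; this is not our actual tooling, and the command names are invented) of a console that lets a Wizard map what they hear to robot commands:

```typescript
import * as readline from 'node:readline';

// Minimal sketch of a Wizard's control console (hypothetical). The Wizard
// listens to the participant's spoken request and presses a key, which is
// mapped to a robot command, keeping the fiction of a voice-controlled
// robot intact.
const commands: Record<string, string> = {
  c: 'start-cleaning',  // e.g. "clean over there"
  s: 'stop',            // e.g. "stop cleaning"
  d: 'return-to-dock',  // e.g. "go back to your base"
};

readline.emitKeypressEvents(process.stdin);
if (process.stdin.isTTY) process.stdin.setRawMode(true);

process.stdin.on('keypress', (_str, key) => {
  if (key?.ctrl && key.name === 'c') process.exit(0); // quit the console
  const command = key?.name ? commands[key.name] : undefined;
  if (command) {
    // A real deployment would call the robot's control API here; this
    // sketch only logs the dispatched command.
    console.log(`wizard -> robot: ${command}`);
  }
});
```

The sketch only illustrates the kind of backstage machinery involved; in the study itself, of course, the Wizard has to hear, interpret, and act on requests in the moment.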

Collaboration was key

In the paper we go into much more detail, focusing on the various aspects needed to pull off such a study: from how the ‘fiction’ of the voice-controlled robot is established and presented to users, through to how the researchers attend to a technical breakdown mid-study. The fiction is progressively established as an interactional accomplishment between all three interactants (i.e. the two researchers and the participant).

The researcher, who in our study stands with the participant, introduces the scenario, shows the robot to the participant, and guides them into instructing it (i.e. they scaffold the participant’s involvement in the study). The participant ostensibly talks to and responds to the vacuum. The Wizard, who is listening in, responds to each request in accordance with the fiction presented by the researcher and with notions of what a voice-controlled vacuum robot might reasonably respond to. It’s the Wizard whom the participant is really instructing in such a study (as the voice-controlled robot is but a fiction). The researcher standing with the participant must then performatively account for the actions taken by the Wizard according to that fiction. In other words, whatever ‘the robot’ does, the researcher must attribute its actions to the robot to conceal the machinations of the Wizard.

There are other challenges, of course, that make this harder: the Wizard must respond to participants’ requests quickly, and in a way consistent with the fiction, in order to ensure the methodological validity of the study. We also discuss a situation in the article where a technical glitch with the robots required both researchers to work together in an improvised manner, upholding the secrecy of the Wizard while collaboratively resolving the issues they faced.

Given the dramatic naming of the approach, we describe this accomplishment as a triad of fiction, taking place on the ‘front stage’ (with the Wizard working ‘backstage’). Around the same time, others also referred to this as ‘front channel’ and ‘back channel’ communication6. See the figure for how we pictorially represent the communication between the various interactants in our study.

Practical takeaways

Above I’ve focused on the collaboration required to pull off the study; we also devote a fair chunk of the article to detailing the practical steps we took in implementing the study design and running the study. With this, we discuss how we used various technologies, piecing them together to present a believable ‘voice-controlled robot’. We had a shared protocol document that both the researcher and the Wizard used to maintain awareness of each other’s actions, and an outline script that detailed the sorts of requests that the robot would respond positively (or not) to; this script was progressively updated throughout the studies. While we frame running a WOz study as a performance, we were keen to stress the methodological obligations involved too: the performance must be undertaken according to methodologically valid research practice. We argue this requires meticulous care and attention, driven by the collaboration of the researchers throughout.
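For illustration, you can think of such an outline script as a mapping from the kinds of requests the Wizard listens for to how the ‘robot’ should respond. The sketch below is hypothetical (our actual script was a shared document, not code, and every entry here is invented):

```typescript
// Hypothetical outline script expressed as data. Each entry pairs the
// phrasings the Wizard listens for with how the 'robot' should react, so
// both researchers share the same expectations across sessions.
interface ScriptEntry {
  requestPatterns: RegExp[];              // phrasings the Wizard listens for
  action: 'clean-area' | 'stop' | 'none'; // what the 'robot' should do
  accepted: boolean;                      // within the fiction's capabilities?
  notes?: string;                         // updated between study sessions
}

const outlineScript: ScriptEntry[] = [
  { requestPatterns: [/clean (up|over) .*/i, /vacuum .*/i], action: 'clean-area', accepted: true },
  { requestPatterns: [/\bstop\b/i, /that's enough/i], action: 'stop', accepted: true },
  {
    // Requests outside the fiction must be declined consistently.
    requestPatterns: [/make .* coffee/i],
    action: 'none',
    accepted: false,
    notes: 'Out of scope: respond as an unrecognised command.',
  },
];

// The Wizard (or a helper tool) can check an overheard request against it:
const match = (utterance: string): ScriptEntry | undefined =>
  outlineScript.find((e) => e.requestPatterns.some((p) => p.test(utterance)));

console.log(match('please vacuum near the mixer')?.action); // 'clean-area'
```

Keeping the script in one shared, living artefact is part of what lets the Wizard respond both quickly and consistently as sessions accumulate.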


For future updates, please follow the Smart Products Beacon website – June 2020

Our Products Campaign targeted business sectors that traditionally revolve around different kinds of physical products, and explored how these could be transformed through emerging Internet of Things technologies coupled with human data. We were able to take this forward jointly with the University of Nottingham Smart Products Beacon, which has continued the work started in Horizon.

Demonstrator projects continue to make progress with latest updates available on the Smart Products Beacon blogsite.

Cleaning a factory with robots – RoboClean

In food and drink manufacturing, a significant amount of employee time is dedicated to cleaning, which has a major impact on employee productivity and manufacturing efficiency. The cleaning of factory equipment typically unfolds as part of a process known as Clean-in-Place, and is beginning to take advantage of novel technologies such as in-line sensors, the IoT, and machine learning. However, the work of cleaning the factory floor is still primarily completed by human workers following strict industry standards specified by the British Retail Consortium (BRC).

RoboClean seeks to understand and address the industry need for cleaning support technologies, and is developing systems for deploying robots to assist in the cleaning of factories. Furthermore, the robots will be designed to detect and report the unwanted presence of allergens using smart sensor data analytics (e.g. wheat gluten protein in cereals, or peanuts in wheat flour), helping to prevent food safety events. The project thus aims to tackle one of the biggest challenges facing manufacturers: the cross-contamination of allergens within the manufacturing environment. Regular cleaning is a critical step in preventing this, but the challenge is exacerbated as manufacturers strive to provide more variety and alternative formulations (e.g. gluten free) and are required to verify the effectiveness of cleaning procedures for removing allergens from equipment as per the BRC industry standards. The Food Standards Agency states that the number of food safety events relating to allergens roughly doubled between 2014/15 and 2017/18, highlighting the pressing need to integrate smart sensors into the manufacturing and cleaning processes.

Furthermore, a key focus for the project is to develop an understanding of human-robot collaboration in complex environments such as factories (building upon studies of robots in-the-wild), and of how to coordinate multiple cleaning robots as co-bot teams (i.e. multi-agent collaboration). These foci will help to deliver novel solutions for monitoring and delivering cleaning to the required standards in an efficient and safe manner, alongside, and with, human workers on a factory floor. The outcomes of this project will include the design, implementation, and evaluation of an interactive connected system enabling novel human-robot collaboration and sensor data collection in a factory, developed by engaging with partners in industry (British Pepper and Spice) and the third sector (the Food and Drink Forum).

RoboClean is led by Joel E. Fischer and Nik Watson, and formed of members from across four departments at the University.

Martin Porcheron, Joel E. Fischer, Stuart Reeves, and Brian Logan
School of Computer Science, University of Nottingham

Carolina Fuentes
Horizon Digital Economy Research, University of Nottingham

Roberto Santos
Digital Research Service, Information Services, University of Nottingham

Ahmed Rady and Nik Watson
Faculty of Engineering, University of Nottingham

Funding

This project is funded by the University of Nottingham Smart Products Beacon of Excellence and Horizon Digital Economy Research.

This post’s content is based upon work by all members of the project, and previous project summaries.


Smart Products Beacon – “Sensors support machine learning”

Nicholas Watson, Assistant Professor in the Faculty of Engineering, discusses whether online sensors and machine learning can deliver Industry 4.0 to the food and drink manufacturing sector in the Journal of the Institute of Food Science and Technology, vol. 33, issue 4, December 2019.

“Manufacturing is experiencing the 4th industrial revolution, which is the use of Industrial Digital Technologies (IDTs) to produce new and existing products. Industrial digital technologies include sensors, robotics, the industrial internet of things (IoT), additive manufacturing, artificial intelligence, virtual and augmented reality, digital twins and cloud computing. At the heart of Industry 4.0 is the enhanced collection and use of data. Industry 4.0 is predicted to have a positive impact of over £450bn to UK manufacturing over the next ten years[1], with benefits such as increased productivity and reduced costs and environmental impacts. But what does this mean for the UK’s largest manufacturing sector, food and drink?”

Link to article (page 20)

University of Nottingham Smart Products Beacon – job opportunity

Research Associate/Fellow (fixed term)

Reference: SC1494719

Closing date: Tuesday 4th February 2020

Job Type: Research

Department: Smart Products Beacon Computer Science

Salary: £27,511 – £40,322 per annum (pro rata if applicable) depending on skills and experience (minimum £30,943 with relevant PhD). Salary progression beyond this scale is subject to performance.

Applications are invited for a Computer Science and/or Engineering based Research Associate/Fellow within The Smart Products Beacon.

The Smart Products Beacon explores how leading-edge technologies emerging from Computer Science and Engineering can fundamentally disrupt the nature of products and how they are made. This University-led initiative tackles how the combination of physical and digital technologies, from robotically-enabled and additive manufacturing to artificial intelligence and mixed reality, can produce smarter and better products. We also work to ensure that products are produced in responsible ways to embody the fair and transparent use of personal data, operate safely, and respect human values.

The purpose of this role will be to support the Smart Products Beacon in establishing its research agenda by contributing to the creation of an independent research programme, linking across a number of disciplines to develop, deploy and study Beacon-related projects. Preference will be given to applicants in the following areas, but other skills will be considered if clear evidence of their link to the Beacon can be provided.

  • Software platform development
  • Artificial intelligence
  • Security
  • Development and integration of sensors and interfaces
  • Advanced manufacturing techniques (robotics, additive manufacturing, etc.)
  • User studies

The post holder will be expected to:

  • Create and lead an independent research programme
  • Work as part of a multi-disciplinary team to enhance impact
  • Have the flexibility to work on several ongoing projects while developing their own work
  • Contribute to, and lead, high quality publications and proposals

The role holder will have the opportunity to use their initiative and creativity to identify areas for research, develop research methods and extend their research portfolio.

This is a full time, fixed term post for 3 years. Job share arrangements may be considered.

Informal enquiries may be addressed to Professor Steve Benford.  Applications must be submitted online; please note that applications sent by email will not be accepted.

Our University has always been a supportive, inclusive, caring and positive community. We warmly welcome those of different cultures, ethnicities and beliefs – indeed this very diversity is vital to our success, it is fundamental to our values and enriches life on campus. We welcome applications from UK, Europe and from across the globe. For more information on the support we offer our international colleagues, visit; https://www.nottingham.ac.uk/jobs/applyingfromoverseas/index2.aspx

Professor Steve Benford explains the Smart Products Beacon

The Smart Products Beacon is tackling two big questions. What are smart products? And how are they made?

A smart product is one that uses digital technologies, and especially personal data, to become more adaptive, personalised and valuable. It captures data throughout its lifetime – through both manufacture and use – and uses this to adapt itself to consumers. In so doing it blends aspects of goods, services and experiences, the three dominant product logics from economics and business, into new forms. Sounds a bit abstract? Let’s take an example…

There was a time when a car was made of rubber and iron, and was something you bought and owned. But those days are passing. A modern car is part software: it contains an engine management system that can adapt its driving behaviour, and hosts a variety of other services for navigation and entertainment. Some might say the modern car is really a mobile phone on wheels. For many consumers, a car is now also a service that they lease rather than a good that they own.

But the transformation doesn’t end there. In a possible future world of autonomous cars, mobility itself may be the service, with consumers summoning vehicles on demand that adapt themselves on the fly to their preferences and history of previous travel. In this world, the physical units become interchangeable and it is the data that matters. You step into a car and it becomes yours by loading your personal profile and adapting itself to you. In this case, the car is the data. As Neo learns when he visits the Oracle: “There is no spoon” (only data).

If smart products are made from new materials – personal data – then they are also made in new ways. Digitally native products such as social media are inherently co-created. Consumers either explicitly provide content in the form of the videos and photos they upload directly or implicitly provide it through their records of searches, views and likes. Smart products, even future cars, will be similarly co-created as both manufacturers and consumers engage with digital platforms and data-driven product life-cycles.

This raises a further important question – how can consumers trust future products with their personal data? How can they be sure that products are safe and secure and that they can retain control of their own data?

This vision of co-creating trusted smart products lies at the heart of our beacon. We think that it applies to all manner of products, from high value goods to consumer goods to digital media experiences. We’re looking forward to exploring the possibilities further over the coming years.

Keynote talk by Sarah Brin, Strategic Partnerships Manager, Meow Wolf

The Smart Products Beacon is delighted to be supporting a keynote talk by Sarah Brin, Strategic Partnerships Manager, Meow Wolf at the Broadway on Monday 9th December, 6pm.

Sarah will speak about the creative challenges and questions surrounding the development of immersive experiences supported by emerging technologies.

An art historian and creative producer, Sarah specialises in previously unanticipated situations involving technology, the public, and organisational change and infrastructure. She’s created programs, exhibitions, and publications for organisations including Autodesk, SFMOMA, the British Council, MOCA Los Angeles, and the European Union. She cares about building just, sustainable and inviting things.

Sarah will cover key aspects of Meow Wolf’s creative process, recommendations for creatives working at the intersection of art and technology, and address questions regarding the responsibilities of cultural producers in times of dire political crisis.

Meow Wolf are a New Mexico-based arts and entertainment group creating immersive and interactive experiences that transport audiences of all ages into fantastic realms of story and exploration. This includes art installations, video and music production, and extended reality content.

Meow Wolf’s radical practice champions otherness, weirdness, radical inclusion and the power of creativity to change the world.

Book your tickets here.

Connected Everything II: Launch of Feasibility Studies Call

Connected Everything is the EPSRC-funded network focused on addressing the question “how do we support the future of manufacturing in the UK?”. In our first three years of funding, we supported the development of the Manufacturing Made Smarter proposal, including directly inputting into the definition of its key research challenges. We have now been awarded a further three years of funding to deliver a ‘network of networks’ that will accelerate multi-disciplinary collaboration, foster new collaborations between industry and academia, and tackle emerging challenges underpinning the UK academic community’s research in support of people, technologies, products and systems for digital manufacturing. Through a range of activities, including feasibility studies, networking, and thematic research, Connected Everything II (CEII) will bring together new teams within a multidisciplinary community to explore new ideas, demonstrate novel technologies in the context of digital manufacturing, and accelerate the impact of research into industry.

As one of our initial activities, we are launching our first funding call for feasibility studies at this event in London on the morning of 28 November. Places are limited so please register early.


My internship on the RoboClean project – Jane Slinger

My internship with the RoboClean team involved developing a custom Alexa skill to control Neato vacuum cleaners by voice. This will enable the other aspects of the project, which involve web systems and multi-agent systems, to link with the voice interface if required. I also helped run a study to find out how users would interact with the potential system in a lab environment.

I enjoyed the work as it was in an area that interested me and had some challenges in the code to overcome, leading me to learn more about how the systems worked in order to explore different solutions. It was nice to be able to build on the Alexa development skills I learnt in my third-year project, and to link to the Neato API through HTTP requests and a third-party library. This included setting up Account Linking on the Alexa skill and then adapting some of the code from the libraries to work with Node.js on the back end, instead of the front-end JavaScript-based methods that were already in place.
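For a flavour of what such a skill looks like, here is a minimal hypothetical sketch using the ASK SDK for Node.js (written in TypeScript; the intent name and the Neato helper are invented for illustration and are not the project’s actual code):

```typescript
import * as Alexa from 'ask-sdk-core';

// Hypothetical stand-in for the Neato cloud API call; the real API requires
// authenticated requests to Neato's services, made here with the OAuth token
// obtained via Account Linking.
async function startCleaning(accessToken: string): Promise<void> {
  console.log(`would start cleaning with token ${accessToken.slice(0, 8)}...`);
}

// Handles a hypothetical 'StartCleaningIntent': checks Account Linking, then
// asks the vacuum to clean.
const StartCleaningIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'StartCleaningIntent';
  },
  async handle(handlerInput) {
    // Account Linking gives the skill an OAuth token for the user's account.
    const token = handlerInput.requestEnvelope.context.System.user.accessToken;
    if (!token) {
      return handlerInput.responseBuilder
        .speak('Please link your robot account in the Alexa app first.')
        .withLinkAccountCard()
        .getResponse();
    }
    await startCleaning(token);
    return handlerInput.responseBuilder
      .speak('Okay, starting to clean.')
      .getResponse();
  },
};

// Entry point for the skill's AWS Lambda back end.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(StartCleaningIntentHandler)
  .lambda();
```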

Designing the interactions between the robot and the user was also very interesting, as I wanted to make sure that the system would prompt for the necessary information about the robot and the location to clean, without becoming annoying for the user.

The internship will help with my studies and future work, as it has given me experience of working with a research team, building on areas I had some experience in as well as expanding into other technical skills that I hadn’t used before and that will be useful in the future.

Written by Jane Slinger

I-CUBE call for participants

We are looking for participants for the I-CUBE project’s first study, taking place this November at the School of Computer Science on Jubilee Campus.

This initial call is for employees of the University and, more generally, members of the public. We will make a separate call for student participants. All participants must be 18 years old or over.

If you are interested in taking part, please use this Doodle link to select your appointment and participate in our study: https://doodle.com/meetme/qc/8tbM005BB7

The study’s task is to instruct a trainee ‘robot’ to sort a pile of clothes into separate washing loads according to a detailed list of tasks. This is to examine human interactions in a prescribed situation. There is a short questionnaire-interview to complete after the task.

You will be video- and audio-recorded while instructing and responding to the trainee ‘robot’, and audio-recorded for the interview.

The experiment is expected to take approximately 45 minutes of your time, and you will be reimbursed with £10 worth of shopping vouchers.