Charlotte Gray shares her experiences of working on RoboClean

I was introduced to the RoboClean project at Horizon whilst interning with the Advanced Data Analysis Centre. The project investigates the ways in which end-users interact with a robot vacuum cleaner and how a robot responds to user utterances; the aim being to inform its effective design and use within food factories.

I was invited to continue my internship for five more weeks within Horizon to help with the analysis of data collected through an elicitation study. Overall, this has been a really valuable and rewarding experience. Coming from an academic background in Sociology, I found that working closely with researchers specialising in Computer Science exposed me to different research aims and challenges than I had previously encountered. This has been insightful for me as it has not only helped me develop new skills in research analysis and interview techniques, but also allowed me to apply the principles of a range of research methods gained during my academic studies over the past two years to cutting-edge technological developments.

I have been responsible for transcribing participants’ audio data, analysing visual data, and creating a summary written report of participants’ interview responses. The report focused on the benefits, limitations, and disadvantages users experienced in the user-robot interactions. Attending a range of team meetings has also been beneficial in understanding interactions within a work environment, especially where individuals from a range of disciplines are working together. Combined with the skills I have learned in workload prioritisation and management, this has made me confident to face future work situations and dilemmas. Additionally, I have written literature reviews on the topic of human-robot interaction. Exploring these new topics has also helped me see how issues explored in Sociology are becoming increasingly influenced by the world of technology, for example, how individuals’ day-to-day lives are mediated by the introduction of robots to the workplace. Working alongside the multidisciplinary projects throughout Horizon has therefore also been interesting, clearly showing the benefit of collaborative projects in producing innovative findings.

Contributing to a research project which is aiming for publication in a research journal has been hugely rewarding and exciting, and has made the idea of working in a similar environment after graduating a lot more appealing.

Written by Charlotte Gray

Smart Products Beacon – Soonchild and Creative Captioning – Tailoring theatre productions for D/deaf audiences

For theatre audiences on a spectrum from D/deaf to hard of hearing, it is often difficult to keep up with performances. Even in cases where the performance is signed, or has captions, these accessibility additions often feel ‘tacked on’ and are typically located away from the action on stage, requiring audiences to split their attention between the performance and the support. Working with Red Earth Theatre, a production company with a long history of “Total communication” in which actors sign on stage, we have been developing ways to deliver accessibility right into the heart of a performance.

Red Earth’s new show, Soonchild, is touring the UK now, supported by funding from the University of Nottingham Smart Products Beacon, as well as the AHRC and the Arts Council. The show is captioned right across the set with beautiful, designed-in images, video and text delivered using new software developed at the Mixed Reality Laboratory.

The project team developed software called ‘captionomatic’ which uses the principles of projection mapping to turn whole theatre sets into projection surfaces. While projection mapping itself is by no means a new concept, our approach has been to both simplify the process and fit it into the wider theatre-tech ecology. Our innovation is to take a 3D model of the set – easily produced from the scale model of the set pieces typically built for any performance – and project it onto the real set, using a simple system of point-matching to correctly align the physical set with its digital twin. Once that 3D model is in place, we are then able to project images, video, text and whatever else onto those set pieces, respecting occlusion and creating an immersive canvas on which to display content.
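To give a flavour of what a point-matching alignment can look like, the sketch below (Python with NumPy; an illustrative sketch only, not the captionomatic source) estimates a 3×4 projection matrix from a handful of matched points between the 3D set model and the projector image, then uses it to work out where a given point of the set lands in the projector’s output. The example coordinates are made up.

```python
# Minimal sketch: align a digital twin of the set with the physical set by
# estimating a projector projection matrix from matched points (DLT).
# Not the production captionomatic code; coordinates below are illustrative.
import numpy as np

def estimate_projection(model_pts, pixel_pts):
    """Direct Linear Transform: (N,3) model points and (N,2) projector
    pixels (N >= 6, not all coplanar) -> 3x4 projection matrix."""
    rows = []
    for (x, y, z), (u, v) in zip(model_pts, pixel_pts):
        Xh = np.array([x, y, z, 1.0])
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    return Vt[-1].reshape(3, 4)          # smallest singular vector

def project(P, points):
    """Map (N,3) model-space points to projector pixels using P."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    proj = homo @ P.T
    return proj[:, :2] / proj[:, 2:3]    # perspective divide

# Example: six marked points on the set model (metres) and where they were
# clicked in the projector image (pixels) - purely invented numbers.
model = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 0],
                  [1, 0, 1.5], [1, 1, 1.5]], dtype=float)
pixels = np.array([[210, 820], [1610, 815], [1600, 330], [220, 335],
                   [905, 160], [915, 555]], dtype=float)
P = estimate_projection(model, pixels)
print(project(P, np.array([[1.0, 0.5, 0.75]])))  # pixel for a caption anchor
```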

We provide tools to read in the script from a Word document, produce a complete set of captions, and then generate the necessary cues, which can be fired by QLab (or similar theatrical management software) to drive our system. Theatre designers need only edit the target locations and the look and feel of the text to create beautiful captions around their sets. Different sets of captions can be delivered for different audiences as necessary – so some shows may be fully captioned while others may only have key points highlighted. We know from our research that different audiences have different preferences for how captions are delivered, and our system allows theatre companies to quickly and confidently make adjustments – even between performances of the same show. Setting up the system in a new location takes only a few minutes, something that is absolutely necessary for touring productions.
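As a rough illustration of the script-to-cues step, the hypothetical sketch below reads a script from a Word document (assuming one spoken line per paragraph), splits long lines into caption-sized chunks and writes out a numbered cue list. The file names, field names and caption length are assumptions for the example, not the project’s actual format.

```python
# Hypothetical sketch of the script-to-cues step; formats are illustrative.
import json
from docx import Document  # pip install python-docx

MAX_CHARS = 90  # rough per-caption length; real limits depend on the set

def split_caption(text, limit=MAX_CHARS):
    """Break a long line into caption-sized chunks on word boundaries."""
    words, chunks, current = text.split(), [], ""
    for w in words:
        candidate = (current + " " + w).strip()
        if len(candidate) > limit and current:
            chunks.append(current)
            current = w
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def script_to_cues(docx_path):
    """Turn each non-empty paragraph of the script into numbered cues."""
    cues, number = [], 1
    for para in Document(docx_path).paragraphs:
        line = para.text.strip()
        if not line:
            continue
        for chunk in split_caption(line):
            cues.append({"cue": number, "text": chunk, "target": "default"})
            number += 1
    return cues

if __name__ == "__main__":
    cues = script_to_cues("soonchild_script.docx")   # illustrative filename
    with open("caption_cues.json", "w") as f:
        json.dump(cues, f, indent=2)
```

A cue list like this could then be referenced both by the caption renderer (to know what to draw and where) and by the show’s cue stack, so that firing a cue during the performance triggers the corresponding caption.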

More broadly, this new approach to projection mapping allows substantial creativity with digital media in theatre that extends beyond accessibility. Critically, it substantially reduces the technical barrier to entry of including projection-mapped media in a show. Soonchild demonstrates this with some beautiful interactions between live actors and pre-recorded media – in this case shadow puppetry, projected on set as if live.

The software was demonstrated at an accessible theatre technology day at the Wolverhampton Arena Theatre, and plans for additional workshops and training are in the works. Although it was developed for Soonchild, the software has been designed to be easily applicable to many different types of show, and is open source and free, requiring only off-the-shelf hardware (a PC and a projector). We will also be making the hardware used in the show – a projector powerful enough to compete with theatrical lighting – available for other production companies to borrow and experiment with once Soonchild’s tour is complete.

This work was developed in partnership between Red Earth Theatre, The Mixed Reality Laboratory, The School of English and Department of Modern Languages and Cultures.

The project website is available here.

Soonchild will be performed at the Lakeside Arts Theatre in Nottingham on Sunday 24th November – more information can be found here.

AI Technologies for Allergen Detection and Smart Cleaning Food Production Workshop

In collaboration with the AI3 Science Discovery (AI3SD) and Internet of Food Things (IoFT) EPSRC Networks, the RoboClean team ran a workshop in London on the 17th of October. The focus of the workshop was to discuss how digital technologies such as AI, sensors and robotics can be used for enhanced allergen detection and factory cleaning within food production environments. The workshop was well attended by a range of stakeholders from industry, academia and organisations such as the Food Standards Agency.

The morning of the workshop had three speakers. Nik Watson from the University of Nottingham gave a talk on the future of factory cleaning, covering a range of research projects from the University which have developed new digital technologies to monitor and improve factory cleaning processes. The second talk was from AI3SD lead Jeremy Frey from the University of Southampton. Jeremy’s talk gave an introduction to AI and covered a range of new sensors which could be used to detect the presence of allergens in a variety of food products and environments. The final talk was delivered by Martin Peacock from Zimmer and Peacock, a company who develop and manufacture electrochemical sensors. Martin gave an introduction to the company and the technologies they develop before demonstrating how their sensor could be connected to an iPhone to determine the hotness of chilli sauce. Martin’s talk finished by discussing how electrochemical sensors could be used to detect allergens within a factory environment. The afternoon of the workshop focused on group discussions on the following four topics – all related to allergen detection and cleaning within food production:

  • Data collection, analysis and use
  • Ethical issues
  • Cleaning robots
  • Sensors

Each group had a lead; however, delegates moved between tables so they could contribute to more than one discussion. At the end of the workshop the lead from each group reported back with the main discussion points covered by the delegates. The delegates at the ‘robotics’ table reported that robots would play a large role in the future of factory cleaning, as they would free up factory operators to spend time on more complicated tasks. The group felt that the design of the robots was essential and suggested that new factories should also be designed differently to facilitate robot cleaning more easily. The group also thought that effective communication with the robot was a key issue which needed further research. The ‘sensors’ group reported that any new sensors used to detect allergens or levels of cleanliness would need to fit into existing regulations and practices, but would be welcomed by the industry, especially if they could detect allergens or bacteria in real time. The ‘data’ group reported that there was a need for data standards relevant to industrial challenges, and also a need for open-access data to enable the development of suitable analysis and visualisation methods. The ‘ethics’ group discussed numerous key topics including bias, uncertainty, transparency, augmented intelligence and the objectivity of AI.

HALFWAY TO THE FUTURE

The Smart Products Beacon is delighted to be supporting Halfway to the Future – a symposium at the Albert Hall Conference Centre in Nottingham on the 19th & 20th November, exploring the past, present and future of HCI and design-based research and marking the 20th anniversary of the Mixed Reality Lab at the University of Nottingham.

The symposium will address a range of key themes with dedicated single-track panels, each anchored by prominent keynote speakers reflecting upon one of their influential works in light of subsequent developments and present concerns. This will be followed by presentations of current related research, short future-oriented provocations, and a panel discussion/Q&A. The symposium will also incorporate an exhibition of interactive works and a poster session.

Symposium programme

I-CUBE call for participants

We are looking for participants for the I-CUBE project’s first study, taking place this November at the School of Computer Science on Jubilee Campus.

This initial call is for employees of the University and, more generally, members of the public. We will make a separate call for student participants. All participants need to be 18 years old or over.

If you are interested in taking part please use this Doodle link: https://doodle.com/meetme/qc/8tbM005BB7 to select your appointment and participate in our study.

The study’s task is to instruct a trainee ‘robot’ to sort a pile of clothes into separate washing loads according to a detailed list of tasks. This is to examine human interactions in a prescribed situation. There is a short questionnaire-interview to complete after the task.

You will be video- and audio-recorded while instructing and responding to the trainee ‘robot’, and audio-recorded for the interview.

The experiment is expected to take approximately 45 minutes of your time and you will be reimbursed with £10 worth of shopping vouchers.

I-CUBE

I-CUBE is developing new methods to enable collaborative robots (co-bots) to learn in a more naturalistic manner, using sensors to interpret the actions, language and expressions of their human collaborators. Advanced algorithms for decision-making, combined with reinforcement learning techniques, will enable more effective, productivity-enhancing human-robot cooperation for shared tasks.

Our first demonstrator project will show how a small industrial co-bot (a Universal Robots UR5) can be directed to learn how to sort laundry in preparation for washing, according to the human collaborators’ preferences, as given by natural language and gesture. Computer vision and machine learning techniques will be integrated within the demonstrator for gesture recognition, as well as recognition of the colour of the clothes and of the baskets in which to place the items of clothing.
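As a simple illustration of the kind of colour recognition involved (a sketch under assumed thresholds, not the I-CUBE implementation), the snippet below labels the dominant colour of a cropped clothing image by voting over hue in HSV space; the colour ranges, thresholds and file name are illustrative assumptions.

```python
# Illustrative sketch: label the dominant colour of a segmented clothing
# item by counting pixels per hue band in HSV space. Not the I-CUBE code.
import cv2
import numpy as np

# Rough hue bands (OpenCV hue runs 0-179); purely illustrative values.
HUE_RANGES = {"red": [(0, 10), (170, 179)], "yellow": [(20, 35)],
              "green": [(36, 85)], "blue": [(90, 130)]}

def dominant_colour(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue, sat, val = cv2.split(hsv)
    mask = (sat > 60) & (val > 60)          # ignore washed-out / dark pixels
    if mask.sum() < 100:
        return "white/grey/black"           # not enough saturated pixels
    counts = {name: sum(((hue >= lo) & (hue <= hi) & mask).sum()
                        for lo, hi in ranges)
              for name, ranges in HUE_RANGES.items()}
    return max(counts, key=counts.get)

if __name__ == "__main__":
    img = cv2.imread("cloth_crop.png")       # hypothetical cropped item image
    if img is not None:
        print(dominant_colour(img))
```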

We are currently preparing for our first study, with the intention of capturing the language and gestures that humans use whilst directing a co-bot to sort laundry. To do this we will use a Wizard of Oz method, where a human will fulfil the role of the co-bot ‘brain’ whilst being hidden from the participant. This will allow participants to express themselves naturally while the co-bot enacts their instructions correctly, or not. Errors in the co-bot’s responses are expected to elicit natural corrective reactions from the human. This natural language and these gestures will provide a corpus for the co-bot to use in its learning, as well as assist in improving the co-bot’s sense of its environment, the objects in it and their relevance to it.

Food Design for Future Dining (FD2)

Food in the digital era is being radically transformed. The supply chain is reshaping, with distribution networks evolving and online retailers providing a significant and ever-growing part of the market. At the same time, consumers spend longer online, which impacts their food preferences and consumption practices. How we leverage digital technology to deliver next-generation food systems capable of delivering sustainable, healthy and safe foods remains an open question. A significant part of the work in using digital technologies around food currently focuses on smart precision agriculture, but less well explored is the significant potential digital technologies have to radically reconfigure food supply chains and the way consumers interact with food.

Food Design for Future Dining, or FD2, is exploring how digital technologies can be used to enhance the food consumption experience, by demonstrating prototypical hybrid foods – foodstuffs that are created to provide a novel physical and digital eating experience, and that are enhanced by the inclusion of relevant provenance information. As we unpack the design space, we’re seeing that there is great potential, and complexity, in how food and data speak to a wide range of consumer values.

The core team brings together expertise in Mixed Reality and Human-Computer Interaction from Computer Science (Martin Flintham, Dimitri Darzentas, Emily Thorn), Food Process Engineering from Engineering (Serafim Bakalis), and Food Legislation and Compliance from Law (Richard Hyde). We’re also working with Blanch and Shock, a London based catering and design SME who are delivering cutting-edge culinary expertise.

We have four activities underway, broadly aligned with different elements of food consumption with the consumer in mind.

Enhancing the consumer experience with digital footprints. The French app https://yuka.io/en/ is making waves by using data to alert consumers as to whether products are good or should be avoided. Our first demonstrator is a digitally augmented cake gift that uses augmented reality to provide two kinds of provenance to enhance the cake consumption experience. Functional or utilitarian information such as nutritional or allergen data – what we might think of as hard provenance – is, as with Yuka, presented by an app. We are also exploring soft provenance: rich narrative data, such as stories about the ingredients and about how the cake was made and decorated and by whom, that speak to a broader set of values. Moving forward we’ve got our sights set on chocolate.

Enhancing product development. We’re building on some work we began with Qian Yang in UoN Sensory Sciences, which is looking to enhance the validity of consumer testing methodologies in the lab. By increasing the contextual validity of a lab study, we can reduce the failure rate of new products. Here we turn again to new immersive technologies to change the consumption experience while still allowing naturalistic food consumption. Using Augmented Virtuality we’re taking consumer panels out of the lab and into a variety of virtual environments to see how they can improve validity, or ultimately provide a radical new dining experience.

Enhancing food-as-a-service. Here we’re considering how food can be manufactured to be more relevant, more personalised or more value sensitive in the first place. We’ve finished designing a set of food development ideation cards that articulate not just flavour and physical properties, but also values, scenarios and contexts. The concepts that they embody are forming the basis for a technology probe into customised meal preparation, combined with a variety of non-soy miso recipes created by Blanch and Shock.

Finally, we are building a community of UoN academics in the broad area of “Smart Foods” and identifying key external partners to collaborate with. We will utilise existing UoN investment, e.g. through the Beacons, to create a critical mass that will enhance collaboration and enable us to respond to future funding opportunities. In the immediate term, the team presented a poster at the Connected Everything conference in June and has also been demoing the work to various industry partners. In September, Serafim Bakalis spoke at ICEF 13, the International Congress of Food Engineering, making the case for a consumer focus on digital in the food domain.

AI3SD & IoFT AI Technologies for Allergen Detection and Smart Cleaning

This event is brought to you by the AI3SD (Artificial Intelligence and Augmented Intelligence for Automated Investigations for Scientific Discovery) and the IoFT (Internet of Food things) Networks.

As food allergies and intolerances are on the rise, allergen detection and awareness are becoming more critical than ever at all stages of the food production pipeline: from cleaning the factories and kitchens the food is produced in, to detecting allergens in food, right through to creating allergen-free food in the future. Unsurprisingly, research has turned to technological solutions to combat this issue. This workshop is centred around the use of Artificial Intelligence in Allergen Detection and Smart Cleaning within Food Production – research areas that co-align between AI3SD and IoFT. The workshop will begin with some thought-provoking talks to report on the current state of affairs and consider where we need to be going in the future. There are six main working group topics identified for this workshop, and talks will be given on the different aspects that need to be considered with respect to allergen detection and smart cleaning before we break into the working groups for more formal discussions. There are multiple sessions for the working group discussions, so there will be opportunities to take part in as many group discussions as you wish. The workshop will be formally recorded and the suggestions for going forward will be captured in a position paper. Lunch will be provided and the workshop will end with networking drinks.

Programme

The programme for the day is as follows:

  • 10:00-10:30: Registration & Coffee
  • 10:30-10:45: Welcome from Professor Jeremy Frey & Professor Simon Pearson
  • 10:45-11:15: Smart Cleaning & Robots in Factories – Dr Nicholas Watson
  • 11:15-11:45: Speaker TBC
  • 11:45-12:15: TBC – Professor Jeremy Frey
  • 12:15-13:00: Lunch
  • 13:00-13:15: Speaker TBC
  • 13:15-13:30: AI in Allergen Detection – Steve Brewer
  • 13:30-14:30: Working Group Discussions
  • 14:30-14:45: Coffee Break
  • 14:45-15:30: Working Group Discussions
  • 15:30-16:00: Working Groups Report Back, Decide on Next Steps
  • 16:00-17:00: Networking Drinks

Register here

Email – info@ai3sd.org
Twitter – @AISciNet
LinkedIn – https://www.linkedin.com/in/ai3sd
LinkedIn Interest Group – AI3 Science Network Interest Group


RoboClean update 11/6/2019

The RoboClean project is investigating the work of cleaning factory floors, and the potential for robotic cleaners to work alongside—and with—human operators to ensure factories meet the strict industry hygiene guidance. These robots will use the latest sensors to also detect the presence of food allergens, allowing factory managers to avoid cross-contamination of products, especially in batch-driven processes.

The project will deliver and evaluate an interactive connected platform to enable novel human-robot collaboration and IoT smart sensor data collection in food factories. See our prior blog post for more information about the project. In this post we would like to present an update of our progress.

We are engaging with local SMEs and multinational food manufacturers to understand more about the sorts of environments in which we envisage these technologies being deployed. Through interviews, workshops, and factory visits we intend to explicate the requirements and challenges—both legal and socio-technical—for deploying robots to complex environments such as factories. These visits are now ongoing and their outcomes will inform the project’s design work. This work is being led by Martin Porcheron in Computer Science.

Roberto Santos, from the University of Nottingham Digital Research Service (DRS), has joined the project and is collaborating with Carolina Fuentes from the Horizon Digital Economy Research Institute on the development of our demonstrator robot platform. This platform, when complete, will support the autonomous and manual management of robot teams as well as individual robots. We are also currently in the process of developing a number of elicitation studies to understand the language and sorts of commands factory workers would use to direct and coordinate robots. Our focus at this stage is to deliver a platform suitable to control one robot at a time, and this is already taking shape with elicitation studies supporting this development process. Brian Logan from the Agents Lab in Computer Science is working with the team to ensure the platform design is suited to our multi-agent collaboration goals that will be delivered in later stages of the project.

Ahmed Rady from the Faculty of Engineering has also recently joined the project and is developing the processes for the smart sensors to detect various allergens, including collecting data that will be vital for the detection of these allergens. One of the biggest challenges facing manufacturers is the cross-contamination of allergens within the manufacturing environment, and cleaning is a critical step in preventing this. By deploying sensors with the robots, we will be able to detect and potentially prevent food safety events before product leaves the factory.

Overall, the team is already working towards its deliverables and is looking forward to a successful 2019.

Finally, the team will be presenting a poster at the ConnectedEverything 2019 conference in June, where we will be on hand to discuss the project’s objectives, approach, outcomes, and potential collaborations. We think this is a great opportunity to connect with potential partners in the manufacturing industry and look forward to seeing you there.

Written by Martin Porcheron


Halfway to the Future – A symposium in Nottingham, UK from 19th-20th November 2019

The Halfway to the Future symposium is a two-day event in the city of Nottingham, UK exploring the past, present, and future of HCI and design-based research. The symposium will take place on the 19th and 20th November 2019 at the Albert Hall Conference Centre.

The symposium will address a range of key themes with dedicated single-track panels, each anchored by prominent keynote speakers reflecting upon one of their influential works in light of subsequent developments and present concerns. This will be followed by presentations of current related research, short future-oriented provocations, and a panel discussion/Q&A. The symposium will also incorporate an exhibition of interactive works and a poster session. All papers will be peer-reviewed under a double-blind process; some will be selected for panels, while others will be invited to present their work in poster format. The call for papers is now open.

Take a look at the symposium Agenda.

If you would like to keep up to date with the symposium, register for updates here.

If you have any questions, please don’t hesitate to contact the organising committee. We are currently putting together an exciting programme of talks and demos, with all keynote speakers confirmed. We look forward to your submissions!

We would like to thank the University of Nottingham Faculty of Science and ACM SIGCHI for generously sponsoring the symposium.

Twitter: @httfsymposium