I-CUBE is developing new methods to enable collaborative robots (co-bots) to learn in a more naturalistic manner, using sensors to interpret the actions, language and expressions of their human collaborators. Advanced algorithms for decision-making, combined with reinforcement learning techniques, will enable more effective, productivity-enhancing human-robot cooperation for shared tasks.

Our first demonstrator project will show how a small industrial co-bot (a Universal Robots UR5) can be directed to learn how to sort laundry in preparation for washing, according to the human collaborators’ preferences, as given by natural language and gesture. Computer vision and machine learning techniques will be integrated within the demonstrator for gesture recognition, as well as recognition of the colour of the clothes and of the baskets in which to place the items of clothing.

We are currently preparing for our first study with the intention of capturing the language and gestures that humans use whilst directing a co-bot to sort laundry. To do this we will use a Wizard of Oz method, where a human will fulfil the role of the co-bot ‘brain’ whilst being hidden from the participant. This will allow participants to express themselves naturally while the co-bot enacts their instructions correctly, or not. Errors in the co-bot’s responses are expected to elicit natural corrective reactions from the human. This natural language and these gestures will provide a corpus for the co-bot to use in its learning, as well as helping to improve the co-bot’s sense of its environment, the objects in it and their relevance to it.

Food Design for Future Dining (FD2)

Food in the digital era is being radically transformed. The supply chain is reshaping, with distribution networks evolving and online retailers providing a significant and ever-growing part of the market. At the same time, consumers spend longer online, which impacts their food preferences and consumption practices. How we leverage digital technology to deliver next-generation food systems capable of providing sustainable, healthy and safe foods remains an open question. A significant part of the work using digital technologies around food currently focuses on smart precision agriculture. Less well explored, however, is the significant potential digital technologies have to radically reconfigure food supply chains and the way consumers interact with foods.

Food Design for Future Dining, or FD2, is exploring how digital technologies can be used to enhance the food consumption experience by demonstrating prototypical hybrid foods: foodstuffs that are created to provide a novel physical and digital eating experience, and that are enhanced by the inclusion of relevant provenance information. As we unpack the design space, we’re seeing that there is great potential, and complexity, in how food and data speak to a wide range of consumer values.

The core team brings together expertise in Mixed Reality and Human-Computer Interaction from Computer Science (Martin Flintham, Dimitri Darzentas, Emily Thorn), Food Process Engineering from Engineering (Serafim Bakalis), and Food Legislation and Compliance from Law (Richard Hyde). We’re also working with Blanch and Shock, a London-based catering and design SME who are delivering cutting-edge culinary expertise.

We have four activities underway, each broadly aligned with a different element of food consumption and with the consumer in mind.

Enhancing the consumer experience with digital footprints. The French app https://yuka.io/en/ is making waves by using data to alert consumers as to whether products are good or should be avoided. Our first demonstrator is a digitally augmented cake gift that uses augmented reality to provide two kinds of provenance to enhance the cake consumption experience. Functional or utilitarian information, such as nutritional or allergen data (what we might think of as hard provenance), is, as with Yuka, presented by an app. We are also exploring soft provenance: rich narrative data, such as stories about the ingredients and about how the cake was made and decorated and by whom, that speak to a broader set of values. Moving forward we’ve got our sights set on chocolate.
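To make the hard/soft distinction concrete, here is a minimal sketch in Python of one way the two kinds of provenance might be represented for a single augmented product. The field names and example values are purely illustrative assumptions, not the demonstrator’s actual data model.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class HardProvenance:
    """Functional, utilitarian facts: the kind of data an app like Yuka surfaces."""
    nutrition_per_100g: Dict[str, float]
    allergens: List[str]

@dataclass
class SoftProvenance:
    """Rich narrative data that speaks to a broader set of consumer values."""
    ingredient_stories: List[str]
    maker_story: str

@dataclass
class AugmentedProduct:
    name: str
    hard: HardProvenance
    soft: SoftProvenance

# Hypothetical example values for illustration only.
cake = AugmentedProduct(
    name="Digitally augmented cake gift",
    hard=HardProvenance(
        nutrition_per_100g={"energy_kcal": 390.0, "sugar_g": 34.0},
        allergens=["egg", "wheat", "milk"],
    ),
    soft=SoftProvenance(
        ingredient_stories=["Where the vanilla was grown, and by whom."],
        maker_story="How the cake was made and decorated, and by whom.",
    ),
)
print(cake.hard.allergens, len(cake.soft.ingredient_stories))
```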

Enhancing product development. We’re building on work we began with Qian Yang in UoN Sensory Sciences, which is looking to enhance the validity of consumer testing methodologies in the lab. By increasing the contextual validity of a lab study, we can reduce the failure rate of new products. Here we turn again to new immersive technologies, both to change the consumption experience and to allow naturalistic food consumption. Using Augmented Virtuality we’re taking consumer panels out of the lab and into a variety of virtual environments to see how these can improve validity, or ultimately provide a radical new dining experience.

Enhancing food-as-a-service. Here we’re considering how food can be manufactured to be more relevant, more personalised or more value-sensitive in the first place. We’ve finished designing a set of food development ideation cards that articulate not just flavour and physical properties, but also values, scenarios and contexts. The concepts that they embody are forming the basis for a technology probe into customised meal preparation, combined with a variety of non-soy miso recipes created by Blanch and Shock.

Finally, we are building a community of UoN academics in the broad area of “Smart Foods” and identifying key external partners to collaborate with. We will utilise existing UoN investment, e.g. through the Beacons, to create a critical mass that will enhance collaboration and enable us to respond to future funding opportunities. In the immediate term, the team presented a poster at the Connected Everything conference in June, and has also been demonstrating the work to various industry partners. In September, Serafim Bakalis spoke at ICEF 13, the International Congress on Engineering and Food, making the case for a consumer focus on digital in the food domain.

AI3SD & IoFT AI Technologies for Allergen Detection and Smart Cleaning

This event is brought to you by the AI3SD (Artificial Intelligence and Augmented Intelligence for Automated Investigations for Scientific Discovery) and IoFT (Internet of Food Things) Networks.

As food allergies and intolerances are on the rise, allergen detection and awareness are becoming more critical than ever at all stages of the food production pipeline: from cleaning the factories and kitchens in which food is produced, to detecting allergens in food, right through to creating allergen-free food in the future. Unsurprisingly, research has turned to technological solutions to combat this issue. This workshop is centred around the use of Artificial Intelligence in allergen detection and smart cleaning within food production; research areas that align across both AI3SD and IoFT. The workshop will begin with some thought-provoking talks that report on the current state of affairs and consider where we need to be going in the future. Six main working group topics have been identified for this workshop, and talks will be given on the different aspects that need to be considered with respect to allergen detection and smart cleaning before we break into the working groups for more formal discussions. There are multiple sessions for the working group discussions, so there will be opportunities to take part in as many group discussions as you wish. The workshop will be formally recorded and the suggestions for going forward will be captured in a position paper. Lunch will be provided and the workshop will end with networking drinks.

Programme

The programme for the day is as follows:

  • 10:00-10:30: Registration & Coffee
  • 10:30-10:45: Welcome from Professor Jeremy Frey & Professor Simon Pearson
  • 10:45-11:15: Smart Cleaning & Robots in Factories – Dr Nicholas Watson
  • 11:15-11:45: Speaker TBC
  • 11:45-12:15: TBC – Professor Jeremy Frey
  • 12:15-13:00: Lunch
  • 13:00-13:15: Speaker TBC
  • 13:15-13:30: AI in Allergen Detection – Steve Brewer
  • 13:30-14:30: Working Group Discussions
  • 14:30-14:45: Coffee Break
  • 14:45-15:30: Working Group Discussions
  • 15:30-16:00: Working Groups Report Back, Decide on Next Steps
  • 16:00-17:00: Networking Drinks

Register here

Email – info@ai3sd.org
Twitter – @AISciNet
LinkedIn – https://www.linkedin.com/in/ai3sd
LinkedIn Interest Group – AI3 Science Network Interest Group

 

RoboClean update 11/6/2019

The RoboClean project is investigating the work of cleaning factory floors, and the potential for robotic cleaners to work alongside—and with—human operators to ensure factories meet the strict industry hygiene guidance. These robots will use the latest sensors to also detect the presence of food allergens, allowing factory managers to avoid cross-contamination of products, especially in batch-driven processes.

The project will deliver and evaluate an interactive connected platform to enable novel human-robot collaboration and IoT smart sensor data collection in food factories. See our prior blog post for more information about the project. In this post we would like to present an update of our progress.

We are engaging with local SMEs and multinational food manufacturers to understand more about the sorts of environments in which we envisage these technologies will be deployed. Through interviews, workshops, and factory visits we intend to explicate the requirements and challenges, both legal and socio-technical, of deploying robots in complex environments such as factories. These visits are now ongoing and their outcomes will inform the project’s design work. This work is being led by Martin Porcheron in Computer Science.

Roberto Santos, from the University of Nottingham Digital Research Service (DRS), has joined the project and is collaborating with Carolina Fuentes from the Horizon Digital Economy Research Institute on the development of our demonstrator robot platform. This platform, when complete, will support both autonomous and manual management of robot teams as well as individual robots. We are also currently developing a number of elicitation studies to understand the language and the sorts of commands factory workers would use to direct and coordinate robots. Our focus at this stage is to deliver a platform suitable for controlling one robot at a time (see the sketch below), and this is already taking shape, with the elicitation studies supporting the development process. Brian Logan from the Agents Lab in Computer Science is working with the team to ensure the platform design is suited to our multi-agent collaboration goals, which will be delivered in later stages of the project.
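As an illustration of this direction of travel, here is a minimal sketch in Python of a hub that dispatches cleaning tasks to a single robot while keeping a registry of robots, so that team-level coordination can be layered on later. Class and method names are hypothetical and do not reflect the platform’s actual API.

```python
# Illustrative sketch only: drives one robot at a time, but keeps a registry
# of robots so that multi-robot coordination can be added in later stages.

class CleaningRobot:
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.busy = False

    def clean(self, zone):
        self.busy = True
        print(f"{self.robot_id} cleaning zone: {zone}")
        self.busy = False

class RobotHub:
    def __init__(self):
        self.robots = []  # registry sized for future robot teams

    def register(self, robot):
        self.robots.append(robot)

    def dispatch(self, zone):
        """Single-robot dispatch: send the first idle robot to the requested zone."""
        for robot in self.robots:
            if not robot.busy:
                robot.clean(zone)
                return robot.robot_id
        raise RuntimeError("No idle robot available")

hub = RobotHub()
hub.register(CleaningRobot("cleaner-1"))
hub.dispatch("packing line A")
```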

Ahmed Rady from the Faculty of Engineering has also recently joined the project and is developing the processes for the smart sensors to detect various allergens, including collecting data that will be vital for the detection of these allergens. One of the biggest challenges facing manufacturers is the cross-contamination of allergens within the manufacturing environment, and cleaning is a critical step in preventing this. By deploying sensors with the robots, we will be able to detect, and potentially prevent, food safety events before product leaves the factory.

Overall, the team is already working towards developing deliverables and is looking forward to a successful 2019.

Finally, the team will be presenting a poster at the ConnectedEverything 2019 conference in June, where we will be on hand to discuss the project’s objectives, approach, outcomes, and potential collaborations. We think this is a great opportunity to connect with potential partners in the manufacturing industry and look forward to seeing you there.

Written by Martin Porcheron

 

Halfway to the Future – A symposium in Nottingham, UK from 19th-20th November 2019

The Halfway to the Future symposium is a two-day event in the city of Nottingham, UK exploring the past, present, and future of HCI and design-based research. The symposium will take place on the 19th and 20th November 2019 at the Albert Hall Conference Centre.

The symposium will address a range of key themes with dedicated single-track panels, each anchored by a prominent keynote speaker reflecting upon one of their influential works in light of subsequent developments and present concerns. This will be followed by presentations of current related research, short future-oriented provocations, and a panel discussion/Q&A. The symposium will also incorporate an exhibition of interactive works and a poster session. All papers will be peer reviewed under a double-blind process; some papers will be selected for panels, while others will be invited to present their work in poster format. The call for papers is now open.

Take a look at the symposium Agenda.

If you would like to keep up to date with the symposium, register for updates here.

If you have any questions, please don’t hesitate to contact the organising committee. We are currently putting together an exciting programme of talks and demos, with all keynote speakers confirmed. We look forward to your submissions!

We would like to thank the University of Nottingham Faculty of Science and ACM SIGCHI for generously sponsoring the symposium.

Twitter: @httfsymposium 

Smart Products Beacon Gathering

It’s been a busy year for the Smart Products Beacon during which we’ve refined our research agenda and vision, launched demonstrator projects, developed our business case for the University and secured funding for initial projects, including EPSRC funding to establish a Centre for Doctoral Training for at least 65 new PhD students.

Today (8th April 2019), we are holding a one day gathering to learn more and explore how people can get involved.

The event commenced with an overview of RoboClean, Food Design for Future Dining (FD)2 and Industrial Co-bots Understanding Behaviour (ICUBE).

A breakout session led to some useful research-led discussion on emerging themes, including:

  • Process planning for highly-customised products
  • Social technological interactions and implications
  • Data-enabled smart optimization
  • Digital Technology, Manufacturing, and Productivity
  • Smart Musical Instruments

A session of contributed paper presentations followed a short lunch break and poster session. These included:

  • User-Experience Design for Future Smart Vehicles
  • Managing attractiveness and tensions in digitally enhanced business environments
  • Locating the Smart Product Beacon: understanding the place based-agenda in RCUK funding (or, why economic geography matters)
  • Demonstrating a framework to investigate combined packing and scheduling problems
  • Peeling away the layers: toward a metaphor of foam to analyse composed digital-physical products
  • Physical-Digital Alignment

The day concluded with a second breakout session, an opportunity to address and plan key beacon activities for the coming year.

Industrial Cobots Understanding Behaviour (iCUBE)

Collaborative robots, or co-bots, are robots that work collaboratively with humans in a productivity-enhancing process, most often associated with the manufacturing and healthcare domains. Despite the aim to collaborate, co-bots lack the ability to sense humans and their behaviour appropriately. Instead, robots rely on mechanical, hierarchical instructions given explicitly by the human, rather than drawing on more natural cues such as pose, expression, and language to determine their behaviour. In turn, humans do not understand how the robot makes its decisions.

Co-bots also do not utilise human behaviour to learn tasks implicitly, and advances in online and reinforcement learning are needed to enable this. In iCUBE we will address these issues by endowing industrial co-bots with the ability to sense and interpret the actions, language, and expressions of the person with whom they collaborate. We will show the new capabilities of such co-bots with a demonstrator in which people instruct a robot how to sort laundry. A human will teach the co-bot to sort laundry by showing it what to do, telling it what to do, or simply indicating their satisfaction with the co-bot’s action through their expression.
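To illustrate the last of these, here is a minimal sketch in Python of how a satisfaction signal derived from a person’s expression could drive a simple bandit-style learning update for the sorting task. This is purely illustrative, with assumed names and reward values; it is not the learning algorithm used in iCUBE.

```python
import random
from collections import defaultdict

# Hypothetical sketch: learning a laundry-sorting preference from satisfaction.
# "States" are item colours, "actions" are baskets, and the reward is a
# satisfaction score in [-1, 1] that we assume can be derived from the
# person's expression. A plain tabular, bandit-style update, nothing more.

class SortingPolicy:
    def __init__(self, baskets, learning_rate=0.1, epsilon=0.2):
        self.baskets = baskets
        self.lr = learning_rate
        self.epsilon = epsilon
        # (item_colour, basket) -> running estimate of satisfaction
        self.value = defaultdict(float)

    def choose_basket(self, item_colour):
        # Occasionally explore a random basket, otherwise pick the best so far.
        if random.random() < self.epsilon:
            return random.choice(self.baskets)
        return max(self.baskets, key=lambda b: self.value[(item_colour, b)])

    def update(self, item_colour, basket, satisfaction):
        # Move the estimate towards the observed satisfaction signal.
        key = (item_colour, basket)
        self.value[key] += self.lr * (satisfaction - self.value[key])

policy = SortingPolicy(baskets=["whites", "colours", "delicates"])
basket = policy.choose_basket("red")
policy.update("red", basket, satisfaction=-1.0)    # e.g. a frown: wrong basket
policy.update("red", "colours", satisfaction=1.0)  # e.g. a smile: right basket
```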

iCUBE is currently integrating the BlueMax human sensing component, which will allow the co-bot to sense the user’s facial expressions and body gestures. Soon we will add Natural Language Processing (NLP) tools so that people can speak with the co-bot. Such tools will include Automatic Speech Recognition (ASR), dialogue management, and Text To Speech (TTS).
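As a rough picture of how these pieces could fit together once integrated, here is a minimal sketch in Python of a single interaction turn that combines a sensed expression and gesture with an ASR transcript, maps them to a co-bot action, and produces a spoken reply. All interfaces shown are placeholders; the real BlueMax, ASR, dialogue management and TTS APIs are not represented here.

```python
from dataclasses import dataclass

# Placeholder sketch of one interaction turn. The Observation fields stand in
# for outputs of the human sensing (expression, gesture) and ASR (utterance)
# components; interpret() stands in for dialogue management, respond() for TTS.

@dataclass
class Observation:
    utterance: str   # text from Automatic Speech Recognition (ASR)
    expression: str  # e.g. "positive", "negative", "neutral"
    gesture: str     # e.g. "point_left", "none"

def interpret(obs: Observation) -> str:
    """Stand-in dialogue manager: map a multimodal observation to a co-bot action."""
    if "red" in obs.utterance and obs.gesture == "point_left":
        return "place_in_left_basket"
    if obs.expression == "negative":
        return "undo_last_action"
    return "ask_for_clarification"

def respond(action: str) -> str:
    """Stand-in for speech output: the reply a TTS component would speak."""
    replies = {
        "place_in_left_basket": "Okay, red items go in the left basket.",
        "undo_last_action": "Sorry, I'll put that back.",
        "ask_for_clarification": "Which basket should this go in?",
    }
    return replies[action]

obs = Observation(utterance="the red shirt goes there",
                  expression="neutral", gesture="point_left")
print(respond(interpret(obs)))  # -> "Okay, red items go in the left basket."
```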

iCUBE is a Smart Products Beacon-funded project, bringing together researchers from the School of Computer Science, the Institute of Advanced Manufacturing and Human Factors Research Group of the Faculty of Engineering, Horizon Research Institute, and BlueSkeye AI Ltd.

Key people involved:

  • Michel Valstar, School of Computer Science
  • David Branson, Institute of Advanced Manufacturing
  • Sue Cobb, Human Factors Research
  • Mercedes Torres Torres, School of Computer Science/Horizon
  • Adam Zhou, School of Computer Science
  • Pepita Stringer, Horizon
  • Dominic Price, Horizon
  • Timur Almaev, BlueSkeye AI

The main laundry sorting study will happen after summer, starting in September, with results expected in March 2020. A series of demonstrator activities will be held in 2020 where people will be able to interact with the co-bot.

RoboClean: Human Robot Collaboration for Allergen-Aware Factory Cleaning

Human-Robot collaboration is expected to increase in the coming years [Elprama, 2017]. The RoboClean project will investigate the potential of human-robot collaboration, integrated with IoT sensors, for cleaning and allergen detection on a factory floor. We will be exploring the impact of introducing a robot that can work “hand in hand”, or rather “hand to robo-hand”, with a factory worker, focusing on assisting them rather than replacing their role.

RoboClean targets the food industry, supporting cleaning and safety processes in an industry where the presence of an allergen is a safety risk, and one whose relevance is underlined by the sector’s annual contribution of £28 billion to the economy.

Early stages of the project involved a visit to a bread factory in Birmingham to learn about current work practices in food factory cleaning and to gain a better understanding of the social context of food factories. In addition, we built and evaluated a “Human-Robot Collaborative-Hub”, effectively the “brain”, combining a voice interface and a robot cleaner. This “hub” will store data on robot activity and identify zones to be cleaned through voice interface control. This activity enabled us to identify what functions and procedures (API) were required to control the robot.

The next stage of the project will involve designing and developing the architecture of the “Human-Robot Collaborative-Hub” as a bridge between cleaning robots and different user interfaces, exploring how robot interaction in different areas can be controlled through voice interfaces, for example: “Robot, clean around the round table” or “Robot, clean around the fridge”. A sketch of this kind of bridging is given below. We are also working on integrating a sensor capable of detecting allergens, with the aim of directing the robot cleaner to specific locations using data from the “HR-Collaborative-Hub”.
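To give a flavour of the voice-to-robot bridging described above, here is a minimal sketch in Python that maps an utterance of that form to a named cleaning zone. The zone vocabulary and command grammar are illustrative assumptions, not the hub’s actual design.

```python
import re

# Illustrative sketch: mapping a spoken cleaning instruction to a named zone.
KNOWN_ZONES = {"round table", "fridge", "packing line"}

def parse_cleaning_command(utterance: str):
    """Map 'Robot, clean around the round table' to a target zone, if recognised."""
    match = re.search(r"clean (?:around )?(?:the )?(.+)", utterance.lower())
    if not match:
        return None
    zone = match.group(1).strip().rstrip(".")
    return zone if zone in KNOWN_ZONES else None

for command in ["Robot, clean around the round table",
                "Robot, clean around the fridge"]:
    print(command, "->", parse_cleaning_command(command))
```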

Introducing our Demonstrator projects

We have adopted three demonstrator projects to kick off the Campaign.

Projects are part-funded by the University of Nottingham Smart Products Beacon of Excellence.

RoboClean: Human Robot Collaboration for Allergen-Aware Factory Cleaning 

In food and drink manufacturing, a third of working time is spent cleaning, which significantly affects productivity and efficiency.  This project aims to understand and address the industry need for cleaning support technologies by developing and deploying human robot collaboration to assist in the cleaning of factories and detect the unwanted presence of allergens to prevent food safety events. 

 Food Design for Future Dining (FD)2

Traditionally, “food design” has been an area of expertise for chefs, where raw materials are combined and cooked or processed, resulting in the blending of multiple components to create a “dish”. However, the food engineering involved in current culinary processes increasingly resembles processes taking place at universities, with extensive testing of exotic ingredients requiring control and high levels of precision. Food is also a highly regulated commodity where, in order to bring a food to market, regulatory requirements must be met and businesses must be able to support any claims made by reference to evidence. This project will address a number of questions through creative technologies and engineering research, designing and demonstrating prototypical digital foods that provide novel physical and digital eating experiences.

Industrial Co-bots Understanding Behaviour (I-CUBE) 

Collaborative robots, or co-bots, are robots that work collaboratively with humans in a productivity-enhancing process, most often associated with the manufacturing and healthcare domains. Despite the aim to collaborate, co-bots lack the ability to sense humans and their behaviour appropriately. Instead, robots rely on mechanical, hierarchical instructions given explicitly by the human, rather than drawing on more natural cues such as pose, expression, and language to determine their behaviour. In turn, humans do not understand how the robot makes its decisions. This project will enable research in human-robot cooperation, bringing together the ‘know-how’ and research interests of human factors work, automatic human behaviour analysis and machine learning. Based on a laundry sorting co-bot task, the project will investigate how humans and robots explain their tasks, cope with mistakes and guide each other to overcome (impending) failure.