Professor Steve Benford explains the Smart Products Beacon

The Smart Products Beacon is tackling two big questions. What are smart products? And how are they made?

A smart product is one that uses digital technologies, and especially personal data, to become more adaptive, personalised and valuable. It captures data throughout its lifetime – through both manufacture and use – and uses this to adapt itself to consumers. In so doing it blends aspects of goods, services and experiences – the three dominant product logics from economics and business – into new forms. Sounds a bit abstract? Let’s take an example…

There was a time when a car was made of rubber and iron, and it was something you bought and owned. But those days are passing. A modern car is part software: it contains an engine management system that can adapt its driving behaviour, and it hosts a variety of other services for navigation and entertainment. Some might say the modern car is really a mobile phone on wheels. For many consumers, a car is now also a service that they lease rather than a good that they own.

But the transformation doesn’t end there. In a possible future world of autonomous cars, mobility itself may be the service, with consumers summoning vehicles on demand that adapt themselves on the fly to their preferences and history of previous travel. In this world, the physical units become interchangeable and it is the data that matters. You step into a car and it becomes yours by loading your personal profile and adapting itself to you. In this case, the car is the data. As Neo learns when he visits the Oracle: “There is no spoon” (only data).

If smart products are made from new materials – personal data – then they are also made in new ways. Digitally native products such as social media are inherently co-created. Consumers either explicitly provide content in the form of the videos and photos they upload directly or implicitly provide it through their records of searches, views and likes. Smart products, even future cars, will be similarly co-created as both manufacturers and consumers engage with digital platforms and data-driven product life-cycles.

This raises a further important question – how can consumers trust future products with their personal data? How can they be sure that products are safe and secure and that they can retain control of their own data?

This vision of co-creating trusted smart products lies at the heart of our beacon. We think that it applies to all manner of products, from high value goods to consumer goods to digital media experiences. We’re looking forward to exploring the possibilities further over the coming years.

Keynote talk by Sarah Brin, Strategic Partnerships Manager, Meow Wolf

The Smart Products Beacon is delighted to be supporting a keynote talk by Sarah Brin, Strategic Partnerships Manager, Meow Wolf at the Broadway on Monday 9th December, 6pm.

Sarah will speak about the creative challenges and questions surrounding the development of immersive experiences supported by emerging technologies.

An art historian and creative producer, Sarah specialises in previously unanticipated situations involving technology, the public, and organisational change and infrastructure. She’s created programs, exhibitions, and publications for organisations including Autodesk, SFMOMA, the British Council, MOCA Los Angeles and the European Union. She cares about building just, sustainable and inviting things.

Sarah will cover key aspects of Meow Wolf’s creative process, offer recommendations for creatives working at the intersection of art and technology, and address questions regarding the responsibilities of cultural producers in times of dire political crisis.

Meow Wolf are a New Mexico-based arts and entertainment group creating immersive and interactive experiences that transport audiences of all ages into fantastic realms of story and exploration. This includes art installations, video and music production, and extended reality content.

Meow Wolf’s radical practice champions otherness, weirdness, radical inclusion and the power of creativity to change the world.

Book your tickets here.

Connected Everything II: Launch of Feasibility Studies Call

Connected Everything is the EPSRC-funded network focused on addressing the question “how do we support the future of manufacturing in the UK?”. In our first three years of funding, we supported the development of the Manufacturing Made Smarter proposal, including directly inputting into the definition of its key research challenges. We have now been awarded a further three years of funding to deliver a network of networks that will accelerate multidisciplinary collaboration, foster new collaborations between industry and academia, and tackle emerging challenges underpinning the UK academic community’s research in support of people, technologies, products and systems for digital manufacturing. Through a range of activities, including feasibility studies, networking and thematic research, Connected Everything II (CEII) will bring together new teams within a multidisciplinary community to explore new ideas, demonstrate novel technologies in the context of digital manufacturing, and accelerate the impact of research into industry.

As one of our initial activities, we are launching our first funding call for feasibility studies at this event in London on the morning of 28 November. Places are limited so please register early.

My internship on the RoboClean project – Jane Slinger

My internship with the RoboClean team involved developing a custom Alexa skill to control Neato vacuum cleaners by voice. This will allow the other parts of the project, which involve web systems and multi-agent systems, to link with the voice interface later if required. I also helped run a study to find out how users would interact with the potential system in a lab environment.

I enjoyed the work as it was in an area that interested me and had some challenges in the code to overcome, which led me to learn more about how the systems worked and to explore different solutions. It was nice to build on the Alexa development skills I learnt in my 3rd-year project and to link to the Neato API through HTTP requests and a third-party library. This included setting up Account Linking on the Alexa skill and then adapting some of the library code to work with Node.js on the backend instead of the front-end JavaScript methods that were already in place.
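As a rough illustration of the general shape of such a skill (not the project’s actual code), the sketch below shows a custom intent handler that reads the account-linking access token and makes an HTTP call to start a clean. It is written in Python with the Alexa Skills Kit SDK purely for illustration – the project itself used a Node.js backend and a third-party Neato library – and the intent name and endpoint URL are placeholders rather than the real Neato API.

```python
# Illustrative sketch only: intent name and endpoint are placeholders,
# not the RoboClean project's actual code or the real Neato API.
import requests
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


class StartCleaningIntentHandler(AbstractRequestHandler):
    """Handles a hypothetical StartCleaningIntent."""

    def can_handle(self, handler_input):
        return is_intent_name("StartCleaningIntent")(handler_input)

    def handle(self, handler_input):
        # With account linking configured, each request carries an access
        # token for the user's linked robot account.
        token = handler_input.request_envelope.context.system.user.access_token
        if token is None:
            return (handler_input.response_builder
                    .speak("Please link your robot account in the Alexa app first.")
                    .response)

        # Placeholder endpoint standing in for the vendor's cleaning API.
        requests.post("https://robot-api.example.com/v1/clean",
                      headers={"Authorization": f"Bearer {token}"},
                      json={"action": "start"},
                      timeout=10)
        return (handler_input.response_builder
                .speak("Okay, starting the clean.")
                .response)


sb = SkillBuilder()
sb.add_request_handler(StartCleaningIntentHandler())
handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda
```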

Designing the interactions between the robot and the user was also very interesting, as I wanted to make sure that the system would prompt for the necessary information about the robot and the location to clean without becoming annoying for the user.

The internship will help with my studies and future work: it has given me experience of working with a research team, building on areas I already had some experience in and expanding into technical skills I hadn’t used before, all of which will be useful in the future.

Written by Jane Slinger

I-CUBE Call for Participants

We are looking for participants for the I-CUBE project’s first study, taking place this November at the School of Computer Science on Jubilee Campus.

This initial call is for employees of the University and, more generally, members of the public. We will make a separate call for student participants. All participants need to be 18 years old or over.

If you are interested in taking part please use this Doodle link: https://doodle.com/meetme/qc/8tbM005BB7 to select your appointment and participate in our study.

The study’s task is to instruct a trainee ‘robot’ to sort a pile of clothes into separate washing loads according to a detailed list of tasks. This is to examine human interactions in a prescribed situation. There is a short questionnaire-interview to complete after the task.

You will be both video- and audio-recorded while instructing and responding to the trainee ‘robot’, and audio-recorded again for the interview.

The experiment is expected to take approximately 45 minutes of your time and you will be reimbursed with £10 worth of shopping vouchers.

Charlotte Gray shares her experiences of working on RoboClean

I was introduced to the RoboClean project at Horizon whilst interning with the Advanced Data Analysis Centre. The project investigates the ways in which end-users interact with a robot vacuum cleaner and how a robot responds to user utterances; the aim being to inform its effective design and use within food factories.

I was invited to continue my internship for 5 more weeks within Horizon to help with the analysis of data collected through an elicitation study. Overall, this has been a really valuable and rewarding experience. Coming from an academic background in Sociology, I found that working closely with researchers specialising in Computer Science exposed me to different research aims and challenges than I had previously encountered. This has been insightful for me: it has not only helped me develop new skills in research analysis and interview techniques, but also let me apply the principles of a range of research methods gained during my academic studies over the past two years to cutting-edge technological developments.

I have been responsible for transcribing participants’ audio data, analysing visual data, and writing a summary report of participants’ interview responses. The focus of the report was on the benefits, limitations and disadvantages users experienced in the user-robot interactions. Attending a range of team meetings has also been beneficial for understanding interactions within a work environment, especially where individuals from across a range of disciplines are working together. Combined with the skills I have learned in workload prioritisation and management, this has made me confident to face future work situations and dilemmas. Additionally, I have written literature reviews on the topic of human-robot interaction. Being able to explore these new topics has also helped me see how issues explored in Sociology are increasingly influenced by the world of technology – for example, how individuals’ day-to-day lives are mediated by the introduction of robots to the workplace. Working alongside the multidisciplinary projects throughout Horizon has therefore also been interesting, clearly showing the benefit of collaborative projects in producing innovative findings.

Contributing to a research project that is aiming for publication in a research journal has been hugely rewarding and exciting, and has made the idea of working in a similar environment after graduating a lot more appealing.

Written by Charlotte Gray

Smart Products Beacon – Soonchild and Creative Captioning – Tailoring theatre productions for D/deaf audiences

For theatre audiences on a spectrum from D/deaf to hard of hearing, it is often difficult to keep up with performances. Even where a performance is signed or captioned, these accessibility additions often feel ‘tacked on’ and are typically located away from the action on stage, requiring audiences to split their attention between the performance and the support. Working with Red Earth Theatre, a production company with a long history of “Total communication” in which actors sign on stage, we have been developing ways to deliver accessibility right into the heart of a performance.

Red Earth’s new show Soonchild is touring the UK now, supported by funding from the University of Nottingham Smart Products Beacon as well as the AHRC and the Arts Council. The show is captioned right across the set with beautiful, designed-in images, video and text, delivered using new software developed at the Mixed Reality Laboratory.

The project team developed software called ‘captionomatic’, which uses the principles of projection mapping to turn whole theatre sets into projection surfaces. While projection mapping itself is by no means a new concept, our approach has been both to simplify the process and to fit it into the wider theatre-tech ecology. Our innovation is to take a 3D model of the set – easily produced from the scale model of the set pieces typically built for any production – and project it onto the real set, using a simple system of point matching to correctly align the physical set with its digital twin. Once that 3D model is in place, we can project images, video, text and whatever else onto the set pieces, respecting occlusion and creating an immersive canvas on which to display content.
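The write-up does not spell out how the point matching is solved, but a common way to pose this kind of alignment is as a perspective-n-point problem: a handful of known positions on the 3D set model are matched to the projector pixels observed to land on them, and the projector’s pose is recovered so that any model point can then be mapped to a pixel. The sketch below illustrates that idea with OpenCV; the point values, intrinsics and pinhole-projector assumption are illustrative only and are not the captionomatic implementation.

```python
# Illustrative alignment sketch, assuming a pinhole model of the projector.
import numpy as np
import cv2

# Corresponding points: 3D positions on the set model (metres, model space)
# and the 2D projector pixels observed to hit those positions (made-up values).
model_points = np.array([[0.0, 0.0, 0.0],
                         [2.0, 0.0, 0.0],
                         [2.0, 1.5, 0.0],
                         [0.0, 1.5, 0.0],
                         [1.0, 0.75, 0.4],
                         [1.0, 0.0, 0.4]], dtype=np.float64)
pixel_points = np.array([[212, 655], [1690, 630], [1702, 148],
                         [220, 170], [960, 400], [955, 612]], dtype=np.float64)

# Rough intrinsics for a 1920x1080 projector; the focal length is a guess and
# would in practice be calibrated or tuned by eye.
K = np.array([[1900.0, 0.0, 960.0],
              [0.0, 1900.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Recover the projector pose relative to the set model.
ok, rvec, tvec = cv2.solvePnP(model_points, pixel_points, K, dist)

# Any point on the digital twin can now be mapped to projector pixels,
# e.g. the corners of a surface we want to place a caption on.
caption_corners = np.array([[0.2, 1.2, 0.0], [1.8, 1.2, 0.0]], dtype=np.float64)
pixels, _ = cv2.projectPoints(caption_corners, rvec, tvec, K, dist)
print(pixels.reshape(-1, 2))
```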

We provide tools to read in the script from a Word document, produce a complete set of captions, and then generate the necessary cues, which can be fired by QLab (or similar show-control software) to drive our system. Theatre designers need only edit the target locations and the look and feel of the text to create beautiful captions around their sets. Different sets of captions can be delivered for different audiences as necessary – so some shows may be fully captioned while others may only have key points highlighted. We know from our research that different audiences have different preferences for how captions are delivered, and our system allows theatre companies to quickly and confidently make adjustments – even between performances of the same show. Setting up the system in a new location takes only a few minutes, something that is absolutely necessary for touring productions.
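As an illustration of the script-to-cues step, the sketch below reads a Word document that is assumed to contain one spoken line per paragraph and emits a numbered caption cue list as JSON. The file layout, the ‘target’ field naming a set surface, and the cue numbering are assumptions made for the example; the real tool’s formats and its QLab integration are not shown here.

```python
# Illustrative sketch: turn a script in a Word document into a simple cue list.
# Assumes one spoken line per paragraph; formats are examples, not the tool's.
import json
from docx import Document  # pip install python-docx


def build_cue_list(script_path, default_target="backdrop"):
    doc = Document(script_path)
    cues = []
    for i, para in enumerate(p for p in doc.paragraphs if p.text.strip()):
        cues.append({
            "cue": i + 1,               # cue number to be fired from QLab
            "text": para.text.strip(),  # caption text to show on the set
            "target": default_target,   # named surface in the 3D set model
        })
    return cues


if __name__ == "__main__":
    cues = build_cue_list("soonchild_script.docx")
    with open("caption_cues.json", "w", encoding="utf-8") as f:
        json.dump(cues, f, indent=2, ensure_ascii=False)
```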

More broadly, this new approach to projection mapping allows substantial creativity with digital media in theatre that extends beyond accessibility. Critically, it substantially reduces the technical barrier to entry of including projection-mapped media in a show.  Soonchild demonstrates this with some beautiful interactions between live actors and pre-recorded media – in this case shadow puppetry, projected on set as if live.

The software was demonstrated at an accessible theatre technology day at the Arena Theatre in Wolverhampton, and plans for additional workshops and training are in the works. Although it was developed for Soonchild, the software has been designed to be easily applicable to many different types of show; it is open source and free, requiring only off-the-shelf hardware (a PC and a projector). We will also be making the hardware used in the show – a projector powerful enough to compete with theatrical lighting – available for other production companies to borrow and experiment with once Soonchild’s tour is complete.

This work was developed in partnership between Red Earth Theatre, The Mixed Reality Laboratory, The School of English and Department of Modern Languages and Cultures.

The project website is available here.

Soonchild will be performed at the Lakeside Arts Theatre in Nottingham on Sunday 24th November – more information can be found here.

AI Technologies for Allergen Detection and Smart Cleaning Food Production Workshop

In collaboration with the AI3 Science Discovery (AI3SD) and Internet of Food Things (IoFT) EPSRC Networks, the RoboClean team ran a workshop in London on the 17th of October. The focus of the workshop was to discuss how digital technologies such as AI, sensors and robotics can be used for enhanced allergen detection and factory cleaning within food production environments. The workshop was well attended by a range of stakeholders from industry, academia and organisations such as the Food Standards Agency.

The morning of the workshop had three speakers. Nik Watson from the University of Nottingham gave a talk on the future of factory cleaning, covering a range of research projects from the University that have developed new digital technologies to monitor and improve factory cleaning processes. The second talk was from AI3SD lead Jeremy Frey from the University of Southampton. Jeremy’s talk gave an introduction to AI and covered a range of new sensors which could be used to detect the presence of allergens in a variety of food products and environments. The final talk was delivered by Martin Peacock from Zimmer and Peacock, a company that develops and manufactures electrochemical sensors. Martin gave an introduction to the company and the technologies they develop before demonstrating how their sensor could be connected to an iPhone to determine the hotness of a chilli sauce. Martin’s talk finished by discussing how electrochemical sensors could be used to detect allergens within a factory environment.

The afternoon of the workshop focused on group discussions on the following four topics, all related to allergen detection and cleaning within food production:

  • Data collection, analysis and use
  • Ethical issues
  • Cleaning robots
  • Sensors

Each group had a lead; however, delegates moved between tables so that they could contribute to more than one discussion. At the end of the workshop the lead from each group reported back with the main discussion points covered by the delegates. The delegates on the ‘robotics’ table reported that robots would play a large role in the future of factory cleaning, as they would free up factory operators to spend time on more complicated tasks. The group felt that the design of the robots was essential and suggested that new factories should also be designed differently to make robot cleaning easier. The group also thought that effective communication with the robot was a key issue needing further research. The ‘sensors’ group reported that any new sensors used to detect allergens or levels of cleanliness would need to fit into existing regulations and practices, but would be welcomed by the industry, especially if they could detect allergens or bacteria in real time. The ‘data’ group reported a need for data standards relevant to industrial challenges, as well as a need for open-access data to enable the development of suitable analysis and visualisation methods. The ‘ethics’ group discussed numerous key topics including bias, uncertainty, transparency, augmented intelligence and the objectivity of AI.

Halfway to the Future

The Smart Products Beacon is delighted to be supporting Halfway to the Future – a symposium at the Albert Hall Conference Centre in Nottingham on the 19th and 20th November, exploring the past, present and future of HCI and design-based research and marking the 20th anniversary of the Mixed Reality Lab at the University of Nottingham.

The symposium will address a range of key themes with dedicated single-track panels, each anchored by prominent keynote speakers reflecting upon one of their influential works in light of subsequent developments and present concerns. This will be followed by presentations of current related research, short future-oriented provocations, and a panel discussion/Q&A. The symposium will also incorporate an exhibition of interactive works and a poster session.

Symposium programme
