Guest post by: Dr Brian Pickering of IT Innovation at the University of Southampton
I live in a small Georgian listed house, and was recently forced into the 21st century when a central heating engineer sucked his teeth (as they do) but, instead of selling me a new boiler (as they usually do), convinced me to install a heating control system. This, he assured me, would not only fix the problem he couldn’t seem to, but would also let me communicate with my boiler remotely (which it does). He didn’t just mean I could sit in the living room with a wireless thermostat, but really: from any web browser, tablet or smartphone, I can monitor what’s going on, change schedules, and turn the temperature back down when my son thinks no one is looking and turns it up. At last, a good use of technology…
Just a minute, though. If I can do all of this remotely, then presumably so could anyone else. Forget my central heating for a moment; I need to think about this. I’m used to being concerned about data privacy and protection. If I use social networks and easily available webmail, I should be mindful of what providers expect me to accept[1],[2]; and I know about the concerns around care.data[3]. Certainly for medical records, concern has been publicly discussed for some time[4]. More recently, though, it appears I should be thinking about other aspects of healthcare: the embedded devices that monitor my condition when I train, keep my heart beating, or release the right levels of insulin at the appropriate times are vulnerable to attack[5].
And all of this is against a background of our increasing use of sensors and embedded devices. Back in 2012, the number of non-PC devices connected to the Internet exceeded the number of PCs. The number of household devices with embedded ICT capabilities is set to overtake all other IoT-connected devices this year (2014), and to be almost double that of any other device type by 2020; sensors and tags in general will see exponential growth over the same period. I really think it’s time to stop and regroup.
This is not just about the robustness of technology against misuse and external attack. It is really about how we perceive technology and whether we want it to be so pervasive. The big question is whether users really trust the technology, and if so, why. The TRIFoRM (TRust in IT: Factors, metRics, Models) project is looking at just that. Starting with a small, specific group of users, who either need technology to support them day-to-day or use it as a necessary adjunct to their work, we want to establish whether they trust technology at all; what it is about it that they trust; and why they are willing to trust it. Is it simply that we are being forced to take ever greater risks because there are no alternatives? Are we relying on legal regulation? Or is there something inherent in the design or implementation that makes it feel OK?
ACKNOWLEDGEMENTS
The Mirror and BBC News logos and headlines are used in accordance with Section 29 of the Copyright, Designs and Patents Act 1988.
The upper image is in the public domain, available at http://commons.wikimedia.org/wiki/File:ATM_Masalli.jpg?uselang=en-gb; the lower image, http://commons.wikimedia.org/wiki/File:Next_Generation_Backscatter_Device.jpg, is the copyright of Tek84 (www.tek84.com). There is no implied or actual connection between the headline and the product depicted.
[1] http://europe-v-facebook.org/EN/en.html
[2] http://www.theguardian.com/technology/2013/aug/14/google-gmail-users-privacy-email-lawsuit
[3] https://medconfidential.org/how-to-opt-out/
[4] Rindfleisch (1997). Privacy, Information Technology and Health Care. Communications of the ACM, Vol. 40, No. 8.
[5] Maisel & Kohno (2010). Improving the Security and Privacy of Implantable Medical Devices. New England Journal of Medicine, Vol. 362, No. 13, 1164-1167.