Ethical Pitfalls and the Internet of Things
June 13, 2017 | Alex Richardson

According to a growing number of industry experts, the next major shift in the consumer technology climate is going to come from the Internet of Things. While there are a lot of clear benefits to be drawn from a more connected world of devices, there are some clear ethical concerns to be navigated as well.

For the most part, these concerns fall into three categories: security, privacy, and agency. It isn’t clear how society will solve these issues, and we don’t presume to have all the answers. If we are going to invite billions more digital devices into our homes, stores, offices, and even streets, however, it is important for us to have a frank discussion on the subject and perhaps lay down some ground rules.


What is the Internet of Things?

First, a quick overview of the basics. This nebulous term describes a technology that is almost equally nebulous. Because the cost and size of the hardware needed to run a simple, internet-connected computer keep shrinking, we’re starting to embed these computers into all sorts of ordinary devices. Office thermostats? Check. Utility meters? Check. Smart toasters? Check.

Acknowledging Aquicore’s inherent bias as an IoT company, the technology has some very useful applications for commercial real estate and for society at large. That said, even we question the need for a remote-controlled kitchen device that still has to be manually loaded with bread.

As is the case with a lot of consumer technology, the part that you see is just the tip of the iceberg. The real value is in the cloud software that draws insights from IoT-enabled sensors or makes use of IoT-enabled actuators. For instance, connecting utility meters to the internet provides some value, because engineers no longer have to travel to meters on every floor, but there is far more value in using the data coming through those meters to start a demand response strategy or to catch a major leak before it causes water damage.
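
To make that concrete, here is a minimal sketch of the leak-detection idea, assuming hypothetical interval readings from a water meter. The function names, thresholds, and data are illustrative only, not Aquicore’s actual logic:

```python
# A minimal sketch of catching a possible leak from interval water-meter data.
# Thresholds, names, and readings are hypothetical, for illustration only.
from datetime import datetime

def flag_possible_leak(readings, night_start=1, night_end=4, threshold_gal=5.0):
    """readings: list of (timestamp, gallons consumed in that interval).
    Flags sustained overnight flow, when a vacant building should be near zero."""
    overnight = [gal for ts, gal in readings if night_start <= ts.hour < night_end]
    if not overnight:
        return False
    # If every overnight interval still shows meaningful flow, something is running.
    return min(overnight) > threshold_gal

# Constant flow through the night suggests a stuck valve or a leak.
readings = [(datetime(2017, 6, 12, hour), 6.2) for hour in range(24)]
print(flag_possible_leak(readings))  # True -> worth dispatching an engineer early
```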

There is a dark side too, of course, and our ability to mitigate the risks present in this quickly growing technology will rely on our frank assessment of them now.


Security

The most well-publicized threat from IoT is security. In part because small, low-complexity devices are inherently vulnerable and in part because the industry has yet to get serious about digital security, there has already been a major attack on IoT devices. It is unlikely to be the last.

The Mirai botnet attack infected hundreds of thousands of devices with a crude but effective approach. It trawled the internet for IoT-enabled sensors, cameras, routers, and (probably) toasters that were still using the factory default password and took them over. These devices were then used to make repeated requests designed to overwhelm important U.S. websites in what is called a distributed denial of service (DDoS) attack.
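
For illustration, here is a minimal sketch of the weakness Mirai exploited, written as an audit of devices you own rather than as an attack tool. The hostnames and credential list are hypothetical, not Mirai’s actual dictionary:

```python
# A sketch of auditing your own devices for factory-default telnet credentials,
# the weakness Mirai exploited. Hostnames and credentials below are illustrative.
import telnetlib

DEFAULT_CREDS = [("admin", "admin"), ("root", "root"), ("admin", "1234")]

def still_uses_default_password(host, port=23, timeout=5):
    for user, password in DEFAULT_CREDS:
        try:
            tn = telnetlib.Telnet(host, port, timeout)
            tn.read_until(b"login: ", timeout)
            tn.write(user.encode() + b"\n")
            tn.read_until(b"Password: ", timeout)
            tn.write(password.encode() + b"\n")
            banner = tn.read_some()
            tn.close()
            if b"incorrect" not in banner.lower():
                return True  # the device accepted a factory default
        except (OSError, EOFError):
            continue  # no telnet service, timeout, or connection closed
    return False

for device in ["camera.local", "thermostat.local"]:  # devices on your own network
    if still_uses_default_password(device):
        print(device + " is still using a default password -- change it.")
```

Mirai automated exactly this kind of guessing at internet scale, which is why changing (or banning) factory-default credentials removes most of its attack surface.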

As Kaveh Waddell noted in The Atlantic, the question of legal liability is complicated here. The criminals behind this and other hacks are criminally liable, of course, but they are hard to find and often just as hard to extradite from their countries of origin. It may be most effective for government regulators to set security standards for IoT devices and then hold manufacturers responsible when their devices are implicated in an attack.

The upside of this approach is that the people with the most immediate ability to fix the problem are incentivized to spend the time and money needed to do so. However, experts agree that more rules and liabilities would probably slow down private-sector development in this area. It also places the burden of these attacks onto parties that haven’t committed any wrongdoing. In this respect, though, the approach is hardly unique. Regulators generally place the responsibility for credit card fraud on the companies that issue those cards for similar reasons.


Privacy

The IoT business community needs to address two sides of the privacy issue. First, there are privacy issues that are tied to security, and second, there is the question of what data device manufacturers should be allowed to collect.

Unsecured sensors and cameras can just as easily be used to violate their owners’ privacy as they can to carry out a DDoS attack. Already, there have been reports of terrifying, if not ultimately dangerous, cases in which hackers have taken over IoT-enabled baby monitors and used them to shriek at strangers’ babies over the internet. Access to other sensors in a hacking victim’s home or office could allow malicious actors to violate their physical or financial security.

The other side of the coin is the unprecedented amounts of data that tech companies around the world are currently hoovering up about their customers. Google, Apple, and their ilk already have thousands upon thousands of data points about each customer. With billions more sensors and devices in people’s environments, the amount of data generated by the average person will only grow. Whether companies should be able to collect all of this personal data, who should have access to it if they do, and what should be done with it are all questions that society will need to answer.

The solution to the first part of the privacy issue is to improve the digital security of IoT devices, but the second part is trickier. The data gathered from customers is used to target ads, which produces the revenue that keeps services many consumers take for granted, like Google and Facebook, free. The majority of consumers appear willing to trade at least some privacy for access to these services. On the other hand, allegations like those leveled at Cambridge Analytica, a group that allegedly used large data sets and “big nudging” to push the U.S. to elect Trump and Britain to exit the EU, raise the specter of malicious misuse of this private information. Society as a whole needs to have a hard conversation about just how much privacy it is willing to cede through internet-connected devices, ideally before there are tens of billions of them in the world.


Agency

The last issue is tied closely to the other two. As IoT-enabled devices become more closely integrated into our lives, they could end up robbing us of some degree of agency, either through “big nudging” or by making decisions for us that we aren’t aware are being made.

Of the two, “big nudging” is clearly the scarier issue. The idea is that statisticians can study large sets of consumer data and use the insights they draw to predict the thoughts, feelings, and behavior of individuals. These predictions could then be used to subtly manipulate society one person at a time, “nudging” it toward a specified outcome. There is some debate as to whether this approach is effective today, and claims like Cambridge Analytica’s purported ability to predict an individual’s behavior better than a significant other can, given enough data points, have been met with serious skepticism in the industry. Whether or not the technology is mature now, it may get there. When that happens, it will be very important to know who has access to the data that comes through all of those IoT-enabled devices.
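
As a toy illustration of what “predicting behavior from data points” means in practice, here is a minimal sketch with invented features and data; real efforts would rely on far richer profiles, and this is not a claim about any firm’s actual models:

```python
# A toy sketch of the "big nudging" idea: fit a model on consumer data points
# and estimate how receptive an individual might be to a targeted message.
# Features, labels, and values are entirely invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per person: [age, hours online per day, "likes" on topic X]
X = np.array([[25, 6, 40], [62, 1, 2], [34, 4, 25], [51, 2, 5], [29, 7, 55]])
y = np.array([1, 0, 1, 0, 1])  # 1 = responded to a targeted message in the past

model = LogisticRegression().fit(X, y)

new_person = np.array([[31, 5, 30]])
print(model.predict_proba(new_person)[0][1])  # estimated receptiveness, 0 to 1
```

The ethical worry is not the arithmetic, which is ordinary, but the scale of the data feeding it as billions more sensors come online.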

Less “big brother,” but still potentially pernicious, are the small, seemingly innocuous decisions that some IoT devices are already making for their owners. Writing for The Guardian, Adam Greenfield noted that the smart speaker Google Home makes restaurant suggestions and can translate those suggestions into reservations. There is potential here for minor advertiser manipulation, especially if paid placements aren’t clearly denoted, but just as significant is the device’s default use of the OpenTable app to book the table. Some consumers have ethical concerns about using that app, and even for those who don’t, Google Home demonstrates how decisions can be made for consumers in the name of eliminating friction.

As with privacy concerns, neither side of the agency issue is outright wrong. Proponents of “big nudging” would argue that their algorithms simply get political arguments in front of people who are receptive to hearing them and that their arguments are protected by free speech in any case. Similarly, while there is something uncomfortable about discovering that you have used an app without realizing it, the default does make the process of booking a table easier. Again, this is an area where we need to consider both sides of the issue as a society and come to some sort of consensus.

The Internet of Things is already here. Businesses and individuals are already finding significant value from using IoT-connected devices. As their numbers grow, though, as they are predicted to do, it will be important for us to make hard decisions about how we are going to deal with the ethical issues that they raise.

About The Author

Alex Richardson is a staff writer at Aquicore. He writes about green policy, energy efficiency, and innovation that affects commercial real estate. Alex.Richardson@aquicore.com.