Smart tech sprints forward, but the law lags behind
When a popular fitness tracker was invented, its designers could not have foreseen that it would play a role in a homicide carried out in the basement of a suburban Connecticut home. But it did. The device's data, including the victim's heart rate, steps taken, stairs climbed, calories burned and distance traveled, formed a timeline of her movements and condition, enough for prosecutors to charge a man with his wife's murder. The case is currently awaiting trial.
Fitness trackers are one of many gadgets that make up the internet of things (IoT). The term applies to a rapidly expanding range of “smart” products, including home security systems, routers, DVRs, home appliances, thermostats, air- and water-quality monitors, satellite dishes, and biometric and medical devices that are directly or indirectly connected to the internet.
When people buy these products, they aren’t usually thinking about how the data collected might be used in a murder trial, sold for a profit or hacked into. But IoT technology is increasingly raising legal and ethical questions never before considered. Researchers at Arizona State University are exploring law and policy options for protecting users’ privacy and security without stifling innovation in the process.
A pacing problem
Early last fall, then-California Gov. Jerry Brown signed into law a cybersecurity bill governing the internet of things. The law, which will take effect Jan. 1, 2020, calls for makers of smart devices to ensure that each gadget has security features that "protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure."
Some say the law is too vague, that it won’t offer sufficient protection to businesses and individuals. Others are wary of aggressive government regulation of the internet, including the internet of things.
Meanwhile, traditional law is hamstrung when it comes to regulating and adjudicating matters concerning IoT, especially security and privacy.
“Existing law is not going to work with these new technologies coming so quick and moving too fast for traditional law to keep up with,” says Gary Marchant, an ASU Regents’ Professor of law. “We call that the pacing problem.”
That is, the internet of things is growing so fast that it is bypassing long-standing internet principles, such as limits on what data are collected and how much, says Marchant, who directs the ASU Center for Law, Science and Innovation.
Life’s too short
Traditionally, data were collected only as needed and kept as long as needed, says Marchant. “The idea now is that if you can collect and use data as minimally as possible, then you reduce privacy risk,” he says. “The internet of things turned that on its head.”
That’s because the internet of things is based on the premise that if you collect unlimited data, then you may find ways to use that data and do things that you wouldn’t have thought of before, he says.
In 2015, the Federal Trade Commission examined the question of data collection and privacy. It recommended minimizing data collection but stopped short of calling for strict regulation of what data could be collected or how those data would be used.
The bottom line, says Marchant, is that the FTC concluded that the internet of things presents big opportunities and benefits. If traditional privacy principles were imposed, like ones that severely restrict data collection, the internet of things would be squashed.
“It’s always dangerous to come in with some heavy regulation early,” says Marchant. “Early on, you really don’t know where things are going to go.”
Nor do experts know where the internet of things is going when it comes to informed consent, another long-standing principle of the internet. Informed consent notices, typically delivered as a deluge of legalese, tell users who will use their data and how, and ask them to agree.
But when it comes to the internet of things, many gadgets don’t have a screen.
“There’s no place to consent,” says Marchant.
Even if the gadgets did have a screen so one could read the user agreement in all its shining legalese, life is too short. A recent study estimated that it would take an internet user 201 hours — more than eight entire days — to read every privacy agreement on every website that they visited in a given year.
A question of ownership
Even if users’ data are collected with consent and kept secure, other considerations remain, like data ownership, intellectual property rights and legal liability. But which data users own, and how much control they have over them, remains to be determined.
Broadly speaking, there are two types of data. One type is data that you own. For example, if you reveal your Social Security number to an organization on a website, you still own that number. The other kind, such as your browsing history, is data that have been created about you. This is not necessarily data that you can call your own.
Data are thought of as an asset, a commodity, and there are markets for them, says Michael Goul, associate dean for faculty and research at ASU’s W. P. Carey School of Business.
Data can be saved and repositioned and integrated with different data sets to be used in different ways to create leverage for businesses, explains Goul. He studies economic issues surrounding the seemingly countless questions that arise from data collection and sharing.
If data from two or more sources are comingled and used to create intellectual property, who owns that intellectual property and in what proportion? This intellectual property could be in the form of an analytical predictive model or a machine learning algorithm that was trained using the data. If two or more companies own parts of the training data, can they claim co-ownership of the algorithm?
Where does all the data go?
Additionally, what measures must be taken to secure the data and algorithms? What happens to data gathered in the past when they no longer reflect current conditions? Would an algorithm trained on those data need to die a natural death, a phenomenon known as model decay? Should data and algorithms go through a third party to ensure all parties’ adherence to contractual obligations?
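The idea behind model decay can be illustrated with a deliberately simple sketch. Everything here is hypothetical: the "model" is just a historical average, and the numbers are invented taxi-time figures, but the pattern is the same one that affects real predictive models when conditions drift away from their training data.

```python
# Toy illustration of "model decay": a model fitted on historical data
# grows less accurate as the world drifts away from that history.
# All data and function names here are hypothetical examples.

def fit_mean(samples):
    """Fit the simplest possible model: always predict the historical mean."""
    return sum(samples) / len(samples)

def mean_abs_error(model, samples):
    """Average absolute error of the constant-prediction model."""
    return sum(abs(x - model) for x in samples) / len(samples)

# Historical taxi times (minutes) the model was "trained" on.
old_data = [10, 11, 9, 10, 12, 10]
# New conditions: congestion has pushed taxi times upward.
new_data = [18, 20, 19, 21, 17, 20]

model = fit_mean(old_data)
err_then = mean_abs_error(model, old_data)
err_now = mean_abs_error(model, new_data)
print(f"error on training-era data: {err_then:.2f}")
print(f"error under drifted conditions: {err_now:.2f}")
```

The error under the new conditions is many times larger than under the old ones, which is why stale training data raises the question of retiring or retraining the algorithm built on it.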
Take, for example, a twin-engine aircraft equipped with hundreds of sensors touching down at Phoenix Sky Harbor International Airport. When the jet finally pulls into a gate, it arrives with nearly a terabyte of data gleaned from its journey, data that include taxi time from the gate, departure and arrival time, radar flight data, current flying conditions, fuel consumption, anticipatory maintenance needs and route alterations.
Where does that data go? Who keeps that data, and who owns it? If the data is used to train an algorithm that predicts how long the aircraft might need to stay at the gate for boarding, maintenance and refueling, who owns the algorithm? The airline? The aircraft manufacturer? The government?
“It’s a wonderfully complex contractual set of business-to-business relationships,” says Goul.
“We have so many types of internet of things,” adds Marchant. “We have the smart city, the smart home, the smart factory, the smart schools, smart stadiums. They’re going to present different scenarios and different risk-benefit calculations. One law is not going to work well. So, what we need is innovative types of approaches.”
A hard look at soft law
One of those innovative approaches involves what’s known as soft-law mechanisms. These are quasi-legal instruments that are not necessarily legally binding, such as codes of conduct, best practices and private standards. Marchant and his colleagues have been weighing these ideas.
“How can we apply these soft-law mechanisms to the internet of things to try and provide some standards, to try and provide some principles and guidelines that companies should try to follow to provide protection and privacy without squashing the internet of things?” asks Marchant.
One approach that he has in mind is the formation of a governance coordinating committee, a private network that would include representatives from government, industry, privacy groups, human rights groups and the public. Such a heterogeneous group would come up with a code of conduct or statement of best practices.
“Companies might want to sign on to this type of agreement and voluntarily commit to it,” says Marchant. To be successful, it would have to represent the major sectors that care about the internet of things, he says. “Each of those sectors would have to have a seat at the table for it to work.”
Practically speaking, signed standards could be enforced through tort law, an area of law intended to provide redress for those harmed by others’ wrongful acts. Or insurance companies could require companies to sign on before they receive coverage. Or boards of directors could hold their management to these standards.
Despite the IoT’s vulnerabilities and the law’s quest to keep pace with this exploding electronic ecosystem, Goul is optimistic about the opportunities ahead.
“There’s huge value to business and society because the internet of things does streamline processes in many ways, allowing us to work faster, better, cheaper, and be better informed,” he says. “There’s a huge upside to this emerging technology in terms of productivity and the kind of things that benefit society.”
This article is part of a series about the internet of things. Read more:
- What is the internet of things?
- Smart tech sprints forward, but the law lags behind
- ASU IoT inventions
- How to protect your privacy
- The human side of smart tech (coming soon)