A convergence of mobile technology, the internet of things, and artificial intelligence is creating unprecedented big data opportunities. Susan McKiernan is a technology lawyer and counsel at Hogan Lovells.

Big data analysis is not a new concept in the insurance sector, and insurers have long been looking at how to leverage data for better insights into customers and more accurate risk assessment. But current innovation – in particular, the convergence of mobile technology, the internet of things (IoT) and artificial intelligence – is creating unprecedented opportunities for businesses in the sector.

The ability to connect devices - such as telematics boxes in cars, sensors in homes, drones, healthcare devices and even tech-enabled clothing - with automated systems has allowed a wider pool of data to be accessed and analysed in ways not previously possible, including without human intervention. This enables businesses to assess and price risk in a different way. Technology is also impacting how customers are put on risk, with the use of algorithms to help streamline application and underwriting processes.

This combination is driving many of the current trends and disruption in the insurance sector. Insurance is being combined with IoT to enable risk prevention or reduction (e.g. telematics insurance to encourage safer driving or companies like Neos that are providing home insurance with sensors). We are seeing a more modular and flexible approach to insurance cover, with new companies like Trōv, Cuvva and Vrumi offering on-demand insurance products with instant cover. There is also the ability to offer more personalised cover based on profiling, e.g. Root car insurance provides a personalised quote after a period of monitoring driving habits, and Sherpa and Gen Re recently announced plans to insure an individual for all risks.

Such technological innovation is not without its challenges, however.

An obvious legal concern will be ensuring compliance with data protection legislation, and much has been written about the General Data Protection Regulation (the GDPR) and the stricter regime that will apply under it from 25 May 2018. Any business undertaking profiling – the automated processing of personal data and use of that data to evaluate certain personal aspects of an individual, including analysing or predicting their economic situation, health, personal preferences, behaviour or movements – or other decision-making without human involvement will need to take account of the specific rights and restrictions under the GDPR and ensure its processes and systems support them.

For example, at the point personal data is collected, the individual must be told if a decision will be made by automatic means and given information about the logic involved and the consequences for them. Subject to exceptions, an individual can object to a decision based solely on automated processing that significantly affects them – it is foreseeable that the automatic refusal of buildings or flood insurance, impacting the ability to secure a mortgage, could fall into this category. The business must have a process that allows the individual to ask for human intervention, express their point of view, seek an explanation of the decision and challenge it. This could mean having to explain and re-assess, for example, the decisions made by the complex algorithms underpinning a fully automated application process.
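By way of illustration, an automated underwriting check might record the factors that drove each decision alongside the outcome, so that the "logic involved" can later be explained to the individual and re-assessed by a human reviewer. The sketch below is purely illustrative: the factor names, thresholds and routing are hypothetical, not taken from any real underwriting system.

```python
# Hypothetical sketch: an automated underwriting rule that records the
# inputs and reasons behind each decision, so a human reviewer can later
# explain the logic and re-assess the outcome if the individual objects.

def assess_flood_risk(postcode_flood_score: float, prior_claims: int) -> dict:
    """Return a decision plus the reasons behind it (all thresholds illustrative)."""
    reasons = []
    if postcode_flood_score > 0.8:
        reasons.append(f"flood score {postcode_flood_score} exceeds 0.8 threshold")
    if prior_claims >= 3:
        reasons.append(f"{prior_claims} prior claims meets or exceeds limit of 3")
    return {
        # Route borderline cases to a person rather than refusing automatically,
        # supporting the right to human intervention.
        "decision": "refer_to_human" if reasons else "offer_cover",
        "reasons": reasons,  # the 'logic involved', in a reviewable form
        "automated": True,
    }

result = assess_flood_risk(postcode_flood_score=0.9, prior_claims=1)
```

The point of the design is that a refusal is never an unexplained black box: every adverse outcome carries the specific factors that produced it.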

Another right that systems will have to accommodate is an individual's right to receive their personal data in a "commonly used and machine-readable format" that lets them take their data elsewhere. This 'data portability' right is fairly limited under the GDPR, only applying to data the individual has provided to the data controller and that is processed by automated means based on the individual's consent or for the performance of a contract. However, recent guidelines from the Article 29 Working Party (the group that represents EU data protection regulators) suggest this will cover not only data that the individual has knowingly handed over but could include all "observed data", such as location data and raw data transmitted by a wearable device.
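In practice, supporting the portability right means being able to export both provided and observed data in a structured, commonly used format such as JSON or CSV. A minimal sketch of what that might look like, assuming hypothetical field names for a wearable's readings:

```python
import csv
import io
import json

# Illustrative records: data the individual provided, plus 'observed' data
# (e.g. readings transmitted by a wearable). All field names are hypothetical.
provided = {"name": "A. Customer", "policy": "HOME-123"}
observed = [
    {"timestamp": "2018-05-25T09:00:00Z", "steps": 4200},
    {"timestamp": "2018-05-25T10:00:00Z", "steps": 5100},
]

# JSON export: a structured, machine-readable bundle of both categories.
export_json = json.dumps({"provided": provided, "observed": observed}, indent=2)

# CSV export of the observed data: a commonly used tabular alternative.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["timestamp", "steps"])
writer.writeheader()
writer.writerows(observed)
export_csv = buf.getvalue()
```

Either format would let the individual take the data to another controller; the harder engineering question is usually assembling the "observed" records from the devices and systems that hold them.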

A business must understand its supply chain so that it can tell a customer how data is being used and with whom data is being shared. This is not a new requirement, but supply chains for the collection and use of data from IoT can often be complicated, and increased obligations around transparency and accountability under the GDPR, coupled with substantial fines for failure to comply, mean that the stakes are higher.

While the use of automation can bring objectivity and accuracy to data analysis, there are many reported examples of racist and sexist outcomes from the use of algorithms. Businesses need to be aware of the potential for bias in algorithms and ensure that there is no unlawful discrimination against individuals. Under the GDPR, data controllers using automated decision-making must take measures to prevent discrimination based on (among other things) health status, and should use "appropriate mathematical or statistical procedures" for any profiling.
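One simple statistical procedure, for example, is to compare the rate of favourable outcomes across groups and flag large disparities for review. The sketch below uses illustrative data, and the 80% threshold is a common rule of thumb from US employment-discrimination practice, not a GDPR requirement:

```python
# Sketch of a disparate-impact check: compare the rate of favourable
# outcomes (e.g. cover offered) across two groups and flag large gaps.
# Data and the 0.8 threshold are illustrative only.

def approval_rate(decisions):
    """Share of favourable outcomes in a list of 1 (offered) / 0 (refused)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% offered cover
group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # 37.5% offered cover

ratio = disparate_impact_ratio(group_a, group_b)
needs_review = ratio < 0.8   # 'four-fifths' rule of thumb
```

A check like this does not prove or disprove unlawful discrimination, but it gives a business an early, auditable signal that a model's outputs warrant closer scrutiny.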

Data protection issues aside, there is the wider question about what data use a customer is willing to tolerate in exchange for a cheaper product or cover. Last year’s backlash against Admiral Insurance’s plans to price car insurance based on Facebook posts showed that a comfortable balance has still to be found: companies that are looking to push the boundaries too far risk a negative response from the market.

In addition, FCA-regulated businesses still need to comply with FCA rules, including giving customers the right information and meeting the FCA's requirements to be fair, clear and not misleading. If data is being collected and processed via cloud services, which is often the case if a business has outsourced data collection and analytics, the FCA guidance on cloud services will also be relevant.

Other considerations around IoT should be reliability and liability. Security continues to be a major concern, with recent cyber-attacks demonstrating the extensive security vulnerabilities in many 'smart' devices. However, there are more basic concerns too, such as the battery life of a device and the reliability of connectivity. If insurance premiums are linked to the data from a telematics box or the outputs from a fitness tracker, what are the consequences for the policy if the data cannot be collected or is incomplete or unreliable? This needs to be considered and provided for at the product-development stage and when drafting the terms and conditions.

Similarly, where an insurer partners with an organisation that is providing the connected devices or the operational infrastructure, who bears the risk if things do not work as planned? If a burst pipe in your connected home fails to trigger a notification, there could be any number of points of failure, such as a failure in the sensor on the pipe (or a component part of the sensor provided by a different manufacturer), in the internet connection, in the operating system that sends alerts to your mobile, or in the mobile app that displays the notification. It will be important to consider who is most likely to be responsible for loss arising, most able to control the risk and/or most able to mitigate the risk, and apportion contractual liability accordingly.

As with any innovation, a business needs to balance the commercial benefits with the risks and regulatory obligations. But as we look to a future that promises ever-increasing automation and connectivity, and the potential for insurers to access an even wider pool of data and smarter machines, we can only expect to see activity in this area increase.