It has been one hell of a year – and we are beyond delighted to now be multi-award winning in 2019! And to be shortlisted for three categories in the DataIQ Awards is the icing on the cake.
But, ultimately, the real winner is data.
With all the hype surrounding data – it’s the new oil, gold, best thing since sliced bread, etc – it is easy to lose sight of the real impact of using it intelligently. Well managed, maintained and respected data will drive better strategic decisions and deliver tangible value to businesses on a day to day basis.
To be honest, some of it isn't that sexy, but it has the potential to transform businesses and achieve outstanding results – and win awards!
Data quality is one of the bedrocks of good data management and an obvious first step towards getting the most value from data. The old adage of “rubbish in, rubbish out” has never been more relevant – or potentially costly. In these highly regulated times, all businesses need to be confident that their data is clean, accurate and complete (and compliant!). And with the technology available to manage data quality more efficiently and securely advancing almost daily, there really is no excuse.
It can be as ‘simple’ as understanding your customers better. If you know who your best customers are, what they like, and how you should be communicating with them, then your messages to them will be more personal and positively received (not just personalised). The right analytics can transform strategy and with dramatic results. And deeper insight projects can completely transform business structures and promote new and more productive ways of working – as the award-winning project we have delivered with Marie Curie UK, The Big SHIFT, so aptly demonstrates.
Using high quality third party data, from a credible supplier, can have a dramatic impact on marketing strategy – particularly when it comes to acquisition. If you understand who your best customers are (see actionable insight!), third party data can help you find more of them and engage with them in the right way – using the right channel, message and offer – to ensure more successful outcomes, as our client Titan has very successfully demonstrated.
A new marketing mix
We believe there is a new marketing mix in town – data, creativity and technology.
The companies that use the data they have to make informed decisions that drive both creativity and personalisation – and choose the right technology to put the consumer at the heart of everything they do – are in the best position to win.
As our multi-award winning year goes to show, get the balance of these right and the sky is the limit!
By Scott Logie, MD, Insight at REaD Group
I recently sat down to play a great new board game with my wife and two statistician friends (there was also lots of wine, food and great chat involved). The game is Borel: https://www.playborel.com. The basic concept of the game is fairly simple: through a series of experiments using dice, cards and coins, it seeks to find whether intuition beats statistical reasoning. Of course, there are conditions applied so that there isn’t really enough time to be too statistical, but it works as a concept.
As an example, if you were to roll three six-sided dice six times, would any consecutive rolls produce the same number? Basic stats suggest the answer should be no, but when we ran this experiment the first two numbers rolled were one, and then we rolled again and got another one. We were so flabbergasted at this that we re-ran the experiment and immediately rolled two fives. A triumph for intuition, it has to be said.
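The exact rules of the Borel experiment aren't spelled out here, but a quick Monte Carlo sketch shows how to put a number on this kind of intuition. Assuming (my assumption, not the game's published rule) that each "roll" is the total of three six-sided dice, repeated six times, and we ask whether any two consecutive totals match:

```python
import random

def roll_totals(n_rolls=6, n_dice=3, rng=random):
    """Total of n_dice six-sided dice, repeated n_rolls times."""
    return [sum(rng.randint(1, 6) for _ in range(n_dice)) for _ in range(n_rolls)]

def consecutive_match(rng=random):
    """True if any two consecutive totals in one six-roll game are equal."""
    totals = roll_totals(rng=rng)
    return any(a == b for a, b in zip(totals, totals[1:]))

def estimate(trials=100_000, seed=42):
    """Monte Carlo estimate of P(at least one consecutive repeat)."""
    rng = random.Random(seed)
    return sum(consecutive_match(rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    # Under these assumptions the repeat is well over a one-in-three
    # chance per game – far from the near-zero that intuition suggests.
    print(f"P(consecutive totals repeat) ≈ {estimate():.2f}")
```

Which is rather the point of the game: the "surprising" repeat we saw at the table is, under these assumptions at least, not surprising at all.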
It’s probably no bad thing that we don’t get to play fast and loose with money in this way. In business, the most important thing is to ensure we have adequate data to make decisions and to increase the probability of those decisions being correct.
Retail’s gut feel
Imagine looking for a new store location, but not considering the road network, the parking availability, the demographics of those who shop in the area, their disposable income and the likelihood of them buying the products you sell. That would be unthinkable: except that not so long ago, siting locations for stores was done very much on instinct. It’s only been in recent years that all these factors have come to play a part, thus ensuring that there is every probability that a new store will be in the most successful location possible.
Recently I watched an interview with one of Sports Direct’s directors. He said that their decision over which failing store groups they bid to take over was based on gut feel. While retail has a notoriously strong reputation for gut feel, I’m also pretty certain they have a formula: a way to evaluate the potential in a business to remove as much risk as possible and optimise the likelihood of success.
Note that many of these phrases are statistical by nature: every probability, remove risk, optimise success, increase the likelihood. We use stats every day, without thinking about it. Can you guarantee success?, we are often asked. No, but we can increase the probability of it happening.
Probability in marketing
As marketers, we don’t make huge decisions about investments in new stores or which companies to take over on a daily basis. But we do get measured on the success or otherwise of our campaigns, and of course we try to ensure that we weight the odds in our favour as best we can. Every day we use probability to ensure that we are delivering the right message to the right person at the right time.
Outbound communications, for example, are all about maximising returns: contacting the most relevant individuals with the minimum investment. To do this we use profiles, models and segmentations to help us understand as much as we can about our targets, remove those least likely to respond and find those who are more likely to want to buy our products.
Online, there are different ways to find the most relevant targets. Sometimes it is left to machines to help us do this, but in the background are similar algorithms, finding people (or cookies of people) who look like they browsed the same sites as those who clicked through. Even the way we find these cookies uses statistics, using probabilistic matching to try and find the same person across numerous machines.
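A toy illustration of how probabilistic matching works – not any vendor's actual algorithm, and the fields and weights below are invented for the sketch. Each field that agrees between two records adds evidence for a match, each disagreement subtracts it, and a threshold turns the score into a decision:

```python
# Illustrative agreement/disagreement weights per field. Stable, rare
# identifiers (an email hash) carry more weight than common ones (an OS).
# All numbers here are made up for demonstration purposes.
WEIGHTS = {
    "email_hash": (8.0, -2.0),
    "surname":    (4.0, -1.5),
    "postcode":   (5.0, -1.0),
    "device_os":  (1.0, -0.5),
}

def match_score(rec_a, rec_b):
    """Sum the field weights: positive when fields agree, negative when
    they disagree; missing fields contribute nothing."""
    score = 0.0
    for field, (agree_w, disagree_w) in WEIGHTS.items():
        a, b = rec_a.get(field), rec_b.get(field)
        if a is None or b is None:
            continue
        score += agree_w if a == b else disagree_w
    return score

a = {"email_hash": "x1", "surname": "logie", "postcode": "EH1", "device_os": "ios"}
b = {"email_hash": "x1", "surname": "logie", "postcode": "EH1", "device_os": "android"}
c = {"email_hash": "y9", "surname": "smith", "postcode": "G1",  "device_os": "android"}

THRESHOLD = 8.0  # arbitrary cut-off for "probably the same person"
print(match_score(a, b) >= THRESHOLD)  # same person on two devices: True
print(match_score(a, c) >= THRESHOLD)  # different people: False
```

Real cross-device matching uses far richer signals and learned weights, but the principle is the same: no single field proves identity, so the evidence is accumulated probabilistically.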
Making your marketing as good as it can be
As with all modelling, the more data we have, the more observations of an event, the more variables we can vary, then the better our decisions will be. Our mantra at REaD Group is that the more you know about an individual the better your marketing will be, which is why we are always looking for data to help us create a more complete picture of our customers’ customers. Sometimes that data is at the individual level and sometimes at the household or even the postcode they live in. In the end, though, it is all about probability and having a better chance of getting a response to a campaign.
One of the joys of Borel was that the number of observations was kept small, the number of variables was low and the time to make a decision was short. Hopefully, by adding more data, building up more history and ensuring that more information is available, we can help our clients make better decisions.
Incidentally, I won the game by the slimmest possible margin and my wife, who based everything on informed hunches, was right behind me. I’d love to think my stats background gave me the edge. But maybe we need to play again just to be sure.
This blog post originally appeared on Decision Marketing: https://www.decisionmarketing.co.uk/views/data-driven-decisions-are-better-than-a-hunch-right
By Scott Logie, MD, Insight at REaD Group
Many years ago, sometime back in the last millennium in fact, I joined what was then Bank of Scotland as head of a team called Customer Knowledge. We were part of the Strategic Marketing department and were responsible for helping the bank understand who their customers were and how better to sell to them. I think we were pretty successful; we built a life-stage based segmentation, embedded campaigns around it and saw response rates as high as 25% to some of our mail campaigns.
However, my objective when I joined had been to get on the board and be the bank's first ever Chief Data Officer (CDO). In that, I totally failed. In my five years at the bank we grew data understanding, built a full data warehouse (probably a puddle rather than a lake), created a data quality programme and ensured data was at the heart of all customer communications. We did a lot of education and got great buy-in from senior people, including the Treasurer of the Bank (the most senior executive possible). And yet, a board-level data person would never have been considered.
Therefore, it is really heartening to see Chief Data Officers in many organisations. There are probably a number of things that have driven this change. First of all, the evolution of the use of data, and the importance of data, over time. As businesses have become more digital in all aspects of what they do, this has created more and more data and that then needs responsible people to look after and manage it. Businesses will hopefully then start to see that data is an asset.
I have had a couple of interesting discussions over the last week or so around valuing data. TfL, for example, who make much of their data open to developers and app designers for free, have still put a value on that data. They know what it is worth, even if it is something they offer free of charge.
Secondly, there are the native digital businesses, the ones that started digitally rather than undergoing an evolution. For these companies, data has always been at the heart of what they do, so having someone as the data "owner" has always made sense. Without management and interrogation of their data, these companies simply wouldn't thrive and be successful. Often this role has gone hand in hand with running the technology, although the two have diverged over time.
Finally, there is the impact of GDPR. If data wasn’t being discussed at board level beforehand – and it should have been – then it is now. Suddenly it wasn’t just about data being an asset and having a value, it was also about risk. A mistake with your data could result in a massive fine, so let’s make sure someone’s got the responsibility of ensuring that doesn’t happen.
Data used to be a subset of marketing and/or IT and many tensions arose because of that. Now data is central to organisations and needs to sit alongside these disciplines as well as many others such as operations, HR and finance. The value of the data on customers, performance, staff, suppliers and so on means that the real owner of data now is the CEO and therefore having one person report in who is managing the control of the company’s data is vital.
In some ways, it was easier back before Y2K. The data was simpler, the volume of it a lot lower and the usage a lot less. The savvy businesses were those that saw the growth in data at that point and put someone in charge of the whole data estate. For them, the elevation of that person to CDO was straightforward. Those that fudged this decision and spread the responsibility around have had to react. In many cases the decision to appoint a CDO has still not been made.
My own view is that in the next 10 years this role will become more prevalent and be one of the essential roles around any balanced board table alongside Finance, Operations and HR. Data is such a vital tool for businesses to operate at their optimum capacity. Manage it well and see your profits rise and rise. Manage it poorly and not only will your competitors win but the downside risk of fines and brand exposure could be enormous.
There are lots of reasons why I would not want to be starting my career all over again but, if I was, having an ambition to be a CDO would definitely be on my list and maybe it would be more likely now than when I was starting out.
Marketers spend hours meticulously crafting the campaign message, creative and a compelling call to action. Then when it comes to the data to fuel the campaign – often the same attention to detail isn’t applied. Even the most creative campaigns can fail if the data is poor quality and inaccurate – containing gone-aways and deceased contacts or incorrect addresses for example.
Here are 5 really good reasons why data quality should be (at least) as important as the creative and CTA….
Keeps you on the right side of the GDPR data quality requirements
Keeping ALL of your data clean is now a legal requirement under the GDPR. Article 5(1)(d) is explicit: you must keep data accurate and up to date or delete it!
Reduces the risk of brand damage – does your brand want to market to deceased contacts?
Marketing to the deceased is bad practice and bad news for your business, causing unnecessary distress to relatives and risking costly damage to your brand reputation. It is so easy to avoid by using a trusted data cleaning partner, so why risk it?
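The mechanics of avoiding this are straightforward: screen the mailing file against a suppression list before anything goes to print. A minimal sketch is below – matching on name and postcode is a deliberate over-simplification (real services like TBR use far more robust matching), and the data is invented:

```python
def screen(mailing_list, suppression_file):
    """Split a mailing list into records safe to mail and records that
    match the deceased/gone-away suppression file."""
    suppressed = {(r["name"].lower(), r["postcode"]) for r in suppression_file}
    clean, removed = [], []
    for record in mailing_list:
        key = (record["name"].lower(), record["postcode"])
        (removed if key in suppressed else clean).append(record)
    return clean, removed

mailing = [
    {"name": "A Smith", "postcode": "EH1 1AA"},
    {"name": "B Jones", "postcode": "G1 2BB"},
]
suppression = [{"name": "a smith", "postcode": "EH1 1AA"}]

clean, removed = screen(mailing, suppression)
print(len(clean), len(removed))  # 1 1
```

The point is less the code than the workflow: screening is cheap, automatable and should run on every campaign, not once a year.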
The better the quality of your data, the better it will perform! Good quality data will help you increase revenue, reduce costs and make better business decisions
The phrase “rubbish in, rubbish out” is well known and often used. It is also very true. Your data is a valuable, strategic asset and maintaining its accuracy and quality should be a priority across your business. It should not be seen as a one-off task but as an ongoing process of improvement. Research by The 451 Group identified the top 5 benefits of good data quality as:
- Increased revenues
- Reduced costs
- Less time spent reconciling data
- Greater confidence in analytical systems
- Increased customer satisfaction
Your customers and prospects expect their data to be accurate
Recent research conducted by DataIQ and REaD Group confirmed that consumers expect the data held about them by brands they interact with to be accurate.
72% of consumers expect companies that hold their data to get it right every time or most of the time. But what they actually experience is a different story, with almost half of consumers stating that companies get their data wrong sometimes or more often than not! [GDPR Impact Series 2018: Accuracy and Relevance]
It has never been easier to achieve!
And while it has never been more important to your business to keep your entire database clean and accurate, it has also never been easier to implement and maintain.
With established and trusted data quality services such as GAS, TBR and GAS Reactive from REaD Group – available via a choice of flexible delivery methods to suit organisational and technology requirements – it is now achievable and affordable to optimise the accuracy of your data.
Download our handy infographic below!
In an attempt to inject some lightheartedness into GDPR (no easy feat!) we thought we’d have a go at addressing some of the regulation’s key changes…by reappropriating Dua Lipa’s recent hit, ‘New Rules’.
I’m sure Ms. Lipa never envisioned her song being used in such fashion, and might well be appalled… Anyway, let’s delve into these new rules in a bit more detail.
One – Do pick up the phone, but if they’re on TPS then leave them alone
Guidance from the ICO clearly states that individuals can still be contacted by telephone using Legitimate Interest as a legal basis; consent is not strictly needed. However, an LIA (Legitimate Interests Assessment) must be carried out which concludes that you have a legitimate interest in contacting said individual, and that they equally would have an interest in hearing from you. Likewise, it goes without saying – if they’re registered on TPS then put that phone down.
Two – Don’t let bad data in, you must do your due diligence
Three – You must clean and amend, or you’re only gonna wake up with a fine in the morning
Article 5(1)(d) is explicit about this – data must be kept up to date and accurate or be deleted. Simple as. Besides the obvious threat of a substantial fine from the ICO, perhaps more troubling for many businesses should be the potential for brand damage. Consumer expectations around data accuracy have never been higher.
Recent research conducted by REaD Group found that more than 70% of consumers expect their data to be accurate. [Source: Accuracy and Relevance – GDPR Impact Series 2018]
Continuing to market to deceased individuals and goneaways could have huge repercussions and lead to losing loyal customers. Keeping data up to date and accurate couldn’t be simpler and can be done real-time nowadays with Data as a Service (DaaS) solutions. So clean your data!
Don’t contact them – without a legal basis for processing
Whichever legal basis you choose for processing, once you have chosen it you must use it thereafter – there’s no going back. With that in mind, you might want to reconsider the misguided notion that consent is the be-all and end-all. It is often not the best basis to use. Direct Mail can be used under LI and is set to make a huge comeback – Amazon in the US (a famously online-only retailer) recently announced their intention to distribute a printed toy catalogue at Christmas time!
Respondents to MarketReach research confirmed that mail is more believable (87%), makes them feel more valued (70%) and creates a better impression of a company (70%).
While I await correspondence from Dua Lipa insisting that I never again use her songs to highlight changes in data protection law, be sure to follow the new rules – And if you don’t abide, the ICO might skin your hide! (Well, not really, but you get the idea!).
16th January 2017
By Jon Cano-Lopez, CEO at REaD Group
It’s fair to say that 2016 was another formative year for data. As the buzzword that was ‘Big Data’ has fast become yesterday’s term in the marketing lexicon, the industry has finally woken up to the array of challenges posed by the astonishing rate at which data volume is growing. The question around how to process, utilise and optimise this data has sparked intense discussion, debate and debacle over the past year. And whilst these broad questions will continue to dominate data conversations in 2017, some key issues will undoubtedly emerge throughout the course of the year. Here, we take a look at the top four data themes set to play an important part in 2017.
It may stand to reason that the more data you have, the more accurate your insights will be. But believe it or not, the explosion in volumes of data has led to increasing challenges when it comes to utilising that data optimally. Huge volumes of seemingly unrelated data are a fantastic opportunity for the marketeer, but they can be fraught with potential obstacles and traps if not used properly. New buzzwords such as Rich Data and Fast Data have emerged, which in simple terms mean that the nuggets in the data are accessible quickly.
We are also in danger of forgetting the basics. Data ‘ages’ or ‘decays’ (yes, more buzzwords) at an alarming rate – in basic terms, it becomes out of date. People change: they move house, they change life stage, they change preferences and needs, and they unfortunately die. Without an up to date and accurate view, companies waste huge resources by targeting the wrong people, with the wrong messages, through the wrong channels. Data Hygiene, the process by which businesses clean their customer databases through sophisticated algorithms and against much more robust data sets, will become even more important in 2017 as businesses seek to derive quality from quantity.
The outcome of the EU Referendum was not the only political moment to send shockwaves through the data industry in 2016. After finding itself in the eye of a tabloid storm towards the end of 2015, the charity sector has faced a year of dressing down by UK Government over its questionable use of consumer data. This has ultimately culminated in the development of a new Fundraising Preference Service, which will drastically limit the opportunities that charities have to reach out to prospects and raise funds. The government’s decision to weigh in on data use in the charity sector has set a dangerous precedent and calls into question which sector may be at the mercy of state intervention next. We’re likely to see the data industry engaging much more with the political sphere in 2017 in order to avoid a repeat scenario.
The General Data Protection Regulation (GDPR) is the widely-anticipated piece of European legislation scheduled for implementation in May 2018, and it will be a term on everyone’s lips in 2017. The new legislation is set to replace outdated data protection laws and every company in possession of EU citizen data will have to abide by the new rules. Britain’s decision to leave the EU has led to a sense of complacency in Britain’s approach to the GDPR. Many UK companies mistakenly believe Brexit will excuse them from this legislation. This is not the case and as a result preparation must start now. 2017 will be the year businesses will need to get their ducks in a row when it comes to data protection or risk fines of up to four per cent of annual global turnover.
Cold contact has long had its day. Even in the digital realm, consumers are growing increasingly fatigued by unsolicited marketing messages. Every marketer knows that a much more effective strategy is to ‘really’ target individuals properly. It’s an old phrase but ‘right person, right message, right time’ is still as true as ever. Just add ‘right channel’. Hopefully 2017 will be a year where businesses invest intelligently in collecting the right data through whatever applicable channel. Marketing to consumers that have already expressed an interest in your company’s product or service in some way, shape or form will always be more successful. As such, 2017 will be the year in which more businesses seek to generate data through a hands-up approach to data collection. This will ultimately provide more accurate and actionable insights.
It is clear that 2017 is set to be another exciting year in data. There will no doubt be more problems (sorry opportunities) as organisations contend with increasing volumes of data, impending regulations, and an unpredictable political landscape. But we can expect bigger and better things from the industry as we continue to enhance our capabilities, refine our practices and deliver greater results from data in 2017.
31st October 2016
By Scott Logie, MD, Insight at REaD Group
I recently attended an excellent conference hosted by The Insurance Network on the hot topic of customer engagement in the insurance sector. One of the interesting discussions at the roundtable focussed on purchasing data directly from the individual customer. The range of views in the room were intriguing, from “it would never happen” through to “I’d sell my data and for not very much”.
Before we explore this discussion in more detail, let’s just wind back a bit. What was universally agreed at the event was that developing an engaging relationship in the insurance sector is a tough task, and maybe one of the hardest sectors to make work. Insurance companies suffer from a lack of opportunities to build a relationship in terms of transactions, and the moments of truth are very low.
Much focus is put onto the claims process and making it as easy and seamless as possible, which is great, but only really affects between 5 and 10 per cent of customers – and probably the ones that the insurance companies don’t really want to keep. For the other 90-95% of the customer base, moments of engagement are few and far between.
At the same time, the acquisition market has become so competitive that existing customers will almost always consider looking at an aggregator to see what other deals are available.
The consequence is that customer data has become pretty valuable. Knowing the renewal dates for existing customers is really vital data for insurers. Knowing who they live with, their income, their likelihood to buy online, how many cars they have, and their hobbies can all help decide who to invest in and also which products to develop and promote. As a result, there is a large amount of money spent with third party data providers to ensure that external data is added to the limited internal data to enhance knowledge of the individual customers.
This brings us on to purchasing data directly from the consumer. It’s an area that has been looked at by different organisations in the past. Some small initiatives have been very successful, for example, incentivised gathering of opted-in email addresses with prize draws and gathering renewal dates online with the offer of reduced premiums for multiple cars in a household. However, as far as I know, there has never been a concerted effort to create an ongoing data collection programme in the insurance sector, paid for with either hard cash or discounts to the consumer. In many ways, loyalty cards in retail did exactly this. It wasn’t quite so explicit but clearly traded discounts for consumer information, which was used to build a picture of the customer and sell better to them. So why not do the same in insurance?
I guess the first challenge is around cost; how much discount could actually be offered on a car or home policy in exchange for some up to date information? £5? £10? Across a number of customers, this could become very expensive very quickly. Research has shown that we, as individuals, value our own data much more highly than it could actually be sold for. However, as I was made aware at the event, some people would be happy to trade their data for much lower sums – as little as £1, or even 50p. Maybe some testing needs to be done to see what the value point is for different groups of customers.
Another point of discussion is around the validation of data sourced directly from the consumer. One benefit of buying data from a third party is that you can be assured that there has been a validation process through the comparison of multiple data sources with any spurious results ignored. How would a company gathering data directly know if a person supplying data actually provided the correct renewal date or that they did actually drive a Lamborghini?
So perhaps after all there is a reason why third party data suppliers exist. The collation, cleaning, validation and presentation of data isn’t straightforward and requires a robust, technical process. However, as the millennial generation become the consumer power base, they will definitely understand the need to trade data for services. As such, the time is right to be looking for the best model to ensure that individual companies get the data they value, available and permissioned on as many customers as possible. Some of that will be through third parties but more should also be getting gathered directly.
20th June 2016
Big Data is the phrase on so many decision makers’ lips at the moment. It’s a great term and has been influential in elevating the importance of data, especially around the boardroom table. But to a data expert it’s essentially meaningless.
Why? Because data has always been big. Data – in particular, customer transactional data – has always been a challenge for businesses to deal with. Back in 1992, we spent a massive (for us, at the time) £16,000 on four 1GB hard drives to hold as much information as we could for our clients. These days a portable 8GB drive on Google comes in at £2.99. Data is expansive; as technology evolves to store it, unless managed it will always fill the space available. Previously, we stored what was most relevant as the capacity wasn’t there to store everything and this filtering is still going on to a certain point. It’s not the mountain of data that’s important, it’s the spoonful of gold that we need to harness and act on quickly. There will always be more coming in than companies are capable of dealing with. Because just as the amount of data coming into a business gets larger, and our capacity to collect and analyse it increases, the new technologies and touchpoints to collect it also increase exponentially.
Big Data is big business; it has big implications. It is not, however, new, and it is leading businesses astray. Big Data is a big distraction from the real issue: Fast Data. The speed at which volumes of information come into a company now mean that the vast numbers of data scientists being hired at FTSE 500s around the world find themselves bogged down in the sheer onslaught, unable to find the really important facts in the torrent (the spoonful of gold). In all the fuss over Big Data, the opportunity from Fast Data is being ignored, and this is potentially ruinous. Fast Data is where the action is. Fast Data holds those incredibly valuable hair trigger moments where a company can pinpoint a customer about to leave, and stop them, making them a more loyal contact in the process. Fast Data is where people are browsing for cars online, it is dropped e-shopping baskets. It is the brief moment where decisions are influenced, and it lives only in the now.
So if a business is struggling to deal with its data already being Big, how can it be expected to deal with a 24/7 torrent of Fast Data too? The smart business keeps itself focused. It knows what its objectives are, and will look at the information coming in signalling the trigger points that will help to achieve these. This could be making sure they know when their customers move property so valued contacts are not lost, it could be understanding that a solid retention strategy hinges on spotting just when a customer is looking to defect and knowing what those information patterns look like. A smart business also knows when it may be time to look for help from someone outside its structure, independent from the infrastructure of siloed departments and data streams, that can pull the insightful whole together from many different, disparate parts. Fast Data cannot be slowed; it loses its power. It can however, be filtered and used. And businesses bewildered by Big Data can instead be fuelled and furthered by Fast Data.
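That filtering can be sketched very simply: treat the incoming firehose as an event stream and keep only the events that match a defined trigger. The event kinds and thresholds below are invented for illustration, not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class Event:
    customer_id: str
    kind: str          # e.g. "basket_abandoned", "price_comparison_visit"
    value: float = 0.0

# Illustrative trigger rules: which events warrant immediate action.
TRIGGERS = {
    "basket_abandoned": lambda e: e.value >= 50.0,   # high-value baskets only
    "price_comparison_visit": lambda e: True,        # possible defection signal
}

def actionable(stream):
    """Filter a live event stream down to the 'spoonful of gold': the few
    events that should fire a retention or win-back contact right now."""
    for event in stream:
        rule = TRIGGERS.get(event.kind)
        if rule and rule(event):
            yield event

events = [
    Event("c1", "page_view"),
    Event("c2", "basket_abandoned", value=120.0),
    Event("c3", "basket_abandoned", value=5.0),
    Event("c4", "price_comparison_visit"),
]
print([e.customer_id for e in actionable(events)])  # ['c2', 'c4']
```

The design point is that the rules encode the business objectives up front, so the torrent is reduced to actionable moments as it arrives rather than analysed after the fact, when the moment has passed.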
10th June 2016
It is estimated that in the twelve months following an individual’s death, a whopping 110 items of marketing can be received in their name. With upwards of 580,000 people dying each year in the UK (based on ONS figures), this amounts to nearly 64 million pieces of unnecessary and damaging direct mail landing on doormats and in inboxes annually.
Receiving this mail not only upsets the family and relatives of the deceased individual, but also increases the risk of brand damage for an organisation, and creates environmental harm and monetary waste.
To tackle this problem, REaD Group launched The Bereavement Register in 2000 – a continuously evolving marketing suppression list of deceased individuals in the UK. Since then, our team of dedicated TBR heroes has worked hard to ensure the UK public is protected from this upsetting side of direct mail.
Since its origin, we have estimated that The Bereavement Register has prevented upwards of half-a-billion items of post being sent to the deceased!
“I understand first-hand the distress caused to bereaved families when direct mail is received after a person has died. I continued to receive mail addressed to my late wife, Sarah, months after her death. This was upsetting for me, but especially for our young children,” explains Mark Roy, Chairman. “I was in the perfect position to do something about it, and subsequently, TBR was born.”
The service is now used to screen over 70% of all direct mail sent in the UK, and is continuously growing.
TBR has a consumer website and phone number to call, as well as our FREEPOST TBR Folders, which are distributed to UK Registrars, funeral directors, hospitals, hospices, police liaison officers, solicitors, charities and coroners.
To find out more, please visit www.thebereavementregister.org.uk.
2nd May 2016
The cars we drive are becoming more intelligent. In fact, the average family saloon has more computing power than Apollo 11, the spacecraft that first took man to the moon! One major aspect of this intelligence is Telematics, the information your car gathers as it is driven.
Broadly, Telematics is any integrated use of telecommunications and informatics: data that is gathered and then transmitted for storage and use at a later date. In the automotive sector, Telematics is the more generic word used to describe the data that is generated by a vehicle and sent for analysis and management by the manufacturer, or other interested party. At present this data is primarily used for logging car performance for manufacturers, but it is increasingly also being used to track drivers’ habits for insurance companies.
In regular surveys, over 80% of people believe themselves to be good drivers. Sadly, most insurance companies don’t agree, and many of us still feel that we are paying over the odds to insure our cars. With the EU Gender Directive implementation now complete, insurers are increasingly deploying technology within vehicles that records driving information, allowing them to set premiums that reflect the driving style of motorists. It is now commonplace to install tracking devices into vehicles to record information that enables the insurer to diagnose the risk posed by that driver more accurately.
For example, higher speed (average or peak) might indicate greater risk; likewise greater distance covered relates to greater exposure. Other potential diagnostics include: time of day when the vehicle is being used; location; cornering at excessive speed; acceleration/deceleration and types of roads used.
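To make the idea concrete, here is a minimal sketch of how diagnostics like these might feed a simple risk score. Everything in it – the field names, thresholds and weights – is invented for illustration; real insurer models are far more sophisticated and proprietary.

```python
# Illustrative only: a toy risk score combining the telematics
# diagnostics mentioned above. All thresholds and weights are
# invented for this sketch, not real insurer logic.

def risk_score(avg_speed_mph, annual_miles, night_trip_share, harsh_brakes_per_100mi):
    score = 0.0
    # Higher average speed might indicate greater risk.
    if avg_speed_mph > 50:
        score += (avg_speed_mph - 50) * 0.5
    # Greater distance covered relates to greater exposure.
    score += annual_miles / 1000.0
    # Time of day: a larger share of late-night trips adds risk.
    score += night_trip_share * 20
    # Harsh acceleration/deceleration events per 100 miles.
    score += harsh_brakes_per_100mi * 2
    return round(score, 1)

# A cautious commuter versus a fast, high-mileage night driver.
cautious = risk_score(38, 6000, 0.05, 0.5)
spirited = risk_score(62, 15000, 0.30, 4.0)
print(cautious, spirited)
```

The point of a score like this is that it is built from observed driving behaviour, second by second, rather than from the broad proxies (age, postcode) insurers have traditionally relied on.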
The diagnostics from the Telematic black boxes can be gathered second by second for all journeys. This creates many advantages for insurers in tracking this data:
- Differential pricing
The EU Gender Directive may have created a way for female drivers to avoid large hikes in their car insurance premiums. Also, it could reduce the cost of young drivers’ insurance, allowing them to be rewarded for better driving habits.
- Driver benefits
No claims discounts can be worked out relative to mileage, rather than years, which is seen to be a more accurate exposure measure. Providing feedback to customers on their driving behaviour could encourage them to become better drivers, leading to safer roads.
Delays in reporting accidents to insurers would be reduced, while more accurate reporting of accidents – and possibly even their causes – will make a big difference.
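The mileage-based no claims discount mentioned above can be sketched in a few lines. The bands and discount rates here are made-up examples, purely to show the shape of the idea – discount growing with claim-free miles rather than claim-free years.

```python
# Illustrative only: a no claims discount worked out against miles
# driven rather than years held. Bands and rates are invented.

def mileage_based_ncd(claim_free_miles):
    """Return a discount rate that grows with claim-free mileage."""
    bands = [
        (50_000, 0.30),  # 50k+ claim-free miles -> 30% off
        (25_000, 0.20),
        (10_000, 0.10),
    ]
    for threshold, discount in bands:
        if claim_free_miles >= threshold:
            return discount
    return 0.0

# Hypothetical driver: 27,500 claim-free miles on an £800 premium.
premium = 800.0
discounted = round(premium * (1 - mileage_based_ncd(27_500)), 2)
print(discounted)
```

Because mileage is a direct measure of exposure, two drivers who have both been claim-free for three years would no longer automatically earn the same discount.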
The cost of these devices has significantly reduced since they first appeared in the late 1990s. Yet there are still only about 15 insurers offering or trialling this type of product and until the more dominant insurers enter the market, public knowledge of the technology will remain low. Currently less than 1% of car insurance policies use Telematics and with estimates ranging between 10% and 80% by 2027, it is difficult to predict where the market is heading.
In addition, these data-gathering black boxes are currently only being installed in the cars of younger drivers. There isn’t really any targeting being done to focus the use of such devices on higher-risk groups, or indeed drivers with many previous claims. Clearly this is easier where the organisation has a previous relationship, or history, with the driver. Where that isn’t the case, external data should be used to help identify prospects whose driving is likely to be careless, or whose lifestyle indicates they are more likely to be a higher risk.
From its earliest days, the insurance industry has been data-centric. In the past, insurance companies relied on historical data from policy administration solutions, claims management applications and billing systems. Today, the explosion of new data available is turning the insurance business model on its head. The growth in Telematics has had an especially large influence. Insurance is now a Big Data industry.
One thing is certain: Telematics data will play a key role in insurance pricing in the future.