THE MARKETING DATA JUNGLE

Customer & household profiling, personalization, journey analysis, segmentation, funnel analytics, acquisition & conversion metrics, predictive analytics & forecasting, … The marketing goal of delivering trustworthy and complete insight into the customer across different channels can be quite difficult to accomplish.

A substantial number of marketing departments have chosen to rely on a mix of platforms, ranging from CEM/CXM, CDP, CRM, eCommerce, Customer Service, Contact Center and Marketing Automation to Marketing Analytics. Many of these platforms are best of breed and come from a diverse set of vendors that are leaders in their specific market segment. Internal custom-built solutions (Microsoft Excel, homebrew data environments, …) typically complete this type of setup.

According to a Forrester study, although 78% of marketers claim that a data-driven marketing strategy is crucial, as many as 70% of them admit they have poor-quality and inconsistent data.


The challenges

Creating a 360° customer view across this diverse landscape is not a walk in the park. All of these marketing platforms do provide added value, but they are basically separate silos. Each environment uses different data, and the data they have in common is typically used in a different way. If you need to join all these pieces together, you need some magical super glue. The reality is that none of the marketing platform vendors actually has this in house.

Another point of attention is your data scope. We don’t need to explain to you that customer experience is the hot topic in marketing nowadays. However, marketers need to do much more than just analyze customer experience data in order to create real customer insight.

Creating insight also requires that the data you analyze goes beyond the traditional customer data domain. Combining customer data with, for example, the proper product/service, supplier and financial data is fundamental for this type of exercise. These extended data domains are usually missing, or the required level of detail is not present in any single platform.

Recent research from KPMG and Forrester Consulting shows that 38% of marketers claim to have a high level of confidence in the data and analytics that drive their customer insights. That said, only a third of them seem to trust the analytics they generate from their business operations.


The foundations

Regardless of the mix of marketing platforms, many marketing leaders don’t succeed in taking full advantage of all their data. As a logical result, they also fail to make a real impact with their data-driven marketing initiatives. The underlying reason is that many marketing organizations lack a number of crucial data management building blocks that would allow them to break out of these typical martech silos. The most important data capabilities that you should take into account are:

 

Master Data Management (aka MDM)

Creating a single view, or so-called golden record, is the essence of Master Data Management. It ensures that a customer, product or other core entity is consistent across your different applications.

Business Glossary

Having the correct terms and definitions might seem trivial, but in most organizations there is plenty of noise on the line. Crystal-clear terms and definitions are a basic requirement for all stakeholders to manage data in the same way and to prevent conflicts and waste further down the data supply chain.

Data Catalog

Imagine Google-like functionality to search through your data assets: find out what data you have, where it originates, and how and where it is being used.

Data Quality

The case for proper data quality is obvious for any data-consuming organization. In a disconnected data landscape it matters even more, because good quality data also enables the automated match-and-merge “glue” you put in place to arrive at a common view on your data assets. (A minimal match-and-merge sketch, for illustration only, follows below.)

Data Virtualization

Getting real-time access to your data in an ad hoc and dynamic way is one of the missing pieces for building your 360° view on time and on budget. Forget about traditional consumer headaches such as long waiting times, misunderstood requests and lack of agility.

We intentionally use the term capability because this isn’t an IT story. All of these capabilities have a people, process and technology aspect, and all of them should be driven by the business stakeholders. IT and technology play a facilitating role.
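To make the Master Data Management and Data Quality capabilities a bit more tangible, here is a minimal match-and-merge sketch in Python. The field names, matching rule and survivorship logic are simplifying assumptions for illustration, not a description of any particular MDM product: records from two marketing silos are matched on a normalized e-mail address and merged into a single golden record.

```python
from datetime import date

# Hypothetical customer records coming from two marketing silos.
crm = {"email": "An.Peeters@example.com ", "name": "An Peeters",
       "city": "Gent", "updated": date(2019, 3, 1)}
ecommerce = {"email": "an.peeters@example.com", "name": "A. Peeters",
             "city": "Ghent", "updated": date(2020, 6, 15)}

def match_key(record: dict) -> str:
    """Deterministic matching on a cleaned e-mail address (a real MDM hub
    would add fuzzy matching on name, address, phone, ...)."""
    return record["email"].strip().lower()

def merge(records: list[dict]) -> dict:
    """Naive survivorship rule: values from the most recently updated
    record overwrite older ones."""
    golden = {}
    for record in sorted(records, key=lambda r: r["updated"]):
        golden.update(record)
    return golden

assert match_key(crm) == match_key(ecommerce)  # same customer in both silos
print(merge([crm, ecommerce]))  # one golden record per matched customer
```

In practice the matching would be fuzzy and rule- or ML-driven, and survivorship would be defined per attribute and per source system rather than per record.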


The results

If you manage to put the described data management capabilities in place, you essentially get in control. Your organization can find, understand and make data useful. You improve the efficiency of your people and processes, and reduce your data compliance risks. The benefits in a nutshell:

  1. Get full visibility of your data landscape by making data available and easily accessible across your organization. Deliver trusted data with documented definitions and certified data assets, so users feel confident using the data. Take back control with an approach that delivers everything you need to ensure data is accurate, consistent, complete and discoverable.
  2. Increase the efficiency of your people and processes. Improve data transparency by establishing one enterprise-wide repository of assets, so every user can easily understand and discover the data relevant to them. Increase efficiency using workflows to automate processes, helping improve collaboration and the speed of task completion. Quickly understand your data’s history with automated business and technical lineage that shows you clearly how data transforms and flows from system to system and from source to report.
  3. Reduce data and compliance risks. Mitigate compliance risk by setting up data policies to control data retention and usage that can be applied across the organization, helping you meet your data compliance requirements. Reduce data risk by building and maintaining a business glossary of approved terms and definitions, helping ensure clarity and consistency of data assets for all users.

42% of data-driven marketers say the technology they currently have in place is out of date and insufficient to help them do their jobs. (Walker Sands Communications, State of Marketing Technology report)



Conclusion

The data you need to be successful with your marketing efforts is there. You just have to transform it into usable data so that you can get accurate insights and make better decisions. The key in all of this is getting rid of your marketing platform silos by making sure that you have the proper data foundations in place: foundations that speed up and extend the capabilities of your data-driven marketing initiatives.


Need help unlocking your marketing data?

Would you like to find out how Datalumen can also help you with your marketing & data initiatives?  Contact us and start our data conversation.

CHANGE & DATA GOVERNANCE – TAKE A LEAP FORWARD

A successful data governance initiative is based on properly managing the People, Process, Data & Technology square. The most important element of these four is undoubtedly People. The reason is that, in the end, it comes down to the people in your organization having to act in a new business environment. This always implies change, so make sure that you have an enabling framework for managing the people side of change as well. Prepare, support and equip individuals at different levels in your organization to drive change and data governance success.

Change: the critical ingredient for data governance success


Change is crucial to the success or failure of a data governance initiative for two reasons:

1. First of all, you should realize that with data governance you are going to tilt an organization. What we mean by this is that the situation before data governance is usually a silo-oriented organization. Individual employees, teams, departments, etc. are the exclusive owners of their systems and associated data. With the implementation of data governance you will tilt that typical vertical data approach and align data flows with business processes that also run horizontally through the entire organization. This means that you need to help the organization arrive at an environment where the data sharing & collaboration concept is the new normal.

2. The second important reason is the so-called data governance heartbeat. What we see in many organizations is that there is a lot of enthusiasm at the start of a program. However, without the necessary framework, read: a change management plan, you run the fundamental risk that such an initiative will eventually die a silent death. People lose interest, no longer feel involved, no longer see the point of it. From that perspective, it is necessary to create a framework that keeps data governance’s heart beating.

How to approach change?


Change goes beyond training & communication. To facilitate the necessary changes, ChangeLab and Datalumen designed the ADKAR-based LEAP approach. LEAP is an acronym that stands for Learn, Envision, Apply & Poll. Each of these steps helps realize successful and lasting change.


Need help covering change in the context of your data initiatives?

Would you like to find out how Datalumen can also help you with your Data Governance initiative?  Contact us and start our data conversation.




CALCULATING DATA GOVERNANCE ROI

TOP 5 DATA GOVERNANCE MISTAKES & HOW TO AVOID THEM

The importance of data in a digital transformation context is known to everyone. But actually getting control of and properly governing this new oil does not happen automatically. In this article we summarize the top 5 Data Governance mistakes and give you a number of tips on how to avoid them.

1. Data Governance is not business driven

Who is leading your Data Governance effort? If your initiative is driven by IT, you dramatically limit your chance of success. Data Governance is a company-wide initiative and needs both business & IT support. It also needs support from the different organizational levels. Your executive level needs to openly express support in different ways (sponsorship but also communication). However, this shouldn’t be a purely top-down initiative, and all other involved levels will also need to be on board. Keep in mind that they will make your data organization really happen.

2. Data Maturity level of your organization is unknown or too low

Being aware of the need for Data Governance is one thing. Being ready for Data Governance is a different story. In that sense it is crucial to understand the data maturity level of your organization.  

There are several models to determine your data maturity level, but one of the most commonly used is the Gartner model. Surveys reveal that 60% of organizations rank themselves in the lowest 3 levels. Referring to this model, your organization should be close to (or beyond) the systematic maturity level. If you are not, make sure to fix this first before taking the next steps in your initiative. You need to have these basics properly in place. Without this minimum level of maturity, it doesn’t really make sense to take the next steps. You don’t build a house without the necessary foundations.
3. A Data Governance Project rather than Program approach

A substantial number of companies tend to start a Data Governance initiative as a traditional project. Think of a well-defined structure: the effort and duration are well known, the benefits have been defined, … When you think about Data Governance, or data in general, you know that’s not the case. Data is dynamic, ever changing and has far more touch points. Because of this, a Data Governance initiative doesn’t fit a traditional, focused project management approach. What does fit is a higher-level program approach in which you define a number of project streams that each focus on one particular area. Some of these streams can have a defined duration (e.g. implementation of a business glossary). Others (e.g. change management) can have a more ongoing character.

4. Big Bang vs Quick Win approach

Regardless of the fact that you have a proper company-wide program in place, you have to make sure that you focus on the proper quick wins to inspire buy-in and help build momentum. Your motto should not be Big Bang but rather Big Vision & Quick Wins.

Data Governance requires involvement from stakeholders at all levels. As a result, you need to make clear to everyone what your strategy & roadmap look like.

With this type of program you need the required enthusiasm when you take your first steps. It is key that you keep this heartbeat in your program, and for that reason you need to deliver quick wins. If you don’t, you strongly risk losing traction. Successfully delivering quick wins helps you gain credit and support for future steps.

5. No 3P mix approach

Data Governance has important People, Process and Platform dimensions. It’s never just one of these and requires that you pay the necessary attention to all of them.

  • When you implement Data Governance, people will almost certainly need to start working in a different way. They may need to give up exclusive data ownership, … all elements that require strong change management.
  • When you implement Data Governance, you tilt your organization from a system-silo point of view to a data-process perspective. The ownership of your customer data no longer sits with just the CRM owner or a Marketing Manager, but with all the key stakeholders involved in customer-related business processes.
  • When you want to make Data Governance a success, you need to make it as efficient and easy as possible for every stakeholder. This implies that you should also think thoroughly about how you can facilitate them in the best possible way. Typically, this means looking beyond traditional Excel, SharePoint and wiki-type solutions and looking into platforms that support your complete Data Governance community.



Also in need of data governance?

Would you like to know how Datalumen can also help you get your data agenda on track?  Contact us and start our data conversation.

THE GDPR BUSINESS VALUE ROADMAP

Getting a good understanding of not only the requirements but also the opportunities and business value is not easy. We designed a GDPR business value roadmap to help you with this and to clarify what capabilities you need to get the job done.


Step 1

  • How will you understand what in-scope data is used for, by whom and for what purpose?
  • How will you demonstrate how you’re aligning to the principles?
  • Is your approach mostly manual, using interviews, questionnaires & static documentation?
  • Is your approach inaccurate, time-consuming, resource-intensive, out of date, or all of these?


Step 2

  • Do you understand where in-scope data is across your organisation and how it is shared?
  • How will you demonstrate you understand the size & shape of the data problem across domains and data subjects?
  • Is your approach mostly manual, using interviews, questionnaires & static documentation?
  • Is this approach inaccurate, time-consuming, resource-intensive, out of date, or all of these?

Step 3

  • How will you capture, manage and distribute consents across channels and business units?
  • How will you demonstrate you have captured the lawfulness of processing across all in-scope data sources?
  • Do you have anything in place already? Or are you planning on extending existing preferences capabilities?

Step 4

  • How will you put protections and controls around identified in-scope data?
  • Can you demonstrate you have relevant control over the relevant in-scope data?
  • Are you planning to manually apply controls? Or apply masking, deletion & archiving solutions as required?
  • Will this approach give you a holistic view around the protections & controls you have in place?





Complete the form and download this Datalumen infogram (A3 PDF).



The Datalumen privacy policy can be consulted here.

More info on our Advisory Services?

Would you like to know what Datalumen can also mean to your GDPR or other data governance initiatives?

Have a look at our GDPR or Data Governance services, or contact us and start our Data Conversation.



CDO EXCHANGE 2020 – KEY TAKEAWAYS

Given the circumstances we all have to face nowadays, CDO Exchange 2020 was organized last week in an adjusted, online way. Regardless of the non-traditional approach, it still turned out to be an interesting forum for leaders active in the data domain (Chief Data Officers and others). The event was chaired by Bloor Research’s Brian Jones.

The event opened with a strong focus on data program fundamentals such as stakeholder management and business case creation. The second part was all about AI and data-driven value creation, including the necessary attention to the important topic of data ethics.

Key Takeaways


  • No data program or initiative without a purpose. Sounds basic, but it is so true. It’s not the first time that a data initiative is kicked off because it’s innovative but in the end doesn’t address any real business need.
  • In order to be successful you need to inform business leaders at all levels, not just the C-level but all leadership in your business stakeholder community. Of course, these business leaders also need to be open to listening. Reading tip: also see our post about MDM business case building.
  • Correctly translating business needs is important, but understanding the right priorities is also key. What are my real burning data issues?
  • In the AI and ML context, basically data science, we are happy to see that a substantial number of business leaders have managed to gain an understanding of the most important principles of data science. Pay the necessary attention to this in your organization and make sure that you also remove the data & data science ‘language barrier’.
  • Don’t forget the company politics. In some organizations you will need to cope with individuals attempting to sabotage constructive change if it is ‘not invented here’ or not ‘owned by us’. This point of attention was valid before, but unfortunately it is still there and definitely present in the context of data programs.
  • Ethics is not a choice, it’s an obligation. Data can be powerful and can potentially deliver huge value. Besides this potential, it also comes with a duty of care. In an era where customer centricity is vital, you need to make sure that your data management is objective, trustworthy and transparent to your customers and other stakeholders.
  • Ethics carry a cost. However, the cost of not doing things right is much higher. Think about the overall and longer-term reputational and commercial cost.



Do you have a data management question or do you require some level of support with a data initiative in your organization? Feel free to reach out to us and schedule a free sync session.



SUMMER READING TIP

Summer is here and the longer days it brings mean more time to spend with a ripping read. That’s how it ideally works, at least. We selected 3 valuable books worth your extra time.

 

The Chief Data Officer’s Playbook

The issues and profession of the Chief Data Officer (CDO) are of significant interest and relevance to organisations and data professionals internationally. Written by two practicing CDOs, this new book offers a practical, direct and engaging discussion of the role, its place and importance within organisations. Chief Data Officer is a new and rapidly expanding role, and many organisations are finding that it is an uncomfortable fit in the existing C-suite. Bringing together views, opinions and practitioners’ experience for the first time, The Chief Data Officer’s Playbook offers a compelling guide to anyone looking to understand the current (and possible future) CDO landscape.



Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility

Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility, the first book ever written on the topic of data virtualization, introduces the technology that enables data virtualization and presents ten real-world case studies that demonstrate the significant value and tangible business agility benefits that can be achieved through the implementation of data virtualization solutions. The book introduces the relationship between data virtualization and business agility but also gives you  a more thorough exploration of data virtualization technology. Topics include what is data virtualization, why use it, how it works and how enterprises typically adopt it. 



Start With Why

Simon Sinek started a movement to help people become more inspired at work, and in turn inspire their colleagues and customers. Since then, millions have been touched by the power of his ideas, including more than 28 million who’ve watched his TED Talk based on ‘Start With Why’ — the third most popular TED video of all time. Sinek starts with a fundamental question: Why are some people and organizations more innovative, more influential, and more profitable than others? Why do some command greater loyalty from customers and employees alike? Even among the successful, why are so few able to repeat their success over and over? 
 
People like Martin Luther King, Steve Jobs, and the Wright Brothers had little in common, but they all started with Why. They realized that people won’t truly buy into a product, service, movement, or idea until they understand the Why behind it.  ‘Start With Why’ shows that the leaders who’ve had the greatest influence in the world all think, act, and communicate the same way — and it’s the opposite of what everyone else does. Sinek calls this powerful idea The Golden Circle, and it provides a framework upon which organizations can be built, movements can be led, and people can be inspired. And it all starts with Why.



Summer Giveaways

We’re giving away 50 copies of ‘Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility’.  Want to win? Just complete the form and cross your fingers. Good luck!


Winners are picked randomly at the end of the giveaway. Our privacy policy is available here.

RABOBANK GIVES CUSTOMERS ANIMAL & PLANT NAMES TO ADDRESS GDPR REQUIREMENTS

The Dutch bank Rabobank has implemented a creative way of using customer data without having to request permission. If you are one of their customers and they use your data in internal tests to develop new services, there is a chance that you will get a different name. With special software the data is pseudonymized, and they do so with Latin plant and animal names.

Your first name will become, for example, Rosa arvensis, the Latin name of a forest rose, and your street name Turdus merula, the scientific name of a blackbird. It is a useful solution for the bank to be somewhat in line with the General Data Protection Regulation (GDPR) that takes effect on the 25th of May. When developing applications or services, analyzing data or executing marketing campaigns based on PII (Personally Identifiable Information) data, companies are required to have explicit consent. In order to be able to do this after May without getting your consent, the bank uses data masking / pseudonymization techniques.

 

Explicit consent & pseudonymization

With the new privacy law, the personal data of citizens is better protected. One of the cornerstones of the GDPR is the requirement to get explicit consent and, linked to that, a purpose. Even with a general consent, companies do not get carte blanche to do whatever they want with your data. Organizations must explain how data is used and by whom, where it is stored and for how long (more info about GDPR). Companies can work around these limitations if they anonymize / pseudonymize this PII data, because they can still use and valorize the data but without a direct and obvious link to you as a person. You as a person become unrecognizable, but your data remains usable for analysis or tests.


Why scientific animal and plant names?

“You cannot use names that are traceable to the person according to the rules, but if it is a requirement to use letters for names, you have to come up with something else,” explains the vendor that delivered the software. “That’s how we came up with flower names: you cannot confuse them, but they look like names to the system. Therefore, it is not necessary for organizations to change entire programs to comply with the new privacy law.”°

Note that data anonymization / pseudonymization technology does not require you to use plant and animal names. Most implementations of this type convert real names and addresses to fictitious ones that better reflect reality and perhaps also better match the usage requirements (e.g. specific application testing requirements). Typically, substitution techniques are applied in which a real name is replaced with another realistic name.

 

Takeaways

Pseudonymization vs anonymization

Pseudonymization and anonymization are two distinct terms that are often confused in the data security world. With the advent of the GDPR it is important to understand the difference, since anonymized data and pseudonymized data fall under very different categories in the regulation. Pseudonymization and anonymization differ in one key aspect. Anonymization irreversibly removes any way of identifying the data subject. Pseudonymization substitutes the identity of the data subject in such a way that additional information is required to re-identify the data subject. With anonymization, the data is cleansed of any information that may be an identifier of a data subject. Pseudonymization does not remove all identifying information from the data, but only reduces the linkability of a dataset to the original identity (using e.g. a specific encryption scheme).

 

Pseudonymization is a method to substitute identifiable data with a reversible, consistent value. Anonymization is the destruction of the identifiable data.
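To make the distinction concrete, here is a minimal, illustrative Python sketch. It is not how Rabobank or any specific vendor implements this; the record fields, key handling and token length are assumptions purely for demonstration. Pseudonymization replaces the identifier with a consistent, keyed token that can only be traced back with additional information, while anonymization simply destroys the identifier.

```python
import hashlib
import hmac

# Hypothetical example records; field names are illustrative only.
customers = [
    {"id": 1, "name": "Jan Janssens", "city": "Antwerpen"},
    {"id": 2, "name": "An Peeters", "city": "Gent"},
]

SECRET_KEY = b"keep-this-key-out-of-the-dataset"  # held separately from the data

def pseudonymize(value: str) -> str:
    """Consistent, keyed substitution: the same input always yields the same
    token. Re-identification requires additional information (the key plus the
    original values, or a separately stored mapping table)."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

def anonymize(record: dict) -> dict:
    """Irreversible: the direct identifier is simply destroyed."""
    return {"city": record["city"]}  # keep only non-identifying attributes

pseudonymized = [{**c, "name": pseudonymize(c["name"])} for c in customers]
anonymized = [anonymize(c) for c in customers]

print(pseudonymized)  # names replaced by stable tokens, joins remain possible
print(anonymized)     # identifiers gone for good
```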

 


Only for test data management?

You will need to look into your exact use cases and determine what techniques are the most appropriate ones. Every organization will most likely need both. Here are some use cases that illustrate this: 


Use case: Your marketing team needs to set up a marketing campaign and will need to use customer data (city, total customer value, household context, …).
Functionality: Depending on the consent that you received, anonymization or pseudonymization techniques might need to be applied.
Technique: Data Masking

Use case: You are currently implementing a new CRM system and have outsourced the implementation to an external partner.
Functionality: Anonymization needs to be applied. The data (including the sensitive PII data) that you use for test data management purposes will need to be transformed into data that cannot be linked to the original.
Technique: Data Masking

Use case: You are implementing a cloud-based business application and want to make sure that your PII data is really protected. You even want to prevent the IT team of your cloud provider (with full system and database privileges) from accessing your data.
Functionality: Distinct from data masking, data encryption translates data into another form, or code, so that only people with access to a secret key or password can read it. People with access but without the key will not be able to read the real content of the data.
Technique: Data Encryption

Use case: You have a global organization that also services EU clients. Due to the GDPR, you want to prevent your non-EU employees from accessing data of your EU clients.
Functionality: Based on role and location, dynamic data masking accommodates data security and privacy policies that vary based on users’ locations. Data encryption can also be set up to facilitate this.
Technique: Data Masking / Data Encryption

Use case: You have a brilliant team of data scientists on board. They love to crunch all your Big Data and come up with the best analysis. In order to do that, they need all the data you possibly have. A data lake also needs to be in line with what the GDPR specifies.
Functionality: Depending on the usage, you may need to implement anonymization or pseudonymization techniques.
Technique: Data Masking
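To illustrate the test data management use case above, the sketch below uses the open-source Faker library to substitute real identifiers with realistic fictitious values. The locale, field names and seeding approach are assumptions for the example, not a description of a full test data management product; deriving the seed from the original value keeps the substitution consistent, so referential integrity across test tables is preserved.

```python
import hashlib
from faker import Faker  # pip install Faker

def stable_seed(value: str) -> int:
    # Deterministic seed so the same original value always gets the same fake value
    return int.from_bytes(hashlib.sha256(value.encode()).digest()[:8], "big")

def mask_customer(record: dict) -> dict:
    """Replace direct identifiers with realistic fictitious values while
    keeping non-identifying fields (id, ...) untouched."""
    fake = Faker("nl_BE")  # locale chosen for illustration
    fake.seed_instance(stable_seed(record["name"]))
    return {
        **record,
        "name": fake.name(),
        "street": fake.street_address(),
        "email": fake.email(),
    }

source_row = {"id": 42, "name": "Jan Janssens",
              "street": "Kerkstraat 1", "email": "jan@example.com"}
print(mask_customer(source_row))  # same id, substituted identifiers
```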

 

Is pseudonymization the golden GDPR bullet?

Pseudonymization or anonymization can be one aspect of a good GDPR approach. However, it is definitely not the complete answer, and you will also need to look into a number of other important elements:

  • Consent Management

    Key to the GDPR is consent and the linked purpose dimension. In order to manage the complete consent state, you need to make sure that this information is available to all your data consumers and automatically applied. You can use consent mastering techniques such as master data management and data virtualization for this purpose.



  • Data Discovery & Classification

    The GDPR is all about protecting personal data. Do you know where all your PII-type data is located? Data discovery will automatically locate and classify sensitive data and calculate risk/breach cost based on defined policies. (A minimal discovery sketch, for illustration only, follows this list.)

  • Data Register

    A data register is also a key GDPR requirement. You are expected to maintain a record of processing activities under your responsibility or, in other words, you must keep an inventory of all personal data processed. The minimum information goes beyond knowing what data an organization processes. It should also include, for example, the purposes of the processing, whether or not the personal data is exported, and all third parties receiving the data.

    A data register that is integrated in your overall data governance program and that is linked with the reality of your data landscape is the recommended way forward.
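As a simple illustration of the data discovery idea above, the sketch below scans the columns of a small table for values that look like e-mail addresses or phone numbers and flags them as candidate PII. The patterns and the match threshold are assumptions; real discovery tooling goes much further with profiling, reference data and machine learning.

```python
import re

# Illustrative regexes only; production discovery tools use far richer detection.
PII_PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "phone": re.compile(r"^\+?\d[\d /.-]{7,}$"),
}

def classify_columns(rows: list[dict], threshold: float = 0.8) -> dict[str, str]:
    """Flag a column as candidate PII when most of its values match a pattern."""
    findings = {}
    columns = rows[0].keys() if rows else []
    for col in columns:
        values = [str(r[col]) for r in rows if r.get(col)]
        for label, pattern in PII_PATTERNS.items():
            matches = sum(bool(pattern.match(v)) for v in values)
            if values and matches / len(values) >= threshold:
                findings[col] = label
    return findings

sample = [
    {"id": 1, "contact": "jan@example.com", "gsm": "+32 470 12 34 56"},
    {"id": 2, "contact": "an@example.org", "gsm": "0471/23.45.67"},
]
print(classify_columns(sample))  # {'contact': 'email', 'gsm': 'phone'}
```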




° Financieele Dagblad

Also in need of data masking or encryption?

Would you like to know how Datalumen can also enable you to use your data assets in line with the GDPR?

Contact us and start our Data Conversation.

EUROPEAN RETAILERS ARE MISSING 35% OF SALES DUE TO LACK OF PRODUCT INFORMATION

The holiday season is the most important sales moment of the year. Nevertheless, a Zetes study reveals that retailers miss about 35 percent of sales due to products not being immediately available or a lack of product information.

A quarter of consumers leave a store without actually buying anything if they do not immediately see the product that they are looking for or if it is not immediately available. That is one of the conclusions of market research conducted by supply chain expert Zetes. The study specifically focused on buyer behavior during the annual peak period between November and January and analyzed both physical and online retail. 120 European retailers and over 2,000 consumers were interviewed for this study in the January 2018 timeframe.

Time is money

The study states that stores miss 35 percent of sales due to the unavailability of products. The main reason for this is the expectation of the customer: more than in the past, customers will simply leave a store if they do not immediately see a product and they do not bother to talk to a shop assistant. 

If customers do, however, approach a shop assistant, they expect to receive more information about product availability within two minutes. That is a rather limited window of opportunity, especially if you know that the study also calculated that 51 percent of shop employees need to go to a cash desk to obtain the necessary information, and that 47 percent also need to check the warehouse to verify the availability of a product. Both actions cost time, and time is very expensive during the peak period. The study also reveals that 62 percent of retailers do not have access to real-time product data.

Return Management

Deliveries and returns also typically cause extra problems during these busy months. It is common knowledge that people tend to buy more quickly when they are sure that they can return a product if needed. The processing of returned parcels also causes problems: 26 percent of retailers indicate that they have problems during the peak, with the result that only 39 percent of the returned goods are available for sale within 48 hours.

Conclusion

“A lack of visibility of data is at the core of these sales problems during the holidays,” the report states. “Consumers want choices, and they want to be informed. Instead of a general ‘not available’ message, a retailer has a much greater chance of securing the sale by telling the customer that a product will soon be back in stock and delivered within three days, or will be available for click & collect.” There is still significant room for information management improvement, with direct sales optimization as a result.

More information
https://www.zetes.com/en/white-papers

Also in need of real-time product information?

Would you like to know how Datalumen can also enable you to get real-time product information?

Contact us and start our Data Conversation.