CDO EXCHANGE 2020 – KEY TAKEAWAYS

Given the circumstances we all currently face, CDO Exchange 2020 was organized last week in an adjusted, online way. Despite the non-traditional approach, it still turned out to be an interesting forum for leaders active in the data domain (Chief Data Officers and others). The event was chaired by Bloor Research’s Brian Jones.

The event opened with a strong focus on data program fundamentals like stakeholder management and business case creation. The second part was all about AI and data-driven value creation, including the necessary attention to the important topic of data ethics.

Key Takeaways


  • No data program or initiative without a purpose. Sounds basic, but it is so true. It’s not the first time that a data initiative is kicked off because it’s innovative but in the end doesn’t address any real business need.
  • In order to be successful you need to inform business leaders at all levels. Not just the C-level but all leadership in your business stakeholder community. Of course, these business leaders also need to be open to listening. Reading tip: also see our post about MDM business case building.
  • Correctly translating business needs is important, but understanding the right priorities is just as key. What are my real burning data problems?
  • In the AI and ML (basically, data science) context, we are happy to see that a substantial number of business leaders have managed to gain an understanding of the most important principles of data science. Pay the necessary attention to this in your organization and make sure that you also remove the data & data science ‘language barrier’.
  • Don’t forget company politics. In some organizations you will need to cope with individuals attempting to sabotage constructive change if it was ‘not invented here’ or not ‘owned by us’. This point of attention was valid before, but unfortunately it is still there and definitely present in the context of data programs.
  • Ethics is not a choice; it’s an obligation. Data can be powerful and can potentially deliver huge value. Besides this potential, it also comes with a duty of care. In an era where customer centricity is vital, you need to make sure that your data management is objective, trustworthy and transparent to your customers and other stakeholders.
  • Ethics carry a cost. However, the cost of not doing things right is much higher. Think about the overall and longer-term reputational and commercial cost.



Do you have a data management question or require some level of support with a data initiative in your organization? Feel free to reach out to us and schedule a free sync session.



SUMMER READING TIP

Summer is here, and the longer days it brings mean more time available to spend with a ripping read. That’s how it ideally works, at least. We selected three valuable books worth your extra time.

 

The Chief Data Officer’s Playbook

The issues and profession of the Chief Data Officer (CDO) are of significant interest and relevance to organisations and data professionals internationally. Written by two practicing CDOs, this new book offers a practical, direct and engaging discussion of the role, its place and its importance within organisations. Chief Data Officer is a new and rapidly expanding role, and many organisations are finding that it is an uncomfortable fit within the existing C-suite. Bringing together views, opinions and practitioners’ experience for the first time, The Chief Data Officer’s Playbook offers a compelling guide to anyone looking to understand the current (and possible future) CDO landscape.

Search on Google


Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility

Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility, the first book ever written on the topic of data virtualization, introduces the technology that enables data virtualization and presents ten real-world case studies that demonstrate the significant value and tangible business agility benefits that can be achieved through the implementation of data virtualization solutions. The book introduces the relationship between data virtualization and business agility but also gives you a more thorough exploration of data virtualization technology. Topics include what data virtualization is, why to use it, how it works and how enterprises typically adopt it.

Search on Google


Start With Why

Simon Sinek started a movement to help people become more inspired at work, and in turn inspire their colleagues and customers. Since then, millions have been touched by the power of his ideas, including more than 28 million who’ve watched his TED Talk based on ‘Start With Why’ — the third most popular TED video of all time. Sinek starts with a fundamental question: Why are some people and organizations more innovative, more influential, and more profitable than others? Why do some command greater loyalty from customers and employees alike? Even among the successful, why are so few able to repeat their success over and over? 
 
People like Martin Luther King, Steve Jobs, and the Wright Brothers had little in common, but they all started with Why. They realized that people won’t truly buy into a product, service, movement, or idea until they understand the Why behind it.  ‘Start With Why’ shows that the leaders who’ve had the greatest influence in the world all think, act, and communicate the same way — and it’s the opposite of what everyone else does. Sinek calls this powerful idea The Golden Circle, and it provides a framework upon which organizations can be built, movements can be led, and people can be inspired. And it all starts with Why.

Search on Google


Summer Giveaways

We’re giving away 50 copies of ‘Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility’.  Want to win? Just complete the form and cross your fingers. Good luck!


Winners are picked randomly at the end of the giveaway. Our privacy policy is available here.

RABOBANK GIVES CUSTOMERS ANIMAL & PLANT NAMES TO ADDRESS GDPR REQUIREMENTS

The Dutch bank Rabobank has implemented a creative way of using customer data without having to request permission. If you are one of their customers and your data is used in internal tests to develop new services, there is a chance that you will get a different name: using special software, the data is pseudonymized with Latin plant and animal names.

Your first name might become, for example, Rosa arvensis, the Latin name of the field rose, and your street name Turdus merula, the scientific name of the blackbird. It is a useful solution for the bank to stay in line with the General Data Protection Regulation (GDPR), which takes effect on the 25th of May. When developing applications or services, analyzing data or executing marketing campaigns based on PII (Personally Identifiable Information) data, companies are required to have explicit consent. In order to keep doing this after May without obtaining your consent, the bank uses data masking / pseudonymization techniques.

 

Explicit consent & pseudonymization

With the new privacy law, the personal data of citizens is better protected. One of the cornerstones of the GDPR is the requirement to obtain explicit consent, linked to a specific purpose. Even with a general consent, companies do not get carte blanche to do whatever they want with your data. Organizations must explain how data is used and by whom, where it is stored and for how long (more info about GDPR). Companies can work around these limitations if they anonymize or pseudonymize this PII data, because they can still use and valorize it, but without a direct and obvious link to you as a person. You become unrecognizable as a person, but your data remains usable for analysis or tests.


Why scientific animal and plant names?

“You cannot use names that are traceable to a person, according to the rules, but if you still need to produce letters with names on them, you have to come up with something else,” explains the vendor that delivered the software. “That’s how we came up with flower names: you cannot confuse them with real people, but to the system they look like names. It is therefore not necessary for organizations to change entire programs to comply with the new privacy law.”°

Note that data anonymization/pseudonymization technology does not require you to use plant and animal names. Most implementations of this type convert real names and addresses to fictitious ones that better reflect reality and perhaps also better match the usage requirements (e.g. specific application testing requirements). Typically, substitution techniques are applied, where a real name is replaced with another real name.
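As a toy illustration of such a substitution technique, the sketch below deterministically maps a real value onto a small pool of fictitious names (reusing the Latin names from the story above) with a keyed hash, so the same input always receives the same substitute. The pool, key and function are our own illustrative assumptions, not how any vendor’s product actually works:

```python
import hmac
import hashlib

# Hypothetical substitution pool; real tools ship large dictionaries
# of realistic names and addresses instead.
FAKE_NAMES = ["Rosa arvensis", "Turdus merula", "Quercus robur", "Parus major"]

SECRET_KEY = b"rotate-me-regularly"  # assumed secret; never hard-code in practice

def substitute(value: str, pool: list, key: bytes = SECRET_KEY) -> str:
    """Deterministically replace a real value with a fictitious one.

    The same input always yields the same substitute, so joins and test
    scenarios keep working, while the original value is never stored.
    """
    digest = hmac.new(key, value.encode("utf-8"), hashlib.sha256).digest()
    return pool[int.from_bytes(digest[:4], "big") % len(pool)]

print(substitute("Alice", FAKE_NAMES))  # always the same pseudonym for "Alice"
```

Note that with such a small pool, two different people can end up with the same substitute; production masking software avoids this with larger pools and format-preserving schemes.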

 

Takeaways

Pseudonymization vs anonymization

Pseudonymization and anonymization are two distinct terms that are often confused in the data security world. With the advent of the GDPR, it is important to understand the difference, since anonymized data and pseudonymized data fall under very different categories in the regulation. Pseudonymization and anonymization differ in one key aspect: anonymization irreversibly removes any way of identifying the data subject, whereas pseudonymization substitutes the identity of the data subject in such a way that additional information is required to re-identify them. With anonymization, the data is cleansed of any information that may identify a data subject. Pseudonymization does not remove all identifying information from the data but only reduces the linkability of a dataset with the original identity (using, for example, a specific encryption scheme).

 

Pseudonymization is a method to substitute identifiable data with a reversible, consistent value. Anonymization is the destruction of the identifiable data.
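That distinction can be made concrete in a few lines of code. The sketch below is purely illustrative (the class, token format and field names are our own assumptions): pseudonymization keeps a separate lookup table, the “additional information” needed for re-identification, while anonymization simply destroys the identifying fields.

```python
import secrets

class Pseudonymizer:
    """Replace identities with consistent, reversible tokens.

    The reverse table is the 'additional information' the GDPR talks
    about; in practice it would be stored and secured separately.
    """

    def __init__(self):
        self._forward = {}  # real value -> token
        self._reverse = {}  # token -> real value

    def pseudonymize(self, value: str) -> str:
        if value not in self._forward:
            token = "subj-" + secrets.token_hex(4)
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def reidentify(self, token: str) -> str:
        return self._reverse[token]

def anonymize(record: dict, identifying_fields: set) -> dict:
    """Irreversibly drop the identifying fields altogether."""
    return {k: v for k, v in record.items() if k not in identifying_fields}

p = Pseudonymizer()
token = p.pseudonymize("Jane Doe")
assert token == p.pseudonymize("Jane Doe")  # consistent
assert p.reidentify(token) == "Jane Doe"    # reversible, via the table

record = {"name": "Jane Doe", "city": "Utrecht", "balance": 1200}
print(anonymize(record, {"name"}))  # {'city': 'Utrecht', 'balance': 1200}
```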

 


Only for test data management?

You will need to look into your exact use cases and determine which techniques are the most appropriate. Every organization will most likely need both. Here are some use cases that illustrate this:


Use case: Your marketing team needs to set up a marketing campaign and will need to use customer data (city, total customer value, household context, …).
Functionality: Depending on the consent that you received, anonymization or pseudonymization techniques might need to be applied.
Technique: Data Masking

Use case: You are currently implementing a new CRM system and have outsourced the implementation to an external partner.
Functionality: Anonymization needs to be applied. The data (including the sensitive PII data) that you use for test data management purposes will need to be transformed into data that cannot be linked to the original.
Technique: Data Masking

Use case: You are implementing a cloud-based business application and want to make sure that your PII data is really protected. You even want to prevent the IT team of your cloud provider (with full system and database privileges) from accessing your data.
Functionality: Distinct from data masking, data encryption translates data into another form, or code, so that only people with access to a secret key or password can read it. People with access but without the key will not be able to read the real content of the data.
Technique: Data Encryption

Use case: You have a global organization that also services EU clients. Due to the GDPR, you want to prevent your non-EU employees from accessing data from your EU clients.
Functionality: Based on role and location, dynamic data masking accommodates data security and privacy policies that vary with users’ locations. Data encryption can also be set up to facilitate this.
Technique: Data Masking / Data Encryption

Use case: You have a brilliant team of data scientists on board. They love to crunch all your Big Data and come up with the best analysis. In order to do that, they need all the data you possibly have.
Functionality: A data lake also needs to be in line with what the GDPR specifies. Depending on the usage, you may need to implement anonymization or pseudonymization techniques.
Technique: Data Masking
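One of the use cases above, dynamic data masking driven by the user’s location, can be sketched in a few lines. The policy, field names and masking rule below are hypothetical; real products enforce such rules inside the database or a proxy rather than in application code:

```python
def mask_value(value: str) -> str:
    """Keep the first character, star out the rest."""
    return value[0] + "*" * (len(value) - 1) if value else value

def apply_policy(row: dict, user_location: str) -> dict:
    """Return the row, masking PII fields of EU clients for non-EU users."""
    if row.get("client_region") == "EU" and user_location != "EU":
        masked = dict(row)
        for field in ("name", "email"):
            if field in masked:
                masked[field] = mask_value(masked[field])
        return masked
    return row

row = {"name": "Jansen", "email": "j@example.com", "client_region": "EU"}
print(apply_policy(row, user_location="US"))  # name and email are masked
print(apply_policy(row, user_location="EU"))  # full values are returned
```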

 

Is pseudonymization the golden GDPR bullet?

Pseudonymization or anonymization can be one aspect of a good GDPR approach. However, it is definitely not the complete answer, and you will also need to look into a number of other important elements:

  • Consent & Purpose

    Key to the GDPR is consent and the linked purpose dimension. In order to manage the complete consent state, you need to make sure that this information is available to all your data consumers and automatically applied. You can use consent mastering techniques such as master data management and data virtualization for this purpose.



  • Data Discovery & Classification

    The GDPR is all about protecting personal data. Do you know where all your PII data is located? Data discovery will automatically locate and classify sensitive data and calculate risk/breach cost based on defined policies.



  • Data Register

    A data register is also a key GDPR requirement. You are expected to maintain a record of processing activities under your responsibility; in other words, you must keep an inventory of all personal data processed. The minimum information goes beyond knowing what data an organization processes: it should also include, for example, the purposes of the processing, whether or not the personal data is exported, and all third parties receiving the data.

    A data register that is integrated into your overall data governance program and linked to the reality of your data landscape is the recommended way forward.
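To make the data discovery idea above a bit more tangible, here is a minimal sketch that scans free text for two PII categories with regular expressions. The patterns and labels are illustrative assumptions; commercial discovery tools use far richer detectors plus risk scoring:

```python
import re

# Illustrative detectors only; real tools recognize many more categories.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def classify(text: str) -> dict:
    """Report which PII categories occur in a text blob, with counts."""
    return {
        label: len(pattern.findall(text))
        for label, pattern in PII_PATTERNS.items()
        if pattern.search(text)
    }

sample = "Contact jane.doe@example.com, account NL91ABNA0417164300."
print(classify(sample))  # {'email': 1, 'iban': 1}
```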




° Financieele Dagblad

Also in need of data masking or encryption?

Would you like to know how Datalumen can also enable you to use your data assets in line with the GDPR?

Contact us and start our Data Conversation.

EUROPEAN RETAILERS ARE MISSING 35% OF SALES DUE TO LACK OF PRODUCT INFORMATION

The holiday season is the most important sales moment of the year. Nevertheless, a Zetes study reveals that retailers miss about 35 percent of sales due to products not being immediately available or a lack of product information.

A quarter of consumers leave a store without actually buying anything if they do not immediately see the product that they are looking for or if that is not immediately available. It is one of the conclusions of a market research conducted by supply chain expert Zetes. The study specifically focused on buyer behavior during the annual peak period between November and January and analyzed both physical and online retail. 120 European retailers and over 2000 consumers were interviewed for this study in the January 2018 timeframe.

Time is money

The study states that stores miss 35 percent of sales due to the unavailability of products. The main reason for this is the expectation of the customer: more than in the past, customers will simply leave a store if they do not immediately see a product, and they do not bother to talk to a shop assistant.

If customers do approach a shop assistant, however, they expect to receive information about product availability within two minutes. That is a rather limited window of opportunity, especially if you know that the study also calculated that 51 percent of shop employees need to go to a cash desk to obtain the necessary information, and that 47 percent also need to check the warehouse to verify the availability of a product. Both actions cost time, and time is very expensive during the peak period. The study also reveals that 62 percent of retailers do not have access to real-time product data.

Return Management

Deliveries and returns also typically cause extra problems during these busy months. It is common knowledge that people tend to buy more quickly when they are sure that they can return a product. The processing of returned parcels also causes problems: 26 percent of retailers indicate that they have problems during the peak, with the result that only 39 percent of the returned goods are available for sale within 48 hours.

Conclusion

“A lack of visibility of data is the core of these sales problems during the holidays,” the report states. “Consumers want choices, and they want to be informed. Instead of a general “not available” message, a retailer has a much greater chance of securing sales by telling the customer that a product will soon be back in stock and delivered within three days or will be available for click & collect.” There is still a significant room for information management improvement with direct sales optimization as a result. 

More information
https://www.zetes.com/en/white-papers

Also in need of real-time product information?

Would you like to know how Datalumen can also enable you to get real-time product information?

Contact us and start our Data Conversation.



 

GARTNER SURVEY FINDS CHIEF DATA OFFICERS ARE DELIVERING BUSINESS IMPACT AND ENABLING DIGITAL TRANSFORMATION

By 2021, the CDO Role Will Be the Most Gender Diverse of All Technology-Affiliated C-level Positions.

As the role of chief data officer (CDO) continues to gain traction within organizations, a recent survey by Gartner, Inc. found that these data and analytics leaders are proving to be a linchpin of digital business transformation. 

The third annual Gartner Chief Data Officer survey was conducted July through September 2017 with 287 CDOs, chief analytics officers and other high-level data and analytics leaders from across the world. Respondents were required to have the title of CDO, chief analytics officer or be a senior leader with responsibility for leading data and/or analytics in their organization. 

“While the early crop of CDOs was focused on data governance, data quality and regulatory drivers, today’s CDOs are now also delivering tangible business value, and enabling a data-driven culture,” said Valerie Logan, research director at Gartner. “Aligned with this shift in focus, the survey also showed that for the first time, more than half of CDOs now report directly to a top business leader such as the CEO, COO, CFO, president/owner or board/shareholders. By 2021, the office of the CDO will be seen as a mission-critical function comparable to IT, business operations, HR and finance in 75 percent of large enterprises.” 

The survey found that support for the CDO role and business function is rising globally. A majority of survey respondents reported holding the formal title of CDO, revealing a steady increase over 2016 (57 percent in 2017 compared with 50 percent in 2016). Those organizations implementing an Office of the CDO also rose since last year, with 47 percent reporting an Office of the CDO implemented (either formally or informally) in 2017, compared with 23 percent fully implemented in 2016. 

“The steady maturation of the office of the CDO underlines the acceptance and broader understanding of the role and recognizes the impact and value CDOs worldwide are providing,” said Michael Moran, research director at Gartner. “The addition of new talent for increasing responsibilities, growing budgets and increasing positive engagement across the C-suite illustrate how central the role of CDO is becoming to more and more organizations.” 

Budgets are also on the rise. Respondents to the 2017 survey report an average CDO office budget of $8 million, representing a 23 percent increase from the average of $6.5 million reported in 2016. Fifteen percent of respondents report budgets more than $20 million, contrasting with 7 percent last year. A further indicator of maturity is the size of the office of the CDO organization. Last year’s study reported total full time employees at an average of 38 (not distinguishing between direct and indirect reporting), while this year reports an average of 54 direct and indirect employees, representing the federated nature of the office of the CDO design. 

Gartner CDO Survey Results

Key Findings

CDO shift from defense to offense to drive digital transformation

With more than one-third of respondents saying “increase revenue” is a top three measure of success, the survey findings show a clear bias developing in favor of value creation over risk mitigation as the key measure of success for a CDO. The survey also looked at how CDOs allocate their time. On a mean basis, 45 percent of the CDO’s time is allocated to value creation and/or revenue generation, 28 percent to cost savings and efficiency, and 27 percent to risk mitigation. 

“CDOs and any data and analytics leader must take responsibility to put data governance and analytics principles on the digital agenda. They have the right and obligation to do it,” said Mario Faria, managing vice president at Gartner. 

CDOs are responsible for more than just data governance

According to the survey, in 2017, CDOs are not just focused on data as the title may imply. Their responsibilities span data management, analytics, data science, ethics and digital transformation. A larger than expected percentage of respondents (36 percent) also report responsibility for profit and loss (P&L) ownership. “This increased level of reported responsibility by CDOs reflects the growing importance and pervasive nature of data and analytics across organizations, and the maturity of the CDO role and function,” said Ms. Logan. 

In the 2017 survey, 86 percent of respondents ranked “defining data and analytics strategy for the organization” as their top responsibility, up from 64 percent in 2016. This reflects a need for creating or modernizing data and analytics strategies within an increasing dependence on data and insights within a digital business context. 

CDOs are becoming impactful change agents leading the data-driven transformation

The survey results provided insight into the kind of activities CDOs are taking on in order to drive change in their organizations. Several areas seem to have a notable increase in CDO responsibilities compared with last year:

  • Serving as a digital advisor: 71 percent of respondents are acting as a thought leader on emerging digital models, and helping to create the digital business vision for the enterprise.
  • Providing an external pulse and liaison: 60 percent of respondents are assessing external opportunities and threats as input to business strategy, and 75 percent of respondents are building and maintaining external relationships across the organization’s ecosystem.
  • Exploiting data for competitive edge: 77 percent of respondents are developing new data and analytics solutions to compete in new ways.

CDOs are diverse and tackling a wide array of internal challenges

Gartner predicts that by 2021, the CDO role will be the most gender diverse of all technology-affiliated C-level positions and the survey results reflect that position. Of the respondents to Gartner’s 2017 CDO survey who provided their gender, 19 percent were female and this proportion is even higher within large organizations — 25 percent in organizations with worldwide revenue of more than $1 billion. This contrasts with 13 percent of CIOs who are women, per the 2018 Gartner CIO Agenda Survey. When it comes to average age of CDOs, 29 percent of respondents said they were 40 or younger. 

The survey respondents reported that there is no shortage of internal roadblocks challenging CDOs. The top internal roadblock to the success of the Office of the CDO is “culture challenges to accept change” — a top three challenge for 40 percent of respondents in 2017. A new roadblock, “poor data literacy,” debuted as the second biggest challenge (35 percent), suggesting that a top CDO priority is ensuring commonality of shared language and fluency with data, analytics and business outcomes across a wide range of organizational roles. When asked about engagement with other C-level executives, respondents ranked the relationship with the CIO and CTO as the strongest, followed by a broad, healthy degree of positive engagement across the C-Suite. 


More info on our Advisory Services?

Would you like to know what Datalumen can mean to your CDO Office?

Have a look at our Services Offering
or contact us and start our Data Conversation.


THE NEED FOR TOTAL DATA MANAGEMENT IN BIG DATA

The buzz about “big data” has been around for a couple of years now. Have we witnessed incredible results? Yes. But maybe they aren’t as impressive as previously believed. When it comes down to Big Data, we’re actually talking about data integration, data governance and data security. The bottom line? Data needs to be properly managed, whatever its size and type of content. Hence, total data management approaches such as master data management are gaining momentum and are the way forward when it comes to tackling an enterprise’s Big Data problem.

Download the Total Data Management in Big Data infographic (PDF).

Data Integration:
Your First Big Data Stepping Stone

In order to make Big Data work you need to address data complexity in the context of the golden V’s: Volume, Velocity and Variety. Accessing, ingesting, processing and deploying your data doesn’t happen automatically, and traditional data approaches based on manual processes simply don’t work. The reason they typically fail is that:

  • you need to be able to ingest data at any speed
  • you need to process data in a flexible (read: scalable and efficient) but also repeatable way
  • last but not least, you need to be able to deliver data anywhere, which, with the dynamics of the ever-changing Big Data landscape in mind, is definitely a challenge

Data Governance:
Your Second Big Data Stepping Stone

A substantial number of people believe that Big Data is the holy grail and consider it a magical black box. They believe that you can throw whatever data you want into your Big Data environment and it will miraculously result in useful information. Reality is somewhat different. In order to get value out of your initiative, you also need to actually govern your Big Data. You need to govern it in two ways:

Your Big Data environment is not a trash bin.

Key to success is that you are able to cleanse, enrich and standardize your Big Data. You need to prove the added value of your Big Data initiative, so don’t forget your consumers and make sure you are able to generate and share trusted insights. According to Experian’s 2015 Data Quality Benchmark Report, organizations suspect 26% of their data to be inaccurate. The reality is that with Big Data this percentage can be two to three times worse.

 

Your Big Data is not an island.

Governing your Big Data is one element but in order to get value out of it you should be able to combine it with the rest of your data landscape. According to Gartner, through 2017, 90% of the information assets from big data analytic efforts will be siloed and unleverageable across multiple business processes. That’s a pity given that using Master Data Management techniques you can break the Big Data walls down and create that 360° view on your customer, product, asset or virtually any other data domain.

Data Protection:
Your Third Big Data Stepping Stone

With the typical Big Data volumes and growth in mind, many organizations have limited to no visibility into the location and use of their sensitive data. However, new laws and regulations like the GDPR require a correct understanding of the data risks based on a number of elements, like data location, proliferation, protection and usage. This obviously applies to traditional data but is definitely also needed for Big Data. Especially if you know that a substantial number of organizations tend to use their Big Data environment as a black hole, the risk of also having unknown sensitive Big Data is real.

How do you approach this?

Classify

Classify your sensitive data. In a nutshell: data inventory, topology, business process and data flow mapping, and operations mapping.

De-identify

De-identify your data so it can be used wherever you need it. Think about reporting and analysis environments, testing, etc. For this purpose, masking and anonymization techniques and software can be used.

Protect

Once you know where your sensitive data is located, you can actually protect it through tokenization and encryption techniques. These techniques are required if you want to keep and use your sensitive data in its original format.
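As a rough sketch of the tokenization idea, the class below replaces a sensitive value with a random token of the same shape (digits stay digits, letters stay letters, separators stay put) and keeps the real value only in a vault mapping. It is an illustration of the concept under our own assumptions, not a production design (no persistence, collision handling or access control):

```python
import secrets
import string

class TokenVault:
    """Vault-style tokenization: swap a value for a same-shaped token."""

    def __init__(self):
        self._vault = {}  # token -> original value; secure this separately

    def tokenize(self, value: str) -> str:
        token = "".join(
            secrets.choice(string.digits) if ch.isdigit()
            else secrets.choice(string.ascii_uppercase) if ch.isalpha()
            else ch  # keep separators so the format stays valid
            for ch in value
        )
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
card = "4111-1111-1111-1111"
token = vault.tokenize(card)
assert len(token) == len(card) and token.count("-") == 3  # format preserved
assert vault.detokenize(token) == card                    # reversible via vault
```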



More info on Big Data Management?

Would you like to know what
Big Data Management can also mean for your organization?
Have a look at our Big Data Management section 
and contact us.


 

HOW CAN TRADITIONAL BANKING COMPANIES COUNTER FINTECH WITH DATA GOVERNANCE?

The new digital environment as well as a tough regulatory climate force the financial industry to adapt its business model in order to meet the demands of investors, regulators and customers. Today we mainly want to address the aspects of customer experience that traditional bankers ought to reflect on copying – or even exceeding. Because actually, it is customer experience that could be the traditional bank’s biggest asset. By this we mean that traditional banks are a one-stop shop for a broad range of financial products and services. This could serve both as an advantage and as a competitive weakness relative to FinTech startups. Many traditional banks are still organized into silos, with business lines for individual products and services that use separate information systems and do not communicate with one another.

To improve the customer experience, banks must be able to analyze customer information (data) and make that data useful for both the business and the customers. This is basically what FinTech does. However, FinTechs first need to gather the data. Traditional banks with a good data governance program already have that data. They should have an advantage and leverage it.

To counter the extreme effectiveness and customer experience brought by new Fintech startups, some financial institutions are already upping their tech game. They work on the improvement of the user experience, they provide more insightful data analysis and increase cybersecurity.

While these are all true and important for banks, we believe getting “insightful data” is a little underestimated. There are no data insights without clean data. There is no clean data without strong governance.

Data governance is all about processes that make sure that data is formally managed throughout the entire enterprise. Data governance is the way to ensure that data is correct and trustworthy. Data governance also makes employees accountable for anything bad happening to the company as a result of poor data quality.

The role of data governance in the bank of the future?

The bank of the future is tech- and data-driven. Today’s digital capabilities turn the customer journey into a personalized experience. The bank of the future is predictive, proactive and understands the customers’ needs. It’s some sort of “Google Now for Banking”, suggesting actions proactively. The bank of the future is a bank for individuals; it’s personalized in the range of services and products it offers to the individual – based on in-depth knowledge and understanding of the customer. By having up-to-date and correct data, you can truly serve customers. The “Bank of the Future” positions itself as ‘the bank that makes you the banker’. It thrives on interaction and a deep knowledge of its customers through data mining.

As the existing banking model is unbundled, everything about our financial services experience will change. In five to ten years, the industry will look fundamentally different. There will be a host of new providers and innovative new services. Some banks will take digital transformation seriously, others will buy their way into the future by taking over challengers, and some will lose out. Some segments will be almost universally controlled by non-banks; other segments will be better served by the structural advantages of a bank. Across the board, consumers will benefit as players compete on innovation and customer experience. This is only possible with solid multi-domain, cross-silo data management with a solid data governance program on top of it.