Nowadays everyone agrees on the importance of data governance. It is well accepted that data governance is critical to deliver trust and regulatory compliance, and that it is a key element in delivering improved business results.

Successfully implementing data governance requires changes and investments in several domains:

  • People will need to be trained and supported through the change. Don’t forget that certain co-workers will take up a new role (e.g. data steward), some people will need to share data ownership, etc. Change management deserves the necessary attention.
  • Your business processes will certainly benefit from optimization but will also require change. The important realization in the data governance and process context is that you become able to tilt your organization from system or application silos to an approach where data is governed from a process point of view across your organizational landscape.
  • On a technology level you will need to optimize what you already have in place and most likely acquire data governance-specific functionality that you currently lack. Think about a data governance-capable business glossary, a data catalog, etc.



Making it tangible

Understanding the investments is one facet; getting value out of data governance is something else. How do you make all this tangible?

The most successful of our data governance clients focus on the following key areas:

Revenue Impact. 

Focus on identifying and addressing new business opportunities through data analytics & data science. The only hard requirement is obviously that your data governance foundations are in place. You might have the fanciest and most powerful analytical tooling available, but without data governance it remains like finding magnetic north with a faulty compass. Calculating the return comes down to weighing the cost of the data-related efforts against the potential business outcome. This is an exercise that requires input from all involved stakeholders, both business and IT.


Business User Productivity.

Proper Data Governance is primarily an enabler. The business user area is a great example to illustrate this. Allowing business users to move from finding data to applying data directly increases their productivity and their value to the organization. They are enabled to focus on their core business instead of wasting huge amounts of time before they can start. A recent IDC study calculates that these productivity gains have an average value of €1,572 per impacted user per year.

Operational Productivity.

Having better data quality, improved data controls, a data community that speaks the same language, … will also generate operational benefits. No more time wasted on ping-pong games caused by unclear roles and responsibilities, or on rework and churn due to dirty or incomplete data.

Risk Mitigation. 

Data governance is key for compliance and audit purposes. Having visibility on data lineage and ownership, and being able to track and trace data consumption, is essential for GRC teams. A proper data governance platform facilitates this and allows your teams to act more quickly and efficiently. Governed automation vs ad-hoc manual effort is what this is all about. In this area, the same IDC study projects that organizations can realize a benefit of €1,280 per impacted user per year.

Besides operational efficiency, the direct cost elements of the overall risk can be calculated quite easily. Think about the GDPR legislation, where penalties can run up to €10 million, or 2% of the worldwide annual revenue of the prior financial year, whichever is higher.

Calculating the indirect cost elements is a bit more complicated. Take the same GDPR example. The penalty issued for an infringement is clearly specified, but imagine that your organization is active in a market vertical where reputation and being a trustworthy party are extremely important. In that type of scenario, a GDPR penalty will also have a big impact on your revenue and generate substantial costs to restore your reputation. Calculating these costs requires organization- and market vertical-specific insight.

 

With this info, you’re off to a good start. If you require practical advice and expertise, reach out to us.




Interested in Data Governance?

Would you like to know how Datalumen can also help you with your Data Governance initiative? Contact us and start our data conversation.

Summer is here, and the longer days it brings mean more time available to spend with a gripping read. That’s how it ideally works, at least. We selected 3 valuable books worth your extra time.

 

The Chief Data Officer’s Playbook

The issues and profession of the Chief Data Officer (CDO) are of significant interest and relevance to organisations and data professionals internationally. Written by two practicing CDOs, this new book offers a practical, direct and engaging discussion of the role, and its place and importance within organisations. Chief Data Officer is a new and rapidly expanding role, and many organisations are finding that it is an uncomfortable fit in the existing C-suite. Bringing together views, opinions and practitioners’ experience for the first time, The Chief Data Officer’s Playbook offers a compelling guide to anyone looking to understand the current (and possible future) CDO landscape.

Search on Google


Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility

Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility, the first book ever written on the topic of data virtualization, introduces the technology that enables data virtualization and presents ten real-world case studies that demonstrate the significant value and tangible business agility benefits that can be achieved through the implementation of data virtualization solutions. The book introduces the relationship between data virtualization and business agility but also gives you  a more thorough exploration of data virtualization technology. Topics include what is data virtualization, why use it, how it works and how enterprises typically adopt it. 

Search on Google


Start With Why

Simon Sinek started a movement to help people become more inspired at work, and in turn inspire their colleagues and customers. Since then, millions have been touched by the power of his ideas, including more than 28 million who’ve watched his TED Talk based on ‘Start With Why’ — the third most popular TED video of all time. Sinek starts with a fundamental question: Why are some people and organizations more innovative, more influential, and more profitable than others? Why do some command greater loyalty from customers and employees alike? Even among the successful, why are so few able to repeat their success over and over? 
 
People like Martin Luther King, Steve Jobs, and the Wright Brothers had little in common, but they all started with Why. They realized that people won’t truly buy into a product, service, movement, or idea until they understand the Why behind it.  ‘Start With Why’ shows that the leaders who’ve had the greatest influence in the world all think, act, and communicate the same way — and it’s the opposite of what everyone else does. Sinek calls this powerful idea The Golden Circle, and it provides a framework upon which organizations can be built, movements can be led, and people can be inspired. And it all starts with Why.

Search on Google


Summer Giveaways

We’re giving away 50 copies of ‘Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility’.  Want to win? Just complete the form and cross your fingers. Good luck!


Winners are picked randomly at the end of the giveaway. Our privacy policy is available here.

The importance of data in a digital transformation context is known to everyone. Actually getting control of and properly governing this new oil does not happen automatically. In this article we summarize the top 5 Data Governance mistakes and give you a number of tips on how to avoid them.

1. Data Governance is not business driven

Who is leading your Data Governance effort? If your initiative is driven by IT, you dramatically limit your chance of success. A Data Governance approach is a company-wide initiative and needs business & IT support. It also needs support from the different organizational levels. Your executive level needs to openly express support in different ways (sponsorship but also communication). However, this shouldn’t be a top-down initiative, and all other involved levels will also need to be on board. Keep in mind that they will make your data organization really happen.

2. Data Maturity level of your organization is unknown or too low

Being aware of the need for Data Governance is one thing. Being ready for Data Governance is a different story. In that sense it is crucial to understand the data maturity level of your organization.  

There are several models to determine your data maturity level, but one of the most commonly used is the Gartner model. Surveys reveal that 60% of organizations rank themselves in the lowest 3 levels. Referring to this model, your organization should be close to (or beyond) the systematic maturity level. If you are not, make sure to fix this before taking the next steps in your initiative. You need to have these basics properly in place. Without this minimum level of maturity, it doesn’t really make sense to take the next steps. You don’t build a house without the necessary foundations.

Source : Gartner (Oct 2017) https://www.gartner.com/newsroom/id/3851963

3. A Data Governance Project rather than Program approach

A substantial number of companies tend to start a Data Governance initiative as a traditional project: think of a well-defined structure, a well-known effort and duration, predefined benefits, … When you think about Data Governance, or data in general, you know that’s not the case. Data is dynamic, ever-changing, and it has far more touch points. Because of this, a Data Governance initiative doesn’t fit a traditional, focused project management approach. What does fit is a higher-level program approach in which you define a number of project streams that each focus on one particular area. Some of these streams can have a defined duration (e.g. the implementation of a business glossary). Others (e.g. change management) can have a more ongoing character.

4. Big Bang vs Quick Win approach

Even when you have a proper company-wide program in place, you have to make sure that you focus on the proper quick wins to inspire buy-in and help build momentum. Your motto should not be Big Bang but rather Big Vision & Quick Wins.

Data Governance requires involvement from all levels of stakeholders. As a result, you need to make clear to everyone what your strategy & roadmap look like.

With this type of program, you need the required enthusiasm when you take your first steps. It is key that you keep this heartbeat in your program, and for that reason you need to deliver quick wins. If you don’t, you strongly risk losing traction. Successfully delivering quick wins helps you gain credit and support for future steps.

5. No 3P mix approach

Data Governance has important People, Process and Platform dimensions. It’s never just one of these and requires that you pay the necessary attention to all of them.

  • When you implement Data Governance, people will almost certainly need to start working in a different way. They may need to give up exclusive data ownership, … All elements that require strong change management.
  • When you implement Data Governance, you tilt your organization from a system silo point of view to a data process perspective. Your customer data is no longer owned just by the CRM system or a Marketing Manager, but by all the key stakeholders involved in customer-related business processes.
  • When you want to make Data Governance a success, you need to make it as efficient and easy as possible for every stakeholder. This implies that you should also thoroughly think about how you can facilitate them in the best possible way. Typically, this means looking beyond traditional Excel, SharePoint and wiki type solutions and looking into implementing platforms that support your complete Data Governance community.



Also in need of data governance?

Would you like to know how Datalumen can also help you get your data agenda on track?  Contact us and start our data conversation.

The Dutch bank Rabobank has implemented a creative way of using customer data without having to request permission. If you are one of their customers and they use your data in internal tests to develop new services, there is a chance that you will get a different name. With special software the data is pseudonymized, and they do so with Latin plant and animal names.

Your first name might become, for example, Rosa arvensis, the Latin name of a forest rose, and your street name Turdus merula, the scientific name of a blackbird. It is a useful solution for the bank to stay in line with the General Data Protection Regulation (GDPR) that takes effect on the 25th of May. When developing applications or services, analyzing data or executing marketing campaigns based on PII (Personally Identifiable Information) type data, companies are required to have explicit consent. In order to be able to do this after May without getting your consent, the bank uses data masking / pseudonymization techniques.

 

Explicit consent & pseudonymization

With the new privacy law, the personal data of citizens is better protected. One of the cornerstones of the GDPR is the requirement to obtain explicit consent and, linked to that, the purpose. Even with a general consent, companies do not get carte blanche to do whatever they want with your data. Organizations must explain how data is used and by whom, where it is stored and for how long (more info about GDPR). Companies can work around these limitations if they anonymize / pseudonymize this PII type data, because they can still use and valorize the data but without a direct and obvious link to you as a person. You as a person become unrecognizable, but your data remains usable for analysis or tests.


Why scientific animal and plant names?

“You cannot use names that are traceable to the person according to the rules, but if it is a requirement to use letters in name fields, you have to come up with something else,” explains the vendor that delivered the software. “That’s how we came up with flower names: you cannot confuse them, but they look like names to the system. Therefore, it is not necessary for organizations to change entire programs to comply with the new privacy law.”°

Note that data anonymization / pseudonymization technology does not require you to use plant and animal names. Most implementations of this type will convert real names and addresses to fictitious ones that better reflect reality and perhaps also better match the usage requirements (e.g. specific application testing requirements). Typically, substitution techniques are applied where a real name is replaced with another real name.
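
As a rough illustration of how such consistent substitution can work (a minimal sketch, not Rabobank’s actual implementation; the key and the substitute list below are made up), a keyed hash can pick a fictitious name so that the same real value always maps to the same pseudonym:

```python
import hashlib
import hmac

# Illustrative substitute list; real tooling draws from large name dictionaries
SUBSTITUTES = ["Rosa arvensis", "Turdus merula", "Quercus robur", "Vulpes vulpes"]

SECRET_KEY = b"keep-this-key-out-of-source-control"  # hypothetical pseudonymization key

def pseudonymize(value: str) -> str:
    """Map a real value to a fictitious substitute, consistently.

    The same input always yields the same pseudonym, but without
    SECRET_KEY the mapping cannot be reproduced or reversed.
    """
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).digest()
    return SUBSTITUTES[int.from_bytes(digest[:4], "big") % len(SUBSTITUTES)]

print(pseudonymize("Jan Janssens"))  # same substitute on every run for this input
```

Note that with a substitute list this small, different people will share a pseudonym; production tools avoid such collisions and can also preserve formats (gender, address structure, …).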

 

Take aways

Pseudonymization vs anonymization

Pseudonymization and anonymization are two distinct terms that are often confused in the data security world. With the advent of GDPR, it is important to understand the difference, since anonymized data and pseudonymized data fall under very different categories in the regulation. Pseudonymization and anonymization differ in one key aspect. Anonymization irreversibly removes any way of identifying the data subject. Pseudonymization substitutes the identity of the data subject in such a way that additional information is required to re-identify the data subject. With anonymization, the data is cleansed of any information that may identify a data subject. Pseudonymization does not remove all identifying information from the data but only reduces the linkability of a dataset with the original identity (using e.g. a specific encryption scheme).

 

Pseudonymization is a method to substitute identifiable data with a reversible, consistent value. Anonymization is the destruction of the identifiable data.
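
The contrast can be made concrete in a few lines (an illustrative sketch; the field names are made up): pseudonymization keeps a separately guarded mapping that still allows re-identification, while anonymization destroys the identifiers outright.

```python
import secrets

vault: dict[str, str] = {}  # token -> original; the guarded "additional information"

def pseudonymize(value: str) -> str:
    """Reversible: replace the value with a random token; keep the mapping in a vault."""
    token = "PSEUDO-" + secrets.token_hex(4)
    vault[token] = value
    return token

def anonymize(record: dict) -> dict:
    """Irreversible: strip the direct identifiers entirely; there is no way back."""
    return {k: v for k, v in record.items() if k not in {"name", "email"}}

record = {"name": "Jan Janssens", "email": "jan@example.com", "spend": 1200}
token = pseudonymize(record["name"])
print(token, "->", vault[token])  # re-identification possible, but only with the vault
print(anonymize(record))          # {'spend': 1200} - the identity is gone for good
```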

 


Only for test data management?

You will need to look into your exact use cases and determine what techniques are the most appropriate ones. Every organization will most likely need both. Here are some use cases that illustrate this: 


  • Use case: Your marketing team needs to set up a marketing campaign and will need to use customer data (city, total customer value, household context, …).
    Functionality: Depending on the consent that you received, anonymization or pseudonymization techniques might need to be applied.
    Technique: Data Masking

  • Use case: You are currently implementing a new CRM system and have outsourced the implementation to an external partner.
    Functionality: Anonymization needs to be applied. The data (including the sensitive PII data) that you use for test data management purposes will need to be transformed into data that cannot be linked to the original.
    Technique: Data Masking

  • Use case: You are implementing a cloud-based business application and want to make sure that your PII data is really protected. You even want to prevent the IT team of your cloud provider (with full system and database privileges) from accessing your data.
    Functionality: Distinct from data masking, data encryption translates data into another form, or code, so that only people with access to a secret key or password can read it. People with access but without the key will not be able to read the real content of the data.
    Technique: Data Encryption

  • Use case: You have a global organization that also services EU clients. Due to the GDPR, you want to prevent your non-EU employees from accessing data of your EU clients.
    Functionality: Based on role and location, dynamic data masking accommodates data security and privacy policies that vary based on users’ locations. Data encryption can also be set up to facilitate this.
    Technique: Data Masking, Data Encryption

  • Use case: You have a brilliant team of data scientists on board. They love to crunch all your Big Data and come up with the best analysis. In order to do that, they need all the data you possibly have. A data lake also needs to be in line with what the GDPR specifies.
    Functionality: Depending on the usage, you may need to implement anonymization or pseudonymization techniques.
    Technique: Data Masking
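
To make the difference between masking and encryption from the list above concrete, here is a minimal sketch using Python’s cryptography package (an assumed dependency; the values are fictitious): encrypted data is unreadable without the secret key yet fully recoverable with it, whereas masked data stays readable but fictitious.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # only key holders can ever read the data
cipher = Fernet(key)

pii = b"Jan Janssens, Kerkstraat 1, Gent"
ciphertext = cipher.encrypt(pii)

print(ciphertext)                  # unreadable, even for a privileged cloud DBA
print(cipher.decrypt(ciphertext))  # the original value, recoverable *with* the key

# Masking, by contrast, replaces the value outright; there is no key to recover it:
masked = "Rosa arvensis, Turdus merula 1, Gent"
```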

 

Is pseudonymization the golden GDPR bullet?

Pseudonymization or anonymization can be one aspect of a good GDPR approach. However, it is definitely not the complete answer, and you will also need to look into a number of other important elements:

  • Consent Mastering

    Key to the GDPR is consent and the linked purpose dimension. In order to manage the complete consent state, you need to make sure that this information is available to all your data consumers and automatically applied. You can use consent mastering techniques such as master data management and data virtualization for this purpose.



  • Data Discovery & Classification

    The GDPR is all about protecting personal data. Do you know where all your PII-type data is located? Data discovery will automatically locate and classify sensitive data and calculate risk/breach cost based on defined policies (see the sketch after this list).



  • Data Register

    A data register is also a key GDPR requirement. You are expected to maintain a record of processing activities under your responsibility; in other words, you must keep an inventory of all personal data processed. The minimum information goes beyond knowing what data an organization processes. It should also include, for example, the purposes of the processing, whether or not the personal data is exported, and all third parties receiving the data.

    A data register that is integrated in your overall data governance program and that is linked with the reality of your data landscape is the recommended way forward.
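
As a minimal illustration of the data discovery idea (the patterns below are simplistic placeholders; commercial discovery tools ship with large, curated rule sets and risk scoring), a scan over tabular data could look like this:

```python
import re

# Hypothetical PII patterns; real rule sets are far richer and locale-aware
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def classify_columns(rows: list[dict]) -> dict[str, set[str]]:
    """Scan rows and flag columns whose values match known PII patterns."""
    findings: dict[str, set[str]] = {}
    for row in rows:
        for column, value in row.items():
            for label, pattern in PII_PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    findings.setdefault(column, set()).add(label)
    return findings

sample = [{"contact": "jan@example.com", "account": "BE71096123456769", "city": "Gent"}]
print(classify_columns(sample))  # {'contact': {'email'}, 'account': {'iban'}}
```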




° Financieele Dagblad

Also in need of data masking or encryption?

Would you like to know how Datalumen can also enable you to use your data assets in line with the GDPR?

Contact us and start our Data Conversation.

The holiday season is the most important sales moment of the year. Nevertheless, a Zetes study reveals that retailers miss about 35 percent of sales due to products not being immediately available or a lack of product information.

A quarter of consumers leave a store without actually buying anything if they do not immediately see the product they are looking for or if it is not immediately available. It is one of the conclusions of market research conducted by supply chain expert Zetes. The study specifically focused on buyer behavior during the annual peak period between November and January and analyzed both physical and online retail. 120 European retailers and over 2,000 consumers were interviewed for this study in the January 2018 timeframe.

Time is money

The study states that stores miss 35 percent of sales due to the unavailability of products. The main reason for this is the expectation of the customer: more than in the past, customers will simply leave a store if they do not immediately see a product and they do not bother to talk to a shop assistant. 

If customers do approach a shop assistant, however, they expect to receive more information about product availability within two minutes. A rather limited window of opportunity, especially if you know that the study also calculated that 51 percent of shop employees need to go to a cash desk to obtain the necessary information, and that 47 percent also need to check the warehouse to verify the availability of a product. Both actions cost time, and time is very expensive during the peak period. The study also reveals that 62 percent of retailers do not have access to real-time product data.

Return Management

Deliveries and returns also typically cause extra problems during these busy months. It is common knowledge that people tend to buy more quickly when they are sure that they can return a product. The processing of returned parcels also causes problems: 26 percent of retailers indicate that they have problems during the peak, with the result that only 39 percent of returned goods are available for sale within 48 hours.

Conclusion

“A lack of visibility of data is at the core of these sales problems during the holidays,” the report states. “Consumers want choices, and they want to be informed. Instead of a general ‘not available’ message, a retailer has a much greater chance of securing sales by telling the customer that a product will soon be back in stock and delivered within three days, or will be available for click & collect.” There is still significant room for information management improvement, with direct sales optimization as a result.

More information
https://www.zetes.com/en/white-papers

Also in need of real-time product information?

Would you like to know how Datalumen can also enable you to get real-time product information?

Contact us and start our Data Conversation.



 

Getting a good understanding of the requirements, but also of the opportunities and business value, is not easy. We designed a GDPR business value roadmap to help you with this and to make you understand what capabilities you need to get the job done.


Step 1

  • How will you understand what in-scope data is used for, for what purpose and by whom?
  • How will you demonstrate how you’re aligning to the principles?
  • Is your approach mostly manual, using interviews, questionnaires & static documentation?
  • Is your approach inaccurate, time consuming, resource consuming, out-of-date, or all of these?


Step 2

  • Do you understand where in-scope data is across your organisation and how it is shared?
  • How will you demonstrate you understand the size & shape of the data problem across domains and data subjects?
  • Is your approach mostly manual, using interviews, questionnaires & static documentation?
  • Is this approach inaccurate, time consuming, resource consuming, out-of-date, or all of these?

Step 3

  • How will you capture, manage and distribute consents across channels and business units?
  • How will you demonstrate you have captured the lawfulness of processing across all in-scope data sources?
  • Do you have anything in place already? Or are you planning on extending existing preferences capabilities?

Step 4

  • How will you put protections and controls around identified in-scope data?
  • Can you demonstrate you have relevant control over the relevant in-scope data?
  • Are you planning to manually apply controls? Or apply masking, deletion & archiving solutions as required?
  • Will this approach give you a holistic view around the protections & controls you have in place?





Complete the form and download this Datalumen infogram (A3 PDF).



The Datalumen privacy policy can be consulted here.

More info on our Advisory Services?

Would you like to know what Datalumen can also mean to your GDPR or other data governance initiatives?

Have a look at our GDPR or Data Governance offerings, or contact us and start our Data Conversation.



In 2013 cyber insurance was still a brand new product on the insurance market. At the time, only a negligible minority considered this policy to be useful. In the meantime, the number of online processes in the business world has steadily increased and the risks are no longer under discussion. Furthermore, Europe is placing cyber security high on the agenda with its new privacy legislation (GDPR).

Most companies already know that the GDPR requires a multi-faceted initiative. Approaching data privacy risk from a legal, process and data point of view is fundamental. Cyber insurance can be an extra component in this approach and can be the missing link that gives companies the extra guarantee to cover their end-to-end privacy risk.

We interviewed Tom Van Britsom, cyber insurance expert at Vanbreda Risk & Benefits, to give you insight into the state of the cyber insurance business. Vanbreda Risk & Benefits is a well-known independent insurance broker and risk consultant.

Tom Van Britsom, Vanbreda Risk & Benefits

The business world and cyber criminals have both changed. Can you explain?

The increased importance of cyber insurance is a direct consequence of a metamorphosis that has unfolded in two areas over the past few years. First of all, the business world has become largely digitized. Major steps have been taken not only in production processes, but also in invoicing and finance. The B2C market has become highly digitized too, with virtually everything now able to be ordered online.

Second, cyber criminals themselves have become much more professional. In the past, individuals represented the greatest threat in this area. They explored the boundaries of what was possible and tried to corner companies. This initial form of cyber crime has now given way to a more professional form which defies belief. For example, today there are gangs that employ an entire army of hackers and an accompanying call center to hold companies to ransom with maximum speed and efficiency.

How is a cyber policy tailored to this new reality?

Cyber insurance covers damage incurred by a company following a cyber incident. This can be caused by exposure to malware, viruses or hackers, as well as human error by an employee. The consequences are often severe: from loss of income due to interrupted operations, overtime logged by IT staff and the deployment of other professionals to sizeable claims from customers or suppliers affected by the data leak.

Today, cyber insurance is a comprehensive policy which – spurred on by the insurance industry – has adapted to the new context. Initially, there were two separate policies: one covered the insured party’s liability – from fines and notification fees to claims from companies that incurred damage as a result of a data breach or a virus via the insured party’s servers. A second policy was designed to cover personal damage incurred by the insured party, e.g. after operations were interrupted.

Now, however, both elements are combined into a single cyber insurance policy.

In recent years, the policy has been further expanded with new coverage, including cover against cyber theft and telephone hacking. The triggers of this policy have also become broader. Cyber insurance as it stands now can cover the financial consequences resulting from a security breach, human error or natural causes such as lightning.

Furthermore, many extra services have been added to this policy. Insured parties can now turn to helplines for legal assistance, crisis management and IT and PR support. Free scans are also offered that provide insight into a company’s vulnerability to cyber attacks and hackers.

The number of policies is obviously increasing exponentially. But what about the damage incurred?

In 2016, cyber insurance made its definitive breakthrough. Our experts at Vanbreda noticed, for example, that in 2017 the number of cyber policies taken out doubled in comparison to the year before.

The new European privacy regulation (GDPR) clearly creates an incentive for this, as there are substantial fines for those companies that do not comply. Today, administrative fines – along with all costs associated with the obligation of notification – can be insured in a cyber policy.

Unfortunately, many companies have recently been confronted by (attempted) cyber crime. This has also served as a wake-up call.

Vanbreda’s damage figures, and those of a few major cyber insurers, do not lie: one in thirteen of those insured has submitted a claim in the past five years. Our own figures (see graph below) show that 43% of the cases involved CryptoLockers. A data breach was the cause of just 5% of the claims, although that number will undoubtedly increase in 2018: from May 2018 onwards, an obligation of notification will apply to data leaks under the GDPR legislation.

There are two damage categories. One involves CryptoLockers: although they are now quite common, the damage is fortunately limited to at most EUR 10,000. The other form of damage is increasing all the time, with instances of cyber theft where a million euros disappears or operations are interrupted for days or weeks. The financial impact of this is huge. In Europe, there have been several well-known examples of cyber damage leading to millions of euros being lost.



What does the cyber insurance future hold?

The previous graph, with data from 2017, will almost certainly look completely different within a few years.

Due to the obligation of notification for data leaks, this type of damage will join the top three. In addition, Europe will impose fines amounting to up to 4% of global turnover in the event of data leaks following non-compliance with the GDPR regulation. This will also become evident in the amount of damages paid out.

It is clear that legislation is tightening and ignorance will no longer be accepted. Neither the government nor the business world is in any doubt of the current risks. In short, the usefulness of cyber policies is no longer under discussion.

 


More info on our Advisory Services?

Would you like to know what Datalumen can also mean to your GDPR or other data governance initiatives?

Have a look at our GDPR or Data Governance offerings, or contact us and start our Data Conversation.


By 2021, the CDO Role Will Be the Most Gender Diverse of All Technology-Affiliated C-level Positions.

As the role of chief data officer (CDO) continues to gain traction within organizations, a recent survey by Gartner, Inc. found that these data and analytics leaders are proving to be a linchpin of digital business transformation. 

The third annual Gartner Chief Data Officer survey was conducted July through September 2017 with 287 CDOs, chief analytics officers and other high-level data and analytics leaders from across the world. Respondents were required to have the title of CDO, chief analytics officer or be a senior leader with responsibility for leading data and/or analytics in their organization. 

“While the early crop of CDOs was focused on data governance, data quality and regulatory drivers, today’s CDOs are now also delivering tangible business value, and enabling a data-driven culture,” said Valerie Logan, research director at Gartner. “Aligned with this shift in focus, the survey also showed that for the first time, more than half of CDOs now report directly to a top business leader such as the CEO, COO, CFO, president/owner or board/shareholders. By 2021, the office of the CDO will be seen as a mission-critical function comparable to IT, business operations, HR and finance in 75 percent of large enterprises.” 

The survey found that support for the CDO role and business function is rising globally. A majority of survey respondents reported holding the formal title of CDO, revealing a steady increase over 2016 (57 percent in 2017 compared with 50 percent in 2016). Those organizations implementing an Office of the CDO also rose since last year, with 47 percent reporting an Office of the CDO implemented (either formally or informally) in 2017, compared with 23 percent fully implemented in 2016. 

“The steady maturation of the office of the CDO underlines the acceptance and broader understanding of the role and recognizes the impact and value CDOs worldwide are providing,” said Michael Moran, research director at Gartner. “The addition of new talent for increasing responsibilities, growing budgets and increasing positive engagement across the C-suite illustrate how central the role of CDO is becoming to more and more organizations.” 

Budgets are also on the rise. Respondents to the 2017 survey report an average CDO office budget of $8 million, representing a 23 percent increase from the average of $6.5 million reported in 2016. Fifteen percent of respondents report budgets of more than $20 million, contrasting with 7 percent last year. A further indicator of maturity is the size of the office of the CDO organization. Last year’s study reported total full-time employees at an average of 38 (not distinguishing between direct and indirect reporting), while this year reports an average of 54 direct and indirect employees, reflecting the federated nature of the office of the CDO design.

Gartner CDO Survey Results

Key Findings

CDOs shift from defense to offense to drive digital transformation

With more than one-third of respondents saying “increase revenue” is a top three measure of success, the survey findings show a clear bias developing in favor of value creation over risk mitigation as the key measure of success for a CDO. The survey also looked at how CDOs allocate their time. On a mean basis, 45 percent of the CDO’s time is allocated to value creation and/or revenue generation, 28 percent to cost savings and efficiency, and 27 percent to risk mitigation. 

“CDOs and any data and analytics leader must take responsibility to put data governance and analytics principles on the digital agenda. They have the right and obligation to do it,” said Mario Faria, managing vice president at Gartner. 

CDOs are responsible for more than just data governance

According to the survey, in 2017, CDOs are not just focused on data as the title may imply. Their responsibilities span data management, analytics, data science, ethics and digital transformation. A larger than expected percentage of respondents (36 percent) also report responsibility for profit and loss (P&L) ownership. “This increased level of reported responsibility by CDOs reflects the growing importance and pervasive nature of data and analytics across organizations, and the maturity of the CDO role and function,” said Ms. Logan. 

In the 2017 survey, 86 percent of respondents ranked “defining data and analytics strategy for the organization” as their top responsibility, up from 64 percent in 2016. This reflects a need for creating or modernizing data and analytics strategies amid an increasing dependence on data and insights in a digital business context.

CDOs are becoming impactful change agents leading the data-driven transformation

The survey results provided insight into the kind of activities CDOs are taking on in order to drive change in their organizations. Several areas seem to have a notable increase in CDO responsibilities compared with last year:

  • Serving as a digital advisor: 71 percent of respondents are acting as a thought leader on emerging digital models, and helping to create the digital business vision for the enterprise.
  • Providing an external pulse and liaison: 60 percent of respondents are assessing external opportunities and threats as input to business strategy, and 75 percent of respondents are building and maintaining external relationships across the organization’s ecosystem.
  • Exploiting data for competitive edge: 77 percent of respondents are developing new data and analytics solutions to compete in new ways.

CDOs are diverse and tackling a wide array of internal challenges

Gartner predicts that by 2021, the CDO role will be the most gender diverse of all technology-affiliated C-level positions and the survey results reflect that position. Of the respondents to Gartner’s 2017 CDO survey who provided their gender, 19 percent were female and this proportion is even higher within large organizations — 25 percent in organizations with worldwide revenue of more than $1 billion. This contrasts with 13 percent of CIOs who are women, per the 2018 Gartner CIO Agenda Survey. When it comes to average age of CDOs, 29 percent of respondents said they were 40 or younger. 

The survey respondents reported that there is no shortage of internal roadblocks challenging CDOs. The top internal roadblock to the success of the Office of the CDO is “culture challenges to accept change” — a top three challenge for 40 percent of respondents in 2017. A new roadblock, “poor data literacy,” debuted as the second biggest challenge (35 percent), suggesting that a top CDO priority is ensuring commonality of shared language and fluency with data, analytics and business outcomes across a wide range of organizational roles. When asked about engagement with other C-level executives, respondents ranked the relationship with the CIO and CTO as the strongest, followed by a broad, healthy degree of positive engagement across the C-Suite. 


More info on our Advisory Services?

Would you like to know what Datalumen can mean to your CDO Office?

Have a look at our Services Offering, or contact us and start our Data Conversation.


Datalumen - Total Data Management with Big Data - Infographic
The buzz about “big data” has been around for a couple of years now. Have we witnessed incredible results? Yes. But maybe they aren’t as impressive as we previously believed they would be. When it comes down to it, Big Data is actually about data integration, data governance and data security. The bottom line? Data needs to be properly managed, whatever its size and type of content. Hence, total data management approaches such as master data management are gaining momentum and are the way forward when it comes to tackling an enterprise’s Big Data problem.

Download the Total Data Management in Big Data infographic (PDF).

Data Integration:
Your First Big Data Stepping Stone

In order to make Big Data work, you need to address data complexity in the context of the golden V’s: Volume, Velocity and Variety. Accessing, ingesting, processing and deploying your data doesn’t happen automatically, and traditional data approaches based on manual processes simply don’t work. These typically fail because:

  • you need to be able to ingest data at any speed;
  • you need to process data in a flexible (read: scalable and efficient) but also repeatable way;
  • and last but not least, you need to be able to deliver data anywhere, which, given the dynamics of the ever-changing big data landscape, is definitely a challenge.

Data Governance:
Your Second Big Data Stepping Stone

A substantial number of people believe that Big Data is the holy grail and consider it a magical black box solution. They believe that you can dump whatever data into your Big Data environment and it will miraculously turn into useful information. Reality is somewhat different. In order to get value out of your initiative, you also need to actually govern your Big Data. You need to govern it in two ways:

Your Big Data environment is not a trash bin.

Key for success is that you are able to cleanse, enrich and standardize your Big Data. You need to prove the added value of your Big Data initiative, so don’t forget your consumers and make sure you are able to generate and share trusted insights. According to Experian’s 2015 Data Quality Benchmark Report, organizations suspect 26% of their data to be inaccurate. Reality is that with Big Data this percentage can be 2 to 3 times worse.

 

Your Big Data is not an island.

Governing your Big Data is one element, but in order to get value out of it you should be able to combine it with the rest of your data landscape. According to Gartner, through 2017, 90% of the information assets from big data analytic efforts will be siloed and unleverageable across multiple business processes. That’s a pity, given that using Master Data Management techniques you can break the Big Data walls down and create that 360° view on your customer, product, asset or virtually any other data domain.

Data Protection:
Your Third Big Data Stepping Stone

With the typical Big Data volumes and growth in mind, many organizations have limited to no visibility into the location and use of their sensitive data. However, new laws and regulations like the GDPR do require a correct understanding of the data risks based on a number of elements like data location, proliferation, protection and usage. This obviously applies to traditional data but is definitely also needed for Big Data. Especially if you know that a substantial number of organizations tend to use their Big Data environment as a black hole, the risk of also having unknown sensitive Big Data is real.

How do you approach this?

Classify

Classify your sensitive data. In a nutshell: data inventory, topology, business process and data flow mapping, and operations mapping.

De-identify

De-identify your data so it can be used wherever you need it. Think about reporting and analysis environments, testing, etc. For this purpose, masking and anonymization techniques and software can be used.

Protect

Once you know where your sensitive data is located, you can actually protect it through tokenization and encryption techniques. These techniques are required if you want to keep and use your sensitive data in its original format.
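
As a rough sketch of the tokenization idea (illustrative only, not a production token vault): the sensitive value is swapped for a random token that keeps the original format, and only systems with access to the vault can map it back.

```python
import secrets

token_vault: dict[str, str] = {}  # token -> original; stored and secured separately

def tokenize(card_number: str) -> str:
    """Swap a card number for a random token that preserves the original format."""
    token = "".join(secrets.choice("0123456789") for _ in card_number)
    token_vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    """Only systems with vault access can recover the original value."""
    return token_vault[token]

t = tokenize("4111111111111111")
print(t)              # e.g. '8093425561172840' - same shape, no real data
print(detokenize(t))  # the original number, for authorized use only
```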



More info on Big Data Management?

Would you like to know what
Big Data Management can also mean for your organization?
Have a look at our Big Data Management section 
and contact us.


 

The new digital environment, as well as a tough regulatory climate, forces the financial industry to adapt its business model in order to meet the demands of investors, regulators and customers. Today we mainly want to address the aspects of customer experience that traditional bankers ought to consider copying, or even exceeding. Because actually, customer experience could be the traditional bank’s biggest asset. Traditional banks are a one-stop shop for a broad range of financial products and services. This can be both an advantage and a competitive weakness relative to FinTech startups. Many traditional banks are still organized into silos, with business lines for individual products and services that use separate information systems and do not communicate with one another.

To improve the customer experience, banks must be able to analyze customer information (data) and make that data useful for both the business and the customers. This is basically what FinTech does. However, FinTechs first need to gather the data. Traditional banks with a good data governance program already have that data. They have an advantage and should leverage it.

To counter the extreme effectiveness and customer experience brought by new Fintech startups, some financial institutions are already upping their tech game. They work on the improvement of the user experience, they provide more insightful data analysis and increase cybersecurity.

While these are all true and important for banks, we believe getting “insightful data” is a little underestimated. There are no data insights without clean data, and there is no clean data without strong governance.

Data governance is all about processes that make sure that data is formally managed throughout the entire enterprise. Data governance is the way to ensure that data is correct and trustworthy. Data governance also makes employees accountable for anything bad that happens to the company as a result of a lack of data quality.

The role of data governance in the bank of the future?

The bank of the future is tech- and data-driven. Today’s digital capabilities turn the customer journey into a personalized experience. The bank of the future is predictive and proactive, and understands the customers’ needs. It’s some sort of “Google Now for Banking”, suggesting actions proactively. The bank of the future is a bank for individuals; it’s personalized in the range of services and products it offers to the individual, based on in-depth knowledge and understanding of the customer. By having up-to-date and correct data, you can truly serve customers. The “Bank of the Future” positions itself as ‘the bank that makes you the banker’. It thrives on interaction and a deep knowledge of its customers through data mining.

As the existing banking model is unbundled, everything about our financial services experience will change. In five to ten years, the industry will look fundamentally different. There will be a host of new providers and innovative new services. Some banks will take digital transformation seriously, others will buy their way into the future by taking over challengers, and some will lose out. Some segments will be almost universally controlled by non-banks; other segments will be better served within the structural advantages of a bank. Across the board, consumers will benefit as players compete on innovation and customer experience. This is only possible with solid multi-domain, cross-silo data management, topped with a solid data governance program.