Negotiations between business software giant Salesforce and data management software company Informatica fell through after the two couldn’t reach an agreement on terms. Discussions were reportedly well underway in April, and a successful deal would have been one of Salesforce’s largest acquisitions.

A Missed Opportunity or a Blessing?

Was this a missed opportunity, or could it be a blessing in disguise for both companies and their customers? Let’s explore some potential reasons why the failed acquisition might not be all bad:


One concern with large acquisitions is vendor lock-in. If Salesforce had acquired Informatica, some Informatica customers might have felt pressured to adopt Salesforce’s entire suite of products, even if they weren’t the best fit for their data governance, data quality, and data catalog needs. By remaining independent, Informatica can continue to focus on providing data management solutions that integrate with various platforms, giving customers more flexibility. It’s important to note that Salesforce customers would likely have faced similar pressure to adopt the Informatica platform had the acquisition gone through, potentially limiting their choice among the strong alternatives in the data management market. See the latest Forrester report, “The Seven Providers That Matter Most And How They Stack Up”.

Focus & Innovation

Large acquisitions can sometimes lead to a loss of focus for both parties in the deal. With the Informatica deal off the table, both Salesforce and Informatica can concentrate their resources on core business software development and continue to innovate in their own respective spaces.

Conflicting Product Portfolio – Informatica vs MuleSoft

Salesforce already owns MuleSoft, another integration platform. There would likely have been overlap in functionality between Informatica and MuleSoft, leading to product rationalization and confusion about the future roadmaps of both platforms. That confusion would create uncertainty for customers: they might not know which platform to invest in or how long their current platform (Informatica or MuleSoft) would be supported. This uncertainty could in turn raise the risk of rework or reinvestment as customers adapt to changes or migrate to a different platform.

Market Preference – Best-of-Breed vs All-in-One-Platform

Many businesses today prefer a “best-of-breed” approach, using the best tools from different vendors for specific tasks. An Informatica acquisition could have pushed Salesforce further towards an “all-in-one” platform strategy, which might not resonate with customers who favor a more flexible approach. The simplicity of an all-in-one platform or best-of-suite solution is appealing: fewer tools to manage and potentially lower costs with a single vendor. But real-world experience often reveals hidden drawbacks.


Overall, the failed Salesforce-Informatica deal allows both companies to retain their focus and better cater to customer preferences in a competitive market that offers a variety of data management solutions.



Need expert support with your data platform approach? Discover how Datalumen can help you. 



    Data’s effectiveness hinges on its quality, and that’s where Augmented Data Quality (ADQ) steps in, revolutionizing how we ensure our information assets are accurate, reliable, and ready to use.

    Traditional Data Quality: A Manual Marathon

    For years, data quality relied on largely manual processes. Data stewards meticulously combed through datasets, identifying and correcting errors such as inconsistencies, missing values, and formatting issues. This painstaking approach, while crucial, becomes increasingly inefficient as data volumes explode.

    Augmented Data Quality: AI-Powered Efficiency

    Augmented Data Quality tackles this challenge head-on by leveraging artificial intelligence (AI) and machine learning (ML). These powerful tools automate data quality tasks, freeing up human experts for more strategic endeavors.

    Here’s how ADQ makes a difference:

    • Automated anomaly detection: AI algorithms can scan huge datasets, pinpointing anomalies and potential errors that might escape manual analysis.
    • Intelligent data cleansing: ADQ can suggest corrections for identified issues, streamlining the cleaning process. Machine learning even allows the system to “learn” from past corrections, continuously improving its accuracy.
    • Proactive monitoring: ADQ can be configured for real-time monitoring, enabling early detection and rectification of data quality issues before they impact downstream processes.
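    To make the first bullet concrete, here is a minimal, illustrative Python sketch of automated anomaly detection using a simple z-score check on made-up sensor readings. Real ADQ platforms use far more sophisticated ML models; this only shows the basic idea:

```python
import statistics

def detect_anomalies(values, threshold=2.0):
    """Flag values whose z-score (distance from the mean,
    measured in standard deviations) exceeds the threshold."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # all values identical: nothing to flag
    return [v for v in values if abs(v - mean) / stdev > threshold]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 55.0]
print(detect_anomalies(readings))  # the 55.0 outlier is flagged
```

    In practice the threshold, and even the detection technique, would be tuned per dataset; ML-based ADQ tools learn these boundaries from the data instead of hard-coding them.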

    Benefits Beyond Efficiency

    The advantages of ADQ extend far beyond simply saving time and resources. Here’s what organizations can expect:

    • Enhanced data trust: ADQ fosters a culture of data trust within an organization. With a high degree of confidence in data quality, employees across departments can make informed decisions based on reliable information.
    • Improved decision-making: Clean, accurate data leads to better insights. ADQ empowers businesses to leverage data for strategic planning, risk management, and optimized operations.
    • Reduced costs: Data quality issues can lead to costly rework and missed opportunities. ADQ proactively addresses these challenges, minimizing associated costs.


    ADQ represents a significant step forward in data management. By harnessing the power of AI and automation, organizations can unlock the full potential of their data assets. As data continues to be the cornerstone of success, ADQ will be a critical differentiator for businesses that prioritize reliable information and data-driven decision making.


    Need support with your Data Quality initiatives? Discover how Datalumen can help you get there.



    In the age of data-driven decision making, organizations face the challenge of determining when data quality is sufficient for their needs. Striking the right balance between investing resources in improving data quality and achieving an acceptable level of accuracy and reliability is crucial. In this article, we provide a step-by-step approach to help organizations assess and establish the appropriate data quality level.

    Step 1: Define Data Quality Requirements

    The first step in determining the right data quality level is to define clear and specific requirements. Take the time to understand your organization’s goals, objectives, and the decisions that will be based on the data. Identify the key dimensions of data quality that matter most to your organization, such as accuracy, completeness, consistency, timeliness, and relevancy. Defining these requirements will serve as a guide for assessing data quality.

    Step 2: Evaluate Data Use Cases

    Next, evaluate the different use cases and scenarios where data will be utilized. Each use case may have varying requirements and tolerance levels for data quality. Analyze the potential impact of data errors or inaccuracies on the decisions made in each specific use case. This evaluation will help prioritize the allocation of resources and efforts towards improving data quality where it matters the most.

    Step 3: Assess Data Collection and Processing Methods

    Evaluate the data collection and processing methods employed by your organization. Examine the data sources, collection processes, and data transformation steps. Identify potential bottlenecks, vulnerabilities, and areas where errors or inaccuracies could be introduced. Streamline the data collection process and implement quality checks at each step to ensure the integrity and reliability of the data.

    Step 4: Implement Data Quality Controls

    To ensure data quality is at an acceptable level, implement data quality controls throughout the data lifecycle. This includes setting up validation rules, data cleansing routines, and data profiling techniques. Establish automated checks to identify and rectify data anomalies, outliers, and inconsistencies. Leverage technology and tools to automate these processes and minimize human errors.
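    As a simple illustration of such controls, the sketch below applies a few hypothetical validation rules to individual records. The field names and rules are invented for the example:

```python
import re

# Illustrative validation rules; field names and constraints are hypothetical.
RULES = {
    "email":   lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v or "") is not None,
    "age":     lambda v: isinstance(v, int) and 0 <= v <= 120,
    "country": lambda v: v in {"BE", "NL", "FR", "DE"},
}

def validate(record):
    """Return the list of fields that fail their validation rule."""
    return [field for field, rule in RULES.items() if not rule(record.get(field))]

print(validate({"email": "ann@example.com", "age": 34, "country": "BE"}))  # []
print(validate({"email": "not-an-email", "age": 34, "country": "BE"}))     # ['email']
```

    Checks like these can run at ingestion time so bad records are rejected or quarantined before they propagate downstream.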

    Step 5: Measure Data Quality

    Establish data quality metrics that align with your defined requirements. These metrics may include error rates, completeness percentages, timeliness measures, or any other specific indicators relevant to your organization. Implement mechanisms to measure and monitor data quality regularly. Leverage statistical analysis, data profiling, and data visualization techniques to gain insights into the overall quality level and identify areas for improvement.
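    For example, a completeness percentage, one of the most common data quality metrics, can be computed directly from the records themselves. The customer data below is invented for illustration:

```python
def completeness(records, field):
    """Share of records where the field is present and non-empty."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

customers = [
    {"name": "Ann",  "email": "ann@example.com"},
    {"name": "Bob",  "email": ""},                  # empty value
    {"name": "Cleo", "email": "cleo@example.com"},
    {"name": "Dan"},                                # field missing entirely
]
print(f"email completeness: {completeness(customers, 'email'):.0%}")  # 50%
```

    Tracked over time, metrics like this reveal whether quality is trending up or down rather than just giving a one-off snapshot.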

    Step 6: Set Tolerance Levels

    Define tolerance levels for data quality based on the specific use cases and requirements of your organization. Determine the acceptable margin of error for each use case. Consider factors such as the criticality of the decision being made, the potential impact of data errors, and the costs associated with improving data quality. Establishing tolerance levels will help determine when data quality is good enough to support the decision-making process effectively.
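    Tolerance levels can be captured as explicit, per-use-case thresholds so the “good enough” decision is transparent rather than ad hoc. The use cases and numbers below are purely illustrative:

```python
# Hypothetical tolerance levels per use case: the minimum acceptable
# quality score (e.g. completeness) for the data that use case depends on.
TOLERANCES = {
    "regulatory_reporting": 0.99,  # critical decision: near-perfect data required
    "marketing_campaign":   0.85,  # errors are cheaper, speed matters more
}

def good_enough(measured_quality, use_case):
    """Is the measured quality within tolerance for this use case?"""
    return measured_quality >= TOLERANCES[use_case]

print(good_enough(0.92, "marketing_campaign"))    # True
print(good_enough(0.92, "regulatory_reporting"))  # False
```

    The same measured score can pass one use case and fail another, which is exactly the point of setting tolerances per use case.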

    Step 7: Continuous Improvement

    Data quality is an ongoing process that requires continuous monitoring and improvement. Regularly review the established metrics and tolerance levels. Evaluate feedback from data consumers and stakeholders to identify areas for enhancement. Invest in training and education programs to improve data literacy within the organization. By fostering a culture of continuous improvement, you can ensure that data quality is consistently enhanced over time.


    Determining the right data quality level is a balancing act for organizations seeking to optimize resources while maintaining reliable insights. By following a structured methodology, including defining data quality requirements, evaluating use cases, assessing data collection and processing methods, implementing data quality controls, measuring data quality, setting tolerance levels, and embracing continuous improvement, organizations can strike the right balance. Achieving the right data quality level will provide confidence in the decision-making process, leading to better business outcomes and a competitive advantage in the data-driven landscape.



    Data quality management is a critical component in the successful realization of your data strategy, and there are several hot topics that are currently gaining traction in this area. Here are some of the latest trends in data quality & data observability:

    • DQaaS or DQSaaS
    • Hybrid Usage & Island Hopping
    • Machine Learning has hit the DQ space
    • From patchwork to a fundamental capability
    • From a technology-driven to a business-driven framework

    TREND #1. DQaaS or DQSaaS

    Data quality as a service (DQaaS or DQSaaS) is an emerging trend that involves outsourcing a subset of data quality management functionality to third-party cloud application providers. DQaaS providers offer tools and services to monitor and improve data quality, reducing the workload on in-house data teams. As with most SaaS, the service is delivered in a cloud-based or hosted model.

    We see two types of DQaaS:
    • One type provides a complete set of data quality functionality (equivalent to traditional on-premises offerings) running on cloud platforms. Clients typically order it on demand from cloud-enabled vendors and often use it on a subscription basis (one year or longer).
    • The other type consists of on-demand online services to, for example, validate and verify addresses or other relevant data assets. These micro data quality services are typically used on a pay-per-use / per-service-call basis.

    TREND #2. Hybrid Usage & Island Hopping

    Until recently, Data Quality was primarily applied in an operational/transactional context. DQ in an analytics context had also already been implemented in quite a few organizations, but what is relatively new is the expanded use of DQ in a number of other data-related initiatives such as MDM, DG, Data Engineering, and AI/ML. As part of that move, we also see a shift from island DQ usage tailored to one specific initiative towards organization-wide DQ usage. The organization-wide approach provides a number of benefits, ranging from more consistent data quality to enhanced collaboration, more efficient data processing, and reuse.

    TREND #3. Machine Learning has hit the DQ space

    We don’t need to tell you that AI/ML is hot and all over the place. The AI/ML examples you read about might not always seem relevant to you. However, DQ is one of those areas where AI/ML can deliver substantial added value. Machine learning algorithms can be used to automatically identify data quality issues and correct them. For example, machine learning models can be trained to detect duplicates, correct spelling errors, and identify missing data. Beyond more automated error resolution, many DQ applications are also expanding to provide insights by discovering relationships, patterns, and trends.

    We see both custom-built applications (using Python and other open-source libraries) and Data Quality platforms with embedded AI/ML functionality bringing DQ automation to the next level.
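    As a taste of what such automation looks like, here is a minimal duplicate-detection sketch using plain string similarity from Python’s standard library. Production DQ platforms use trained matching models rather than a fixed similarity ratio, but the principle is the same:

```python
from difflib import SequenceMatcher
from itertools import combinations

def likely_duplicates(names, threshold=0.85):
    """Return pairs of names whose similarity ratio meets the threshold."""
    pairs = []
    for a, b in combinations(names, 2):
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
            pairs.append((a, b))
    return pairs

# Invented customer names for illustration.
customers = ["Jon Smith", "John Smith", "Jane Doe", "J. Doe"]
print(likely_duplicates(customers))  # flags the Jon/John Smith pair
```

    An ML-based matcher would additionally learn from steward decisions (accepted or rejected merges) to refine its notion of “duplicate” over time.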

    TREND #4. From patchwork to a fundamental & companywide capability

    As Data Governance (DG) maturity in organizations grows, we also see them address Data Quality as a fundamental, integrated data capability. Instead of the so-called one-off usage for one specific case (data migration, CRM, etc.), many companies make DQ a structural component that they can reuse continuously across all their data-related initiatives. The benefit of this broader, embedded approach is that it makes it easier for organizations to demonstrate that DQ is a profit component rather than a cost component.

    TREND #5. From a technology-driven to a business-driven framework

    As mentioned in the previous trend, and as maturity increases, we are also observing a shift from an IT-driven towards a business-driven perspective. The main reason is that organizations require data quality to be seamlessly integrated into their business processes for optimal results.


    In conclusion, data quality management is a critical component of data management, and there are many exciting trends and technologies emerging in this area. From data profiling to machine learning, organizations have many tools and techniques available to improve data quality and drive business growth.


    What is the SAP Datasphere announcement all about?

    SAP has unveiled the SAP Datasphere solution, the latest iteration of its data management portfolio that simplifies customer access to business-ready data across their data landscape. In addition, SAP has formed strategic partnerships with leading data and AI companies such as Collibra, Confluent, Databricks and DataRobot to enrich the SAP Datasphere and help organizations develop a unified data architecture that securely combines SAP and non-SAP data.

    What is SAP Datasphere?

    SAP Datasphere is a comprehensive data service that delivers seamless and scalable access to mission-critical business data; it is in essence the next generation of SAP Data Warehouse Cloud. SAP has kept all the capabilities of SAP Data Warehouse Cloud and added new data integration, data cataloging, and semantic modeling features, which it will continue to build on in the future. More info on the official SAP Datasphere solution page.

    Why does this matter to you?

    The announcement is significant because it eliminates the complexity associated with accessing and using data from disparate systems and locations, spanning cloud providers, data vendors, and on-premise systems. Customers have traditionally had to extract data from original sources and export it to a central location, losing critical business context along the way and needing dedicated IT projects and manual effort to recapture it. With SAP Datasphere, customers can create a business data fabric architecture that quickly delivers meaningful data with the business context and logic intact, thereby eliminating the hidden data tax.

    As a solution partner in this ecosystem, we are excited about the collaboration and the added value it provides:

    • With Collibra SAP customers can deliver an end-to-end view of a modern data stack across both SAP and non-SAP systems, enabling them to deliver accurate and trusted data for every use, every user, and across every source.
    • Confluent and SAP are working together to make it easier than ever to connect SAP software data to external data with Confluent in real-time to power meaningful customer experiences and business operations.
    • Databricks and SAP share a vision to simplify analytics and AI with a unified data lakehouse, enabling them to share data while preserving critical business context.
    • DataRobot and SAP’s joint customers can now also leverage machine learning models trained on their business data with speed and scale to see value faster, using the SAP Datasphere as the foundation layer.


    Also want to understand how you can take your SAP and non-SAP data to the next level? Would you like to find out how Datalumen can help?


    In the world of data management, there are many terms and concepts that can be confusing. Two such concepts are data observability and data curation. While both are important for ensuring data accuracy and reliability, they have distinct differences. In this article, we will explore the key differences between data observability and data curation.

    What is Data Observability?

    Data observability refers to the ability to monitor and understand the behavior of data in real-time. It is the process of tracking, collecting, and analyzing data to identify any anomalies or issues. Data observability is often used in the context of monitoring data pipelines, where it can be used to identify issues such as data loss, data corruption, or unexpected changes in data patterns.

    Data observability relies on metrics, logs, and other data sources to provide visibility into the behavior of data. By analyzing this data, it is possible to identify patterns and trends that can be used to optimize data pipelines and improve data quality.
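    As a small illustration, an observability check over a batch of pipeline records might compare row counts and null rates against expected baselines. The thresholds and data below are invented for the example:

```python
# Minimal pipeline observability check: compare a batch of records
# against a baseline row count and a maximum null rate per field.
def observe(batch, baseline_rows, max_null_rate=0.05):
    alerts = []
    if len(batch) < 0.5 * baseline_rows:
        alerts.append(f"row count dropped: {len(batch)} vs baseline {baseline_rows}")
    for field in {k for row in batch for k in row}:
        nulls = sum(1 for row in batch if row.get(field) is None)
        if nulls / len(batch) > max_null_rate:
            alerts.append(f"null rate too high in '{field}': {nulls}/{len(batch)}")
    return alerts

batch = [{"id": 1, "amount": 9.5}, {"id": 2, "amount": None}]
print(observe(batch, baseline_rows=2))  # flags the high null rate in 'amount'
```

    Real observability tools track many more signals (schema changes, freshness, distribution drift), but the pattern of comparing live metrics against expectations is the same.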

    What is Data Curation?

    Data curation, on the other hand, refers to the process of managing and maintaining data over its entire lifecycle. It is the process of collecting, organizing, and managing data to ensure its accuracy, completeness, and reliability. Data curation involves tasks such as data cleaning, data validation, and data enrichment.

    Data curation is essential for ensuring that data is accurate and reliable. It involves the use of automated tools and manual processes to ensure that data is properly labeled, formatted, and stored. Data curation is particularly important for organizations that rely heavily on data analytics, as inaccurate or incomplete data can lead to faulty insights and poor decision-making.

    Key Differences Between Data Observability and Data Curation

    While data observability and data curation share some similarities, there are key differences between the two concepts. The main differences are as follows:

    • Focus: Data observability focuses on monitoring data in real-time, while data curation focuses on managing data over its entire lifecycle.

    • Purpose: Data observability is used to identify and troubleshoot issues in data pipelines, while data curation is used to ensure data accuracy and reliability.

    • Approach: Data observability relies on monitoring tools and real-time analysis, while data curation relies on automated tools and manual processes.


    In summary, data observability and data curation are two important concepts in the world of data management. While they share some similarities, they have distinct differences. Data observability is focused on real-time monitoring and troubleshooting, while data curation is focused on ensuring data accuracy and reliability over its entire lifecycle. Both concepts are important for ensuring that data is accurate, reliable, and useful for making informed decisions.


    Collibra has introduced a range of new innovations at the Data Citizens ’22 conference, aimed at making data intelligence easier and more accessible to users.

    Collibra Data Intelligence Cloud has introduced various advancements to improve search, collaboration, business process automation, and analytics capabilities. It has also launched new products to provide data access governance and enhance data quality and observability in the cloud. Collibra Data Intelligence Cloud merges an enterprise-level data catalog, data lineage, adaptable governance, continuous quality, and built-in data privacy to deliver a comprehensive solution.

    Let’s have a look at the newly announced functionality:

    Simple and Rich Experience is the key message


    Frequently, teams face difficulty in locating dependable data for their use. With the introduction of the Collibra Data Marketplace, this task has become simpler and quicker than ever before. Teams can now access pre-selected and sanctioned data through this platform, enabling them to make informed decisions with greater confidence and reliability. By leveraging the capabilities of the Collibra metadata graph, the Data Marketplace facilitates the swift and effortless search, comprehension, and collaboration with data within the Collibra Data Catalog, akin to performing a speedy Google search.

    Usage analytics

    To promote data literacy and encourage user engagement, it’s important to have a clear understanding of user behavior within any data intelligence platform. The Usage Analytics dashboard is a new feature that offers organizations real-time, actionable insights into which domains, communities, and assets are used most frequently, allowing teams to monitor adoption rates and take steps to optimize their data intelligence investments.


    Creating a user-friendly experience that allows users to quickly and easily find what they need is crucial. The revamped Collibra homepage offers a streamlined and personalized experience, featuring insights, links, widgets, and recommended datasets based on a user’s browsing history or popular items. This consistent and intuitive design ensures that users can navigate the platform seamlessly, providing a hassle-free experience every time they log into Collibra Data Intelligence Cloud.

    Workflow designer

    Data teams often find manual rules and processes to be challenging and prone to errors. Collibra Data Intelligence Cloud’s Workflow Designer, which is now in beta, addresses this issue by enabling teams to work together to develop and utilize new workflows to automate business processes. The Workflow Designer can be accessed within the Collibra Data Intelligence Cloud and now has a new App Model view, allowing users to quickly define, validate, and deploy a set of processes or forms to simplify tasks.


    Improved performance, scalability, and security

    Collibra Protect

    Collibra Protect is a solution that offers smart data controls, allowing organizations to efficiently identify, describe, and safeguard data across various cloud platforms. Collibra has collaborated with Snowflake, the Data Cloud company, to offer this new integration that enables data stewards to define and execute data protection policies without any coding in just a matter of minutes. By using Collibra Protect, organizations gain greater visibility into the usage of sensitive and protected data, and when paired with data classification, it helps them protect data and comply with regulations at scale.

    Data Quality & Observability in the Cloud

    Collibra’s latest version of Data Quality & Observability provides enhanced scalability, agility, and security to streamline data quality operations across multiple cloud platforms. With the flexibility to deploy this solution in any cloud environment, organizations can reduce their IT overhead, receive real-time updates, and easily adjust their scaling to align with business requirements.

    Data Quality Pushdown for Snowflake 

    The new feature of Data Quality Pushdown for Snowflake empowers organizations to execute data quality operations within Snowflake. With this offering, organizations can leverage the advantages of cloud-based data quality management without the added concern of egress charges and reliance on Spark compute.

    New Integrations

    Nowadays, almost 77% of organizations integrate up to five different types of data in their pipelines and use up to 10 different types of data storage or management technologies. Collibra is pleased to collaborate with top technology organizations worldwide to provide reliable data across a larger number of sources for all users. With new integrations currently in beta, mutual Collibra customers utilizing Snowflake, Azure Data Factory, and Google Cloud Storage can acquire complete visibility into cloud data assets from source to destination and offer trustworthy data to all users throughout the organization.


    Some of this functionality was announced as beta and is available to a number of existing customers for testing purposes.

    Want to Accelerate your Collibra time to value and increase adoption?

    Would you like to find out how Datalumen can help?  Contact us and start our data conversation.


    Customer & household profiling, personalization, journey analysis, segmentation, funnel analytics, acquisition & conversion metrics, predictive analytics & forecasting, …  The marketing goal of delivering a trustworthy and complete insight into the customer across different channels can be quite difficult to accomplish.

    A substantial number of marketing departments have chosen to rely on a mix of platforms ranging from CEM/CXM, CDP, CRM, eCommerce, Customer Service, Contact Center, and Marketing Automation to Marketing Analytics. Many of these platforms are best-of-breed and come from a diverse set of vendors, each a leader in its specific market segment. Internal custom-built solutions (Microsoft Excel, homebrew data environments, …) typically complete this type of setup.

    According to a Forrester study, although 78% of marketers claim that a data-driven marketing strategy is crucial, as many as 70% of them admit they have poor quality and inconsistent data.

    The challenges

    Creating a 360° customer view across this diverse landscape is not a walk in the park. All of these marketing platforms provide added value but are essentially separate silos. Each environment uses different data, and the data they have in common is typically used in different ways. If you need to join all these pieces together, you need some magical super glue. The reality is that none of the marketing platform vendors actually has this in-house.

    Another point of attention is your data scope. We don’t need to tell you that customer experience is the hot thing in marketing nowadays. However, marketers need to do much more than just analyze customer experience data in order to create real customer insight.

    Creating insight also requires that the data you analyze goes beyond the traditional customer data domain. Combining customer data with, for example, the proper product/service, supplier, and financial data is fundamental for this type of exercise. These extended data domains are usually lacking, or the required level of detail is not present, in any one particular platform.

    Recent research from KPMG and Forrester Consulting shows that 38% of marketers claim to have a high level of confidence in the data and analytics that drive their customer insights. That said, only a third of them seem to trust the analytics they generate from their business operations.

    The foundations

    Regardless of the mix of marketing platforms, many marketing leaders don’t succeed in taking full advantage of all their data. As a logical result, they also fail to make a real impact with their data-driven marketing initiatives. The underlying reason is that many marketing organizations lack a number of crucial data management building blocks that would allow them to break out of these typical martech silos. The most important data capabilities to take into account are:




    Master Data Management (aka MDM)

    Creating a single view, or so-called golden record, is the essence of Master Data Management. It allows you to make sure that a customer, product, or other entity is consistent across different applications.
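    A golden record is typically built with survivorship rules. The sketch below uses one deliberately simple, hypothetical rule (prefer the most recently updated non-empty value per field) applied to invented CRM and e-shop records for the same customer:

```python
# Hypothetical golden-record sketch: consolidate duplicate customer records
# by preferring the most recently updated non-empty value per field.
def golden_record(records):
    merged = {}
    for rec in sorted(records, key=lambda r: r["updated"]):  # oldest first
        for field, value in rec.items():
            if value not in (None, ""):
                merged[field] = value  # newer records overwrite older values
    return merged

crm   = {"name": "Ann Peeters", "email": "", "phone": "+32 475 11 22 33", "updated": "2024-01-10"}
eshop = {"name": "A. Peeters", "email": "ann@example.com", "phone": "", "updated": "2024-03-02"}
print(golden_record([crm, eshop]))  # keeps the CRM phone and the e-shop email
```

    Real MDM platforms combine many such rules (source trust scores, field-level confidence, steward overrides); this only illustrates the merge principle.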


    Business Glossary

    Having the correct terms & definitions might seem trivial, but the reality is that in the majority of organizations there is plenty of noise on the line. Crystal-clear terms and definitions are a basic requirement for having all stakeholders manage the data in the same way and for preventing conflicts and waste down the data supply chain.


    Data Catalog

    Imagine Google-like functionality to search through your data assets: find out what data you have, where it originates, and how and where it is being used.


    Data Quality

    The why of proper data quality is obvious for any data-consuming organization. If you have a disconnected data landscape, data quality is even more important because it also facilitates the automatic match & merge glue exercise you put in place to arrive at a common view of your data assets.


    Data Virtualization

    Getting real-time access to your data in an ad hoc and dynamic way is one of the missing pieces for getting to your 360° view on time and on budget. Forget about traditional consumer headaches such as long waiting times, misunderstood requests, lack of agility, etc.



    We intentionally use the term capability because this isn’t an IT story. All of these capabilities have a people, process, and technology aspect, and all of them should be driven by the business stakeholders. IT and technology play a facilitating role.

    The results

    If you manage to put in place the described data management capabilities you basically get in control. Your organization can find, understand and make data useful. You improve the efficiency of your people and processes, and reduce your data compliance risks. The benefits in a nutshell:

    1. Get full visibility of your data landscape by making data available and easily accessible across your organization. Deliver trusted data with documented definitions and certified data assets, so users feel confident using the data. Take back control using an approach that delivers everything you need to ensure data is accurate, consistent, complete and discoverable.
    2. Increase efficiency of your people and processes. Improve data transparency by establishing one enterprise-wide repository of assets, so every user can easily understand and discover data relevant to them. Increase efficiency using workflows to automate processes, helping improve collaboration and speed of task completion. Quickly understand your data’s history with automated business and technical lineage that help you clearly see how data transforms and flows from system to system and source to report.
    3. Reduce data and compliance risks. Mitigate compliance risk by setting up data policies to control data retention and usage that can be applied across the organization, helping you meet your data compliance requirements. Reduce data risk by building and maintaining a business glossary of approved terms and definitions, helping ensure clarity and consistency of data assets for all users.

    42% of data-driven marketers say their current technology is out of date and insufficient to help them do their jobs (Walker Sands Communications, State of Marketing Technology report).


    The data you need to be successful with your marketing efforts is there. You just have to transform it into usable data so that you can get accurate insights and make better decisions. The key is getting rid of your marketing platform silos by making sure you have the proper data foundations in place: the foundations that speed up and extend the capabilities of your data-driven marketing initiatives.

    Need help unlocking your marketing data?

    Would you like to find out how Datalumen can also help you with your marketing & data initiatives?  Contact us and start our data conversation.