At the Gartner Data & Analytics Summit 2025 in London, attendees explored the evolving landscape of data, analytics, and artificial intelligence. The event highlighted that organizations must build AI initiatives on a foundation of robust data governance, strategic alignment, and a culture prepared for transformation.
Generative AI: From Hype to Strategic Imperative
Generative AI has evolved from experimental adoption to strategic integration. Gartner analysts emphasized that without high-quality, accessible data, AI projects will likely fail. Organizations need to eliminate data silos and ensure real-time data integration to properly fuel AI models. As noted at the summit, “If your data isn’t ready, your AI won’t be business-ready.”
Governance: The Trust Stack for AI
AI governance has transformed from a compliance requirement to a strategic enabler. The summit stressed the need for adaptive governance models ensuring AI systems are accurate, explainable, and aligned with business goals. This includes enhancing data quality controls, implementing explainability, and monitoring for bias and compliance risks. Gartner forecasts that by 2027, 60% of enterprises will fail to achieve expected value from AI initiatives due to inadequate governance.
Composable Data Architectures: Flexibility and Scalability
Open, composable data platforms were highlighted as crucial for avoiding vendor lock-in and integrating best-of-breed tools. These architectures enable seamless AI integration across multi-cloud and on-premises environments, allowing organizations to combine various AI models, databases, and analytics tools to meet evolving business requirements.
Upskilling: Building AI-Ready Teams
Integrating AI into business processes requires a workforce skilled in AI literacy. Organizations should train business leaders to interpret AI-generated insights, upskill data teams to manage AI-driven workflows, and create new roles focused on AI governance and ethics. Investing in AI education positions enterprises to maximize AI’s potential as the technology advances. Read also Establishing Robust Data Literacy – From Awareness to Action for a step-by-step plan to address data and, by extension, AI literacy.
Data Fabric and Data Mesh: Complementary Architectures
The summit revealed how Data Fabric and Data Mesh architectures complement each other. Data Fabric leverages metadata for automation, while Data Mesh decentralizes data delivery, treating data as a product. Combining these approaches creates scalable, flexible data architectures that improve efficiency and support business-driven data initiatives. Read also Data Fabric vs Data Mesh: An Apples & Oranges Story.
AI Governance as a Differentiator
Effective AI governance is becoming a competitive advantage. Organizations with comprehensive governance frameworks can boost productivity, drive competitive advantage, and enhance brand value through responsible AI implementation. Currently, only 5% of organizations have comprehensive governance for generative AI, presenting a significant opportunity for those prioritizing trust and compliance in their AI strategies. Read also AI & Data Governance: The Intersection You Can’t Miss to Make AI Responsible & Trustworthy.
Conclusion
The Gartner Data & Analytics Summit 2025 emphasized that successful AI adoption requires more than technology. It demands a holistic approach including data readiness, adaptive governance, flexible architectures, and skilled talent. Organizations embracing these principles will transform AI from a technological novelty into a strategic asset driving innovation and competitive advantage.
CONTACT US
Exploring AI & Data Governance? Redesigning your data architecture? Datalumen provides expert support in organizing your data architecture and broader data agenda. Contact us to discover how we can help you succeed.
In today’s data-driven world, organizations are increasingly recognizing the immense value hidden within their data. However, simply collecting data isn’t enough. To truly unlock its potential, businesses need a well-defined data architecture supported by robust data governance. This article explores the critical distinction between business data architecture and technical data architecture, the two pillars of data architecture, and how data governance serves as the bridge between them to deliver meaningful business outcomes.
Business Data Architecture: Laying the Foundation with Business Needs
Business data architecture serves as the strategic blueprint for your organization’s data from a business perspective. It addresses what data you need and why, connecting this data to business goals and processes. It focuses on the meaning and context, emphasizing business semantics rather than technical implementation. The primary audience includes business stakeholders such as business analysts, data owners, subject matter experts, and leaders who understand core business requirements and how data supports strategic objectives.
At its heart, business data architecture creates conceptual and logical data models that represent key business entities (customers, products, orders), their attributes, and relationships, all described in business terms. For instance, a business data architect might define “Customer” as an entity with attributes like “Customer Name,” “Contact Information,” and “Purchase History,” and establish relationships with entities like “Order” and “Product.”
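To make this concrete, such a logical model can be sketched in a technology-independent way, for example as Python dataclasses. The entity and attribute names below simply mirror the illustration above and are purely hypothetical, not a prescribed model:

from dataclasses import dataclass, field
from typing import List

# Logical model: business entities, their attributes and relationships,
# expressed without any storage or platform decisions.
@dataclass
class Product:
    product_id: str              # business identifier, not yet a database key
    name: str
    category: str

@dataclass
class Order:
    order_number: str
    order_date: str              # a business-level date; no column type chosen yet
    products: List[Product] = field(default_factory=list)        # an Order relates to Products

@dataclass
class Customer:
    customer_name: str
    contact_information: str
    purchase_history: List[Order] = field(default_factory=list)  # a Customer relates to Orders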
Key Functions of Business Data Architecture
Business data architecture identifies and defines core entities, establishing a common organizational understanding of key data elements. It maps relationships between data elements, showing how different pieces connect from a business perspective. The architecture determines data quality requirements, establishing necessary levels of accuracy, completeness, and consistency for various business processes. It analyzes how data supports business decisions through reporting, analytics, and strategic planning. Furthermore, it defines ownership and governance policies, assigning responsibility for data accuracy and integrity while outlining rules for access and usage.
Deliverables of Business Data Architecture
The outputs of business data architecture include conceptual data models illustrating the main entities and relationships from a business perspective. More detailed logical data models define attributes, data types, and relationships in a technology-independent manner. Business glossaries and data dictionaries provide comprehensive terminology definitions, ensuring consistent language across the organization. High-level data flow diagrams show how information moves through key business processes, while data governance frameworks outline the policies, procedures, and responsibilities for data management.
Ultimately, business data architecture provides the “why” behind the data, ensuring alignment between data strategy and business strategy, so that collected and managed data truly serves organizational needs.
Technical Data Architecture: Bringing the Blueprint to Life
Technical data architecture deals with the practical implementation and management of data using specific technologies and systems. It translates the business blueprint into concrete plans for how data will be stored, processed, secured, and made accessible. The primary audience includes technical stakeholders such as data engineers, database administrators, system architects, and IT professionals responsible for designing, building, and maintaining the data infrastructure.
Key Functions of Technical Data Architecture
Technical data architecture involves selecting appropriate storage systems by choosing the right types of databases, warehouses, and storage technologies based on specific requirements and performance needs. It includes physical database design, creating schemas, tables, columns, indexes, and other objects optimized for efficiency. The architecture implements integration mechanisms, building ETL/ELT processes and data pipelines to move and transform data between systems. It develops security protocols with access controls, encryption methods, and protection measures against unauthorized access. Performance optimization ensures system responsiveness and efficiency, while data lineage tracking monitors how information flows through various systems.
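As a rough illustration of the contrast with the business view, a physical design for part of that same model might look as follows. This is a minimal sketch using SQLAlchemy against SQLite; the table names, column types, and index are illustrative assumptions rather than a recommended design:

from sqlalchemy import (Column, Date, ForeignKey, Index, Integer, Numeric,
                        String, create_engine)
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    customer_id = Column(Integer, primary_key=True)       # surrogate key: a physical design choice
    customer_name = Column(String(200), nullable=False)
    contact_email = Column(String(320))

class Order(Base):
    __tablename__ = "sales_order"
    order_id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customer.customer_id"), nullable=False)
    order_date = Column(Date, nullable=False)
    total_amount = Column(Numeric(12, 2))

# Physical optimization: an index supporting a common access pattern
# ("all orders for a customer, ordered by date").
Index("ix_sales_order_customer_date", Order.customer_id, Order.order_date)

# Materialize the schema on a concrete engine (SQLite here, purely for illustration).
engine = create_engine("sqlite:///example.db")
Base.metadata.create_all(engine)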
Deliverables of Technical Data Architecture
The concrete outputs include physical data models and database schemas that define the actual implementation of data structures. Integration pipelines show how data moves between systems, while security architectures detail protection mechanisms. Data warehouse and lake designs provide blueprints for analytical environments, accompanied by performance optimization plans to ensure system efficiency. Together, these elements create the technical foundation that supports business data needs.
The Bridge: Data Governance as the Crucial Connector
The Critical Interplay Between Business and Technology
Business and technical data architecture must work in harmony for effective data management. Business architecture defines the “what” and “why” of data needs, while technical architecture determines the “how” of implementation. Imagine trying to build a house without an architect’s blueprint – the construction team wouldn’t know what to build or how the different parts should fit together. Similarly, a strong technical data architecture without a solid understanding of business needs risks building a system that doesn’t actually solve the right problems or deliver the required value.
Data Governance: The Framework for Success
Data Governance (DG) serves as the essential bridge between business and IT, ensuring that the data landscape is managed effectively to enable strategic execution. DG guarantees that business and technical architectures remain aligned through clear communication channels and shared understanding. It also ensures that data assets deliver measurable business value through proper management, quality control, and strategic utilization.
Key Principles for Effective Data Governance
Effective data governance focuses primarily on behavior change and communication improvement rather than simply deploying technological tools. Organizations should position data governance as a fundamental business function, similar to finance or compliance, with clear responsibilities and accountability. Communication about data governance should emphasize business outcomes such as return on investment and risk mitigation, rather than focusing solely on policies and procedures.
A critical aspect involves clearly separating yet connecting business data architecture and technical data architecture, acknowledging their distinct roles while ensuring they work together seamlessly. Data governance must facilitate ongoing collaboration between business and technical teams, creating forums for regular communication, joint problem-solving, and shared decision-making regarding data assets.
Conclusion: Creating a Cohesive Data Strategy
By recognizing the distinct roles of business and technical data architecture, and implementing a robust data governance framework to bridge them, organizations can build an effective data landscape that drives business value.
This comprehensive approach ensures that business needs drive technical implementation while technical capabilities inform business possibilities. Data governance provides the structure for sustainable success, guiding the organization’s data journey through changing business requirements and evolving technologies.
In the data-driven era, this integrated strategy is essential for organizations seeking to transform data from a resource into a true strategic asset. The clear delineation between business and technical data architecture, connected through thoughtful data governance practices, creates the foundation for data-driven decision making, operational excellence, and strategic advantage in an increasingly competitive landscape.
CONTACT US
Is your data architecture ready for the future? Datalumen provides expert support in organizing your data architecture and broader data agenda. Contact us to discover how we can help you succeed.
Let’s get real about a problem that keeps CDOs, CAIOs, CIOs and basically any manager involved with data up at night: technical debt. Again, it’s not just another corporate buzzword – it’s the silent killer of efficiency, innovation, and organizational success. Just imagine your data ecosystem as a complex building. Technical debt is like constructing each floor with progressively worse materials and less attention to structural integrity.
Anatomy of Technical Debt: Where Does It Really Come From?
The Pressure Cooker of Modern Business
Picture this: Your team is racing against an impossible deadline. The CEO wants insights yesterday, stakeholders are breathing down your neck, and you’ve got limited resources. Something’s got to give – and that something is usually quality.
The landscape of technical debt is shaped by a perfect storm of challenges. Organizations often find themselves trapped in a cycle of quick wins and immediate solutions. The “quick win” trap is particularly insidious – delivering a solution that works now but will be a nightmare to maintain later. Resource constraints force teams to do more with less, cutting corners to meet immediate needs. Skill gaps emerge when organizations lack the right expertise to build robust, scalable solutions. And perhaps most challenging of all is the rapidly changing business landscape, where requirements shift faster than infrastructure can adapt.
The Ugly Manifestations of Technical Debt
Frankenstein-Pipelines: When Data Flows Become Data Disasters
Imagine a data pipeline that looks like it was assembled by a mad scientist. These Franken-pipelines might work, but they’re held together by hopes, prayers, and digital duct tape. They feature inconsistent data transformations, zero error handling, no clear documentation, and performance that degrades faster than a budget smartphone.
The Data Silo Syndrome
Organizations often become a collection of data kingdoms, with each department building their own data solutions. These information fortresses use different tools and standards, creating deep isolation that prevents holistic insights. It’s like having multiple teams speaking different languages, each convinced their dialect is the only true way to communicate.
The Documentation Black Hole
No documentation is like a company where everyone keeps their knowledge locked inside their heads. When a key team member leaves, they take an entire universe of understanding with them. It’s institutional amnesia in its purest form – leaving behind systems that become increasingly mysterious and incomprehensible.
The True Cost: Beyond Just Technical Challenges
Technical debt isn’t just a technical problem – it’s a full-blown business nightmare that can silently erode an organization’s capabilities and potential. When we talk about the real impact of technical debt, we’re not just discussing lines of code or system inefficiencies. We’re talking about a cascading effect that touches every aspect of a business.
From a financial perspective, the consequences are profound and far-reaching. Organizations find themselves trapped in a never-ending cycle of increased maintenance costs, where valuable resources are constantly diverted from innovation to simply keeping existing systems afloat. The time-to-market for new products and services becomes painfully slow, as teams are bogged down by complex, fragile systems that require constant firefighting.
But the true damage goes far beyond spreadsheets and financial projections. The human cost of technical debt is equally devastating. Team morale plummets as talented professionals find themselves constantly wrestling with poorly designed systems instead of doing meaningful, innovative work. Burnout becomes a very real and pressing concern, with skilled team members feeling trapped and frustrated by the technical quicksand they’re forced to navigate daily.
Strategies for Taming the Technical Debt Beast
Proactive Debt Management
Treating your data ecosystem like a financial portfolio requires regular audits and strategic thinking. Not all technical debt is created equal, so creating a prioritization matrix becomes crucial. Organizations must assess the impact versus the effort required to resolve each issue, developing a strategic remediation roadmap that balances immediate needs with long-term sustainability.
Cultural Transformation
Technical debt isn’t just a technical challenge – it’s a cultural one. This requires a fundamental shift in organizational mindset. Moving from “just get it done” to “get it done right” demands creating psychological safety for addressing systemic issues. It means rewarding long-term thinking over short-term gains and implementing continuous learning initiatives that empower teams to build better, more sustainable solutions.
The 90-Day Technical Debt Reset
Transforming your technical landscape doesn’t happen overnight, but a structured approach can create meaningful change. In the first month, conduct a comprehensive technical debt audit and create a prioritized remediation list. Secure leadership buy-in to ensure organizational support. The second month focuses on addressing high-impact, low-effort items while beginning to implement governance frameworks. By the third month, initiate major system refactoring, implement new data quality processes, and train teams on best practices.
Warning Signs: Is Your Organization Drowning in Technical Debt?
Watch for red flags like frequent system failures, increasing time to implement new features, growing complexity of simple tasks, high turnover in technical teams, and difficulty integrating new technologies. These are symptoms of a deeper systemic issue that requires immediate attention.
Conclusion: Your Technical Debt Transformation Journey
Managing technical debt is not a destination – it’s a continuous journey of improvement. It requires strategic thinking, cultural commitment, ongoing investment, patience, and persistence. Every line of code, every data pipeline, every system, every process change is an opportunity to build something better than what came before.
Pro Tip 1 – Remember: The best time to address technical debt was yesterday. The second-best time is right now.
Pro Tip 2 – Sustainable data management & data governance is not an expense – it’s an investment in your organization’s future.
CONTACT US
Need expert support organizing your data agenda? Reach out and discover how Datalumen has the expertise and experience to help you.
In today’s data-driven world, implementing a data catalog is no longer a luxury but a necessity for organizations looking to truly leverage their data assets. While the allure of cutting-edge technology is strong, the success of your data catalog initiative hinges on a solid foundation of non-technical considerations. This guide explores what you, as a data leader, need to know to avoid common pitfalls and ensure a thriving data catalog.
Evaluating Metadata Management Requirements
Before diving into data catalog technology, take a step back and thoroughly understand your organization’s unique metadata management needs. This involves identifying the different types of metadata you need to capture and manage. Consider the following questions, along with concrete examples:
What are your data catalog’s primary use cases?
Data Discovery: Do users struggle to find the right data? If so, you’ll need rich descriptions, keywords, tags, and potentially data previews.
Data Governance: Are you subject to regulations like GDPR? This necessitates robust data lineage tracking to understand where sensitive data originates and how it’s used.
Data Quality: Do you need to monitor and improve data accuracy? You might need to capture metadata about data quality rules, validation processes, and error rates.
Data Understanding & Context: Do business users lack context about technical datasets? You’ll need business glossaries, data dictionaries, and the ability to link technical metadata to business terms.
What types of metadata do you need to manage? (A small illustrative sketch follows this list of questions.)
Technical Metadata: This includes information about the structure of your data, such as table names, column names, data types, and schemas.
Business Metadata: This provides context and meaning to the data, including business definitions, ownership information, data sensitivity levels, and relevant business processes.
Operational Metadata: This relates to the processing and movement of data, such as data lineage (where data comes from and where it goes), data transformation history, and job execution logs.
What are the key performance indicators (KPIs) for your data catalog?
Time to Find Data: How much time do data analysts currently spend searching for data? Aim to reduce this significantly.
Data Quality Scores: Track improvements in data quality metrics after the catalog implementation.
Adoption Rate: How many users are actively using the data catalog?
Compliance Adherence: Measure how the data catalog helps in meeting regulatory requirements.
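As a simple illustration of the metadata types listed above, the catalog entry for a single dataset might carry something like the following; the field names and values are hypothetical and only meant to show how the three categories differ:

# Hypothetical catalog entry for one dataset, split by metadata type.
catalog_entry = {
    "technical_metadata": {
        "schema": "sales",
        "table": "customer_orders",
        "columns": {"order_id": "INTEGER", "order_date": "DATE", "amount": "DECIMAL(12,2)"},
    },
    "business_metadata": {
        "definition": "All confirmed customer orders, one row per order.",
        "data_owner": "Head of Sales Operations",
        "sensitivity": "Internal",
        "glossary_terms": ["Order", "Customer"],
    },
    "operational_metadata": {
        "upstream_sources": ["crm.orders_raw"],
        "load_job": "nightly_orders_etl",
        "last_loaded_at": "2025-01-15T02:00:00Z",
        "row_count": 1250000,
    },
}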
By thoughtfully addressing these questions, you’ll lay a strong foundation for choosing the right data catalog technology and ensuring its successful adoption within your organization.
Assessing the Readiness of Your Organization
Implementing a data catalog requires a significant amount of planning, resources, and organizational buy-in. As a data and analytics leader, you should assess your organization’s readiness for a data catalog implementation by considering the following:
Do you have a clear data strategy and governance framework in place? Is your data strategy clearly defined and communicated across the organization? Does your data governance framework encompass policies, roles, and responsibilities related to data management? A lack of these can hinder catalog adoption and make it difficult to define what data should be cataloged and how it should be governed.
Are your data stakeholders aligned and committed to the implementation? How will you measure alignment and commitment? Engage stakeholders through workshops, demos, and by highlighting the benefits the data catalog will bring to their specific teams. Without buy-in, adoption will be slow and the catalog may not be effectively utilized.
Do you have the necessary resources (e.g., budget, personnel, technology) to support the implementation? Be specific about the types of personnel needed, such as data stewards to define and maintain metadata, and catalog administrators to manage the platform. Inadequate resources can lead to delays and an incomplete implementation.
Are your data quality and data governance processes mature and well-established? While a data catalog can help improve these, a basic level of maturity is needed for effective implementation. If your data is riddled with errors or governance policies are non-existent, the catalog will reflect these issues.
Sample Dashboard Monitoring Data Maturity
Best Practices for Getting Started
To ensure a successful implementation of a data catalog, follow these best practices:
Start small and realistic: Begin with a pilot project or a small-scale implementation to test and refine your approach. Identify a specific business problem or a department with high data maturity for the pilot. This allows you to learn and adapt before a full-scale rollout.
Engage the right stakeholders: Involve data stakeholders throughout the implementation process to ensure their needs are met and to build buy-in. Recommend creating a cross-functional working group or a dedicated data catalog team with representatives from different business units and IT.
Define clear use cases: Clearly define the primary use cases for your data catalog to ensure it meets the needs of your organization. Prioritize use cases based on business value and feasibility to demonstrate early success and ROI.
Choose the right technology: Select a data catalog solution that aligns with your organization’s metadata management requirements and technology stack. Also, choose a data catalog that matches not only your current but also your future needs. Consider factors like integration capabilities with existing systems, user interface, scalability, security, and vendor support. Conduct thorough demos and proof-of-concepts before making a decision.
Monitor and measure: Establish KPIs to monitor and measure the success of your data catalog implementation. Track usage statistics, user feedback, and the impact of the catalog on the defined KPIs to demonstrate value and identify areas for improvement.
Establish ongoing management and governance: Recognize the importance of continuous maintenance, data stewardship, and evolving the data catalog as the organization’s data landscape changes. Define roles and responsibilities for maintaining the catalog’s accuracy and relevance.
Common Pitfalls to Avoid
When implementing a data catalog, avoid the following common pitfalls:
Lack of clear use cases: Failing to define clear use cases can lead to a data catalog that doesn’t meet the needs of your organization, resulting in a tool that no one uses or finds valuable.
Insufficient stakeholder engagement: Failing to engage stakeholders throughout the implementation process can lead to a lack of buy-in and adoption, resulting in resistance to adoption and a lack of data contribution.
Poor technology choice: Selecting a data catalog solution that doesn’t align with your organization’s metadata management requirements can lead to a failed implementation, causing limitations, performance issues, and ultimately, a failed project.
Inadequate resources: Failing to allocate sufficient resources (e.g., budget, personnel, technology) can lead to a slow or unsuccessful implementation, causing delays, incomplete implementation, and lack of ongoing maintenance.
Conclusion
Implementing a data catalog is a journey, not a destination. By focusing on the foundational elements of understanding your requirements, assessing your organization’s readiness, and adhering to best practices, you can pave the way for a successful implementation that will unlock the true potential of your data assets and empower your organization to make more informed decisions.
CONTACT US
Need expert support to make your data catalog initiative successful? Need help with your overall data agenda? Discover how Datalumen can help you.
In today’s digital age, the importance of cybersecurity and data governance cannot be overstated. With the increasing frequency and sophistication of cyber threats, organizations must adopt robust measures to protect their data and ensure compliance with regulatory requirements. One such regulation that has gained significant attention is the NIS2 Directive. This article explores the link between NIS2 and data governance, highlighting how they work together to enhance cybersecurity and data management practices.
Understanding NIS2
The NIS2 Directive, officially known as the Network and Information Security Directive 2, is a European Union (EU) regulation aimed at strengthening cybersecurity across member states. It builds upon the original NIS Directive introduced in 2016, expanding its scope and requirements to address the evolving threat landscape. NIS2 came into effect on January 16, 2023, and member states had until October 17, 2024, to transpose its measures into national law.
NIS2 focuses on several key areas:
Expanded Scope: NIS2 covers a broader range of sectors, including healthcare, public administration, food supply chains, manufacturing, and digital infrastructure.
Harmonized Requirements: It establishes consistent cybersecurity standards across the EU, ensuring that organizations adopt uniform practices for incident reporting, risk management, and security measures.
Accountability and Governance: NIS2 places a strong emphasis on top-level management accountability, making executives personally liable for non-compliance.
Increased Penalties: Organizations face significant fines for non-compliance, up to €10,000,000 or 2% of global annual revenue.
Although the implementation deadline has passed, the path to full adoption varies across the EU. To provide an overview, here is a map showing the transposition status, grouped into four distinct stages.
The Role of Data Governance
Data governance is in essence the practice of managing data quality, security, and availability within an organization. It involves defining and implementing policies, standards, and procedures for data collection, ownership, storage, processing, and use. Effective data governance ensures that data is accurate, secure, and accessible for business intelligence, decision-making and other operational purposes.
Key components of data governance include:
Data Quality: Ensuring that data is accurate, complete, and reliable.
Data Security: Protecting data from unauthorized access, breaches, and cyber threats.
Data Availability: Making data accessible to authorized users when needed.
Compliance: Adhering to regulatory requirements and industry standards.
The Link Between NIS2 and Data Governance
NIS2 and data governance are closely intertwined, as both aim to enhance the security and management of data within organizations. Here are some ways in which they are linked:
Risk Management: NIS2 requires organizations to implement robust risk management practices to mitigate cyber threats. Data governance plays a crucial role in this by ensuring that data is properly managed, secured, and monitored for potential risks.
Incident Reporting: NIS2 mandates timely reporting of cybersecurity incidents to relevant authorities. Effective data governance ensures that organizations have the necessary processes and tools in place to detect, report, and respond to incidents promptly.
Compliance: Both NIS2 and data governance emphasize compliance with regulatory requirements. Organizations must establish policies and procedures to ensure that they meet the standards set by NIS2 and other relevant regulations.
Accountability: NIS2 places accountability on top-level management for cybersecurity practices. Data governance supports this by defining roles and responsibilities for data management, ensuring that executives are aware of their obligations and can be held accountable for non-compliance.
Data Security: NIS2 aims to enhance the security of network and information systems. Data governance complements this by implementing security measures to protect data from breaches and unauthorized access.
Conclusion
The NIS2 Directive and data governance are essential components of a comprehensive cybersecurity strategy. By working together, they help organizations protect their data, mitigate risks, and ensure compliance with regulatory requirements. As cyber threats continue to evolve, the importance of robust data governance and adherence to NIS2 cannot be overstated. Organizations must prioritize these practices to safeguard their data and maintain a high level of cybersecurity.
CONTACT US
Need expert support to make your data security and data governance strategy more solid and minimize risk? Need help with your overall data agenda? Discover how Datalumen can help you.
In the world of data management, choosing the right strategy to develop and deploy your solutions can significantly impact your success. Two popular approaches are the Minimum Viable Product (MVP) and the Exceptional Viable Product (EVP). Understanding the differences between these approaches and knowing when to use each can help you make informed decisions for your data management projects.
Understanding MVP in Data Management
The concept of a Minimum Viable Product (MVP) is about creating a basic version of your data management solution with just enough features to satisfy early users and gather valuable feedback. This approach, popularized by Eric Ries in “The Lean Startup,” aims to test core hypotheses and validate demand with minimal investment of time and resources.
Advantages of MVP:
Quick Results & Feedback: By releasing a basic version early, you can gather user feedback and make necessary adjustments before investing heavily in development.
Reduced Risk: Starting small helps you avoid wasting resources on features that users may not need or want.
Iterative Improvement: Continuous feedback allows for iterative improvements, ensuring the final product better meets user needs.
Exploring EVP in Data Management
On the other hand, an Exceptional Viable Product (EVP) focuses on delivering a standout solution that goes above and beyond what’s currently available. The goal is to provide superior value and an unparalleled user experience from day one. This approach requires a deep understanding of your target audience and a relentless focus on innovation and quality.
Advantages of EVP:
High & Broader User Satisfaction: By delivering a high-quality product from the start, you can create a loyal user base that advocates for your solution.
Potential Market Differentiation: An EVP can generate a broader impact and as a result can help you stand out in a crowded market by offering unique features and exceptional performance.
Long-term Value: Investing in a comprehensive solution upfront can lead to long-term benefits and a stronger market position.
Choosing Between MVP and EVP
When deciding between an MVP and an EVP for your data management project, consider the following factors:
Project Goals: If your primary goal is to validate an idea quickly and gather user feedback, an MVP might be the best choice. If you aim to make a significant impact and differentiate your solution, an EVP could be more suitable.
Resource Availability: Evaluate your available resources, including time, budget, and expertise. An MVP requires fewer resources initially, while an EVP demands a more substantial upfront investment.
Overall Market Conditions: Consider the competitive landscape and user expectations. In a highly competitive market, an EVP might help you stand out, whereas an MVP can be effective in less saturated environments.
Conclusion
Both MVP and EVP approaches have their merits in data management. The key is to align your strategy with your project goals, resources, and market conditions. Another important element is your appetite for risk. An MVP tends to support a so-called no-regret move and exposes you to more controlled risk from an investment point of view. By carefully considering these factors, you can choose the approach that best suits your needs and sets your data management project up for success. In general, we see a stronger preference for the MVP approach.
CONTACT US
Need expert support to kick off your data management or data governance initiatives? Need help with your overall data agenda? Discover how Datalumen can help you.
Understanding the journey of data from its source to its final destination is crucial for businesses and organizations. This journey, known as data lineage, has become increasingly complex with the proliferation of data sources, transformation processes, and analytical tools. Enter OpenLineage, an open-source standard that aims to simplify and standardize data lineage tracking across diverse data ecosystems.
What is data lineage?
Data lineage is the process of tracing the journey of data from its origin to its destination, tracking every transformation, processing step, and the tools or systems it interacts with along the way. With data flowing through increasingly complex architectures, the ability to accurately map and understand these movements is vital for ensuring data quality, compliance, and operational efficiency.
However, tracking data lineage is no small feat, especially with the explosion of data sources, analytics platforms, and transformation tools that make up modern data stacks.
What is OpenLineage?
OpenLineage is an open standard for data lineage collection and analysis. Initiated by Datakin and now part of the Linux Foundation, OpenLineage provides a set of standardized definitions and APIs that allow different tools and platforms in the data ecosystem to share lineage metadata in a consistent format.
The primary goal of OpenLineage is to create a unified approach to collecting and utilizing data lineage information. By establishing a common language for data lineage, OpenLineage enables better interoperability between various data tools, platforms, and processes.
Key Components of OpenLineage
OpenLineage Specification: This defines the core concepts and data model for representing lineage metadata. It includes definitions for jobs, datasets, runs, and the relationships between them.
Integration Libraries: OpenLineage provides libraries and SDKs for popular data processing frameworks like Apache Spark, Apache Airflow, and dbt. These integrations allow developers to easily instrument their data pipelines to emit lineage events.
API: The OpenLineage API defines how lineage events should be structured and transmitted. This standardization ensures that all tools speaking the OpenLineage language can understand and process lineage data consistently (a minimal example follows this list).
Facets: These are extensible metadata attributes that can be attached to core OpenLineage entities, allowing for custom metadata to be included in lineage information.
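To give a feel for what this looks like in practice, the sketch below builds a single run-completion event by hand and posts it to a lineage backend with plain Python. The job and dataset names are made up, and the endpoint shown is the one a backend such as Marquez typically exposes; in real pipelines the integration libraries mentioned above would emit these events automatically:

import uuid
from datetime import datetime, timezone

import requests

# A minimal OpenLineage RunEvent: one job run that read a dataset and wrote another.
event = {
    "eventType": "COMPLETE",
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "run": {"runId": str(uuid.uuid4())},
    "job": {"namespace": "analytics", "name": "daily_orders_aggregation"},       # assumed names
    "inputs": [{"namespace": "warehouse", "name": "sales.customer_orders"}],
    "outputs": [{"namespace": "warehouse", "name": "sales.daily_order_totals"}],
    "producer": "https://example.com/my-pipeline",    # identifies the tool emitting the event
    "schemaURL": "https://openlineage.io/spec/1-0-5/OpenLineage.json#/definitions/RunEvent",
}

# Post the event to the lineage backend (endpoint assumed; Marquez listens on /api/v1/lineage).
requests.post("http://localhost:5000/api/v1/lineage", json=event, timeout=10)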
Why should I care about this?
Standardization and Interoperability
One of the most significant advantages of OpenLineage is its ability to standardize lineage data across different tools and platforms. This standardization enables seamless integration between various components of a data stack, from data ingestion tools to transformation engines and analytics platforms. As a result, organizations can build a comprehensive view of their data lineage without being locked into a single vendor or tool.
Enhanced Data Governance and Compliance
With the increasing importance of data privacy regulations like GDPR and the AI Act, understanding data lineage is crucial for compliance. OpenLineage makes it easier to track the flow of sensitive data across systems, helping organizations ensure that data is handled in accordance with regulatory requirements. This comprehensive lineage information also aids in auditing processes and demonstrating compliance to regulatory bodies.
Improved Trust
By providing visibility into the entire data pipeline, OpenLineage helps data teams identify and resolve data quality issues more efficiently. When inconsistencies or errors are discovered, teams can quickly trace the problem back to its source, understanding all the transformations and processes the data has undergone. This transparency builds trust in the data and the insights derived from it.
Efficient Troubleshooting and Debugging
When issues arise in data pipelines or analytics, OpenLineage’s detailed lineage information becomes invaluable. Data engineers and analysts can trace the path of data through various systems, identifying where problems may have occurred. This capability significantly reduces the time and effort required for troubleshooting, leading to faster resolution of data-related issues.
Support for Data Cataloging and Metadata Management
OpenLineage integrates seamlessly with data catalogs and metadata management tools. By providing rich lineage information, it enhances the capabilities of these tools, allowing for more comprehensive documentation of data assets. This integration supports better data discovery, understanding, and utilization across the organization.
Conclusion
OpenLineage represents a significant step forward in the field of data lineage and metadata management. By providing a standardized, open-source approach to tracking data lineage, it addresses many of the challenges faced by modern data-driven organizations. From improving data governance and quality to enhancing troubleshooting capabilities and fostering collaboration, OpenLineage offers a wide range of benefits.
As data ecosystems continue to grow in complexity, tools like OpenLineage will become increasingly crucial. Organizations that adopt OpenLineage can expect to gain a competitive edge through better data management, increased efficiency, and improved data-driven decision-making capabilities.
The open nature of the project ensures that it will continue to evolve and improve, driven by the needs of the data community. As more tools and platforms adopt the OpenLineage standard, we can expect to see even greater interoperability and capabilities in the future of data lineage tracking.
CONTACT US
Need expert support with your data agenda? Discover how Datalumen can help you.
Data literacy is no longer a niche skill reserved for data professionals. It’s becoming a core competency required for all employees in forward-looking organizations. Data literacy — the ability to read, write, and communicate data in context — is essential for making informed decisions, driving innovation, and fostering a data-driven culture across the enterprise. It is crucial not only to equip employees with the necessary skills but also to foster a shared mindset and language around data.
The Imperative of a Data Literacy Program
Launching a data literacy program isn’t just about offering a few training sessions. It requires a comprehensive approach that touches every level of the organization. This is an opportunity to grow and amplify an understanding of data management and, by extension, of artificial intelligence (AI) and other emerging technologies within the organization. As these capabilities become increasingly integrated into business processes, the need for an organization that can interpret and leverage these technologies, in an ethical and compliant way, becomes even more critical.
To help organizations successfully launch and sustain a data literacy program, here are some key steps:
Craft a Strong Argument for Transformation: Before embarking on a data literacy initiative, it’s vital to establish a compelling reason for change. This involves articulating the strategic importance of data literacy to the organization’s future, aligning the program’s goals with business objectives, and gaining buy-in from leadership and stakeholders. A well-defined case for change will serve as the foundation for all subsequent efforts.
Build a Solid Program Foundation with Targeted Pilots: Starting small with targeted pilots can help demonstrate the value of data literacy initiatives. These pilots should be designed to address specific business challenges and provide measurable outcomes. By focusing on practical applications, organizations can build momentum and create a sustainable foundation for the program.
Showcase and Celebrate Successes: Highlighting success stories is crucial for building credibility and inspiring broader participation. By showcasing examples of how data literacy has led to positive business outcomes, organizations can encourage more employees to engage with the program. This also helps reinforce the importance of data literacy across the organization.
Foster Connections and Support Isolated Teams: In any organization, there are often key individuals or teams who may feel disconnected from the broader data culture. Connecting these communities and providing them with the support they need is essential for fostering a sense of belonging and encouraging active participation in the data literacy program. This can be achieved through internal networks, forums, or mentoring programs.
Integrate Across the Organization to Achieve Sustainable Transformation: An effective data literacy program should be integrated with other data culture and training initiatives within the organization. By connecting these efforts, organizations can ensure that employees have access to a cohesive set of resources and training opportunities, enabling them to continuously build their skills and knowledge. Ultimately, the goal is to deliver lasting benefits to the organization, including not only improving individual skills but also embedding a data-driven mindset into the company’s culture. Over time, a strong data culture will lead to better decision-making, increased innovation, and a competitive advantage in the marketplace.
The Path Forward
As organizations continue to navigate the complexities of the digital age, the importance of data literacy cannot be overstated. By following these steps, companies can build a data literacy program that empowers their employees, drives cultural transformation, and ensures long-term success in an increasingly data-driven world.
Investing in data literacy is not just about upskilling employees; it’s about preparing the entire organization for the future. Whether you’re just starting on this journey or looking to enhance existing efforts, it is fundamental to approach data literacy with intention, commitment, and a clear vision for the future.
CONTACT US
Need expert support with your data agenda? Discover how Datalumen can help you.
Data & Analytics (D&A) leaders need to demonstrate the tangible business value from their D&A and AI initiatives, including the rapidly evolving field of Generative AI (GenAI). As organizations strive to maximize the potential of their data assets, many are turning to innovative solutions like data marketplaces and exchanges. These platforms offer a powerful means to accelerate both tangible and intangible financial value from data use while meeting the growing demands for expansive data sharing and monetization.
The Data Value Dilemma
D&A leaders are under increasing pressure to show concrete returns on investment in data and AI technologies. However, quantifying the value of data assets and AI outcomes can be rather challenging. Traditional metrics often fall short in capturing the full spectrum of benefits that data-driven initiatives bring to an organization.
Enter Data Marketplaces – The Storefront for Data Consumers
Within data marketplaces, data is exchanged between providers and consumers. Data providers aim to share data, data products, or data services with users. Data marketplaces and exchanges provide a structured framework for organizations to share, trade, and monetize their data assets. These platforms typically offer a wide variety of information, ranging from market and business research and intelligence to demographic data, marketing and advertising data, scientific data, and much more.
Data providers often seek to monetize their data assets. Consumers enter data marketplaces looking for data that can benefit their business. For example, a GPS navigation company could be a data provider offering traffic-related data such as historical congestion and emissions reports to consumers on public data marketplaces. Data consumers can then use this traffic data to meet their specific business needs, such as helping a retail business optimize traffic planning or gain better insights into their sustainability indicators.
Considering who provides the data, these platforms come in two primary forms:
Internally managed: Internally managed data marketplaces facilitate data sharing and collaboration within an organization. While primarily set up for internal use, many of these marketplaces can also consume data from external data markets and exchanges to some degree. Today, over 70% of internally managed marketplaces serve only internal consumers. About 30% of these marketplaces are already monetizing their data and commercializing it on the external market. For example, retailers use their internal data marketplaces to commercialize consumer data to their FMCG suppliers.
Externally managed: These data marketplaces, also referred to as data exchanges, enable data transactions between different organizations. Examples of data exchanges include the Nielsen Marketing Cloud, Dun & Bradstreet, Precisely and Experian. These platforms offer a wide range of data types, including demographic and psychographic information, consumer behavior and preferences, purchasing history, and credit information. In addition to these commercial platforms, more public and open data is becoming available. Examples include data.europa.eu, the official portal for European data, as well as numerous national and local government, market-specific, and even organizational initiatives such as the Infrabel Open Data Portal, which can be integrated in your data initiatives.
Unlocking the Advantages
By leveraging these platforms, businesses can unlock several key advantages:
Enhanced Data Discovery and Access: Data marketplaces make it easier for users across an organization to find and access relevant data sets. This improved discoverability can lead to faster decision-making processes, reduced duplication of efforts, and increased cross-departmental collaboration.
Data Monetization Opportunities: External data exchanges open up new revenue streams by allowing organizations to monetize their data assets. This can include selling anonymized customer insights, offering industry-specific datasets, and providing real-time data feeds. The same principle can also be applied to internal data sharing efforts where departments or sister companies also agree on an inter-company cost compensation mechanism.
Improved Data Quality and Governance: To participate in data marketplaces, organizations must adhere to certain quality standards and governance practices. This drive towards better data management can result in enhanced data accuracy and reliability, stronger compliance with data regulations, and increased trust in data-driven decision making.
Accelerated Innovation: Access to diverse datasets through marketplaces can fuel innovation, especially in AI and GenAI applications. Benefits include more comprehensive training data for AI models, novel insights from combining internal and external data sources, and faster development of data-driven products and services.
Overcoming Implementation Challenges
While the potential benefits are significant, implementing data marketplaces and exchanges comes with its own set of challenges. These include ensuring data privacy and security while enabling sharing, establishing common data formats and exchange protocols, determining fair pricing models for data assets, and fostering a data-sharing mindset within the organization. To address these challenges, D&A leaders should invest in robust data governance frameworks, collaborate with legal and compliance teams to navigate regulatory landscapes, develop clear data valuation methodologies, and promote a culture of data sharing and collaboration through change management initiatives.
Measuring Success
To demonstrate the value of data marketplaces and exchanges, D&A leaders should focus on both quantitative and qualitative metrics. These can include revenue generated from data monetization, cost savings from improved data access and reduced duplication, time-to-insight measurements for data-driven projects, user adoption rates of internal data marketplaces, and innovation metrics such as new products or services developed using shared data.
Conclusion
As the demand for data-driven insights continues to grow, data marketplaces and exchanges offer a powerful solution for organizations looking to maximize the value of their data assets. By facilitating easier data sharing, enabling new monetization opportunities, and driving innovation, these platforms can help D&A leaders demonstrate clear business value from their initiatives.
The journey to implementing successful data marketplaces and exchanges may be complex, but the potential rewards – in terms of financial value, operational efficiency, and competitive advantage – make it a worthwhile endeavor for forward-thinking organizations. As we move further into the age of AI and GenAI, those who can effectively leverage these data-sharing ecosystems will be well-positioned to thrive in an increasingly data-centric business world.
CONTACT US
Need expert support with your data agenda? Discover how Datalumen can help you.
Databricks’ acquisition of Tabular puts pressure on Snowflake and Confluent as cloud data management becomes crucial for AI initiatives. Databricks recently acquired Tabular for an estimated $1 to $2 billion, and the deal was strategically announced during the annual conference of its main competitor, Snowflake. This move highlights the growing importance of cloud data management for AI applications, and how Tabular’s role in the open-source Apache Iceberg project makes the company a strategic asset.
Iceberg: A Key Component in Data Management for AI
Iceberg is an open-source project that simplifies data sharing across cloud platforms and on-premises infrastructure. As AI applications become widespread, managing the data they require becomes a critical challenge. Iceberg acts as an abstraction layer, allowing data to flow seamlessly between various cloud storage services and analytics engines.
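As a rough sketch of that abstraction, an engine such as Spark can be pointed at an Iceberg catalog purely through configuration, after which tables are created and queried with ordinary SQL. The catalog name and warehouse path below are assumptions for a local example, and the Iceberg Spark runtime package is assumed to be on the classpath; other engines can then work with the same tables:

from pyspark.sql import SparkSession

# Register an Iceberg catalog with Spark via configuration only.
spark = (
    SparkSession.builder
    .appName("iceberg-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")                 # simple file-based catalog
    .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# Tables live in the catalog, not in any single engine: Trino, Flink or another
# Spark cluster configured against the same catalog sees the same data.
spark.sql("CREATE TABLE IF NOT EXISTS demo.db.orders (id BIGINT, amount DOUBLE) USING iceberg")
spark.sql("INSERT INTO demo.db.orders VALUES (1, 99.50)")
spark.sql("SELECT * FROM demo.db.orders").show()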
Tabular: The Iceberg Leader
Tabular’s founders played a key role in developing Iceberg and are the project’s largest contributors. Their acquisition by Databricks positions Databricks as the leader in Iceberg development. This strategic advantage could significantly impact the future of cloud data management.
Snowflake under pressure?
Snowflake, a major competitor of Databricks, has also developed tools for working with Iceberg. The bidding war for Tabular indicates companies see Iceberg as a strategic asset and potential threat. Snowflake’s recent stock price decline and leadership changes further highlight the pressure they face. Snowflake is, by the way, not the only relevant competitor with Iceberg-connected solutions. Confluent (also mentioned as a Tabular M&A candidate), Microsoft, and others can also push data into Iceberg, for example using Apache Flink.
The Future of Cloud Data Management
Databricks’ acquisition of Tabular presents a significant challenge to Snowflake and other competitors. How Databricks leverages Iceberg will be crucial in determining the leader in cloud data management for the AI era. This situation underscores the ever-evolving nature of the technology landscape, where younger startups can quickly disrupt established players.
Conclusion
Cloud data management is critical for AI applications.
Iceberg is a key open-source project for data management.
Databricks’ acquisition of Tabular gives them a strategic advantage in Iceberg development.
Competitors face pressure to adapt to the changing landscape.
CONTACT US
Need expert support with your data platform approach? Discover how Datalumen can help you.