Sunday 31 July 2022

Customer-driven digital marketing: Generate incremental revenues through real-time AI-driven analytics and campaign steering


According to a 2021 study, 46% of marketing decisions are not yet influenced by analytics. Many marketing departments still need days or even weeks to compile reliable data. That is too long to make ad hoc, agile, and valid decisions in the post-pandemic new normal. Rather than making decisions based on trial and observation, all available marketing data needs to be compiled into a single dashboard.

This dashboard enables teams to monitor all KPIs constantly, optimize campaigns across all channels, and proactively identify trends and eliminate anomalies that could negatively affect the marketing campaign’s success. The combination of data from multiple sources and the improvement of cross-channel attribution is paramount to be able to fully understand the market and the customers.

Measure performance in real time with individual data sets

Measuring campaign performance channel by channel is not sufficient. With the increasing number of channels (the web, apps, CRM, social media, sales, paid media and more), it is just not possible to analyze results and to provide a holistic report in real time. Instead of creating dedicated data teams, data can be displayed in real time to meet the needs of each respective marketing team member. The individual data set, supported by AI, enables the individual to respond with agility to any event that requires an adjustment. A good system constantly monitors the results based on classic marketing KPIs, ROI and revenues. Team members can identify underlying negative trends before they have an impact on marketing campaigns, revenues or the business in general.

Augmented analytics allow for a highly proactive approach, applying machine learning to uncover deep insights within potentially vast amounts of data. This leads to a more objective and predictive approach to data discovery, automatically identifying patterns and trends that humans may never uncover. Additionally, this process provides insights into these patterns’ causes and relevance. AI can be used to identify highly specific audience segments, outlining their preferences and pain points, as well as predicting their buying patterns. It unveils bias within data sets stemming from unconscious human preconceptions or flawed data collection techniques, helping to avoid a negative performance impact.
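As a rough illustration of the kind of pattern discovery described above, the minimal sketch below clusters customers into candidate audience segments with k-means. The feature names, the synthetic data and the choice of algorithm are assumptions for demonstration only, not a description of any particular vendor's augmented-analytics engine.

```python
# Minimal sketch: discovering audience segments with k-means clustering.
# The features and data below are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical per-customer features: visits/month, avg order value, email click rate
features = rng.normal(loc=[8, 60, 0.12], scale=[3, 25, 0.05], size=(500, 3))

# Scale features so no single dimension dominates the distance metric
scaled = StandardScaler().fit_transform(features)

# Group customers into four candidate segments
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)

# Summarize each segment by its average raw feature values
for segment in range(4):
    members = features[kmeans.labels_ == segment]
    print(f"Segment {segment}: {len(members)} customers, "
          f"avg visits={members[:, 0].mean():.1f}, "
          f"avg order value={members[:, 1].mean():.0f}, "
          f"avg click rate={members[:, 2].mean():.2f}")
```

In practice, the segment summaries would feed back into campaign targeting and be recomputed as new behavioral data arrives.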

Combine modeling with data analytics for quantitative insights

These meaningful analytics enable marketers to steer campaigns in a granular and revenue-driven style. But do they prove the effects of brand awareness and its conversion into revenue? To demonstrate the ratios between brand awareness, brand sympathy, willingness to buy, marketing campaigns and revenue attribution, teams combine modeling with data analytics. Attribution modeling mirrors the customer journey. It reveals which parts of the journey the customer prefers and which parts need to be enhanced. CMOs can extract the correlation between the multi-channel setup and customer touchpoints and show how they convert.
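To make the attribution idea concrete, here is a minimal sketch comparing last-touch and linear multi-touch attribution over a simplified journey representation. The channel names and revenue figures are invented, and real attribution models (data-driven, Markov-chain and similar) are considerably more sophisticated.

```python
# Minimal sketch: comparing last-touch and linear multi-touch attribution
# over a simplified customer journey. Channel names and revenue are illustrative.
from collections import defaultdict

# Each converted journey: ordered list of touchpoints plus the revenue it produced
journeys = [
    (["paid_search", "social", "email"], 120.0),
    (["display", "paid_search"], 80.0),
    (["email"], 40.0),
]

def last_touch(journeys):
    """Credit all revenue to the final touchpoint before conversion."""
    credit = defaultdict(float)
    for touchpoints, revenue in journeys:
        credit[touchpoints[-1]] += revenue
    return dict(credit)

def linear(journeys):
    """Spread revenue evenly across every touchpoint in the journey."""
    credit = defaultdict(float)
    for touchpoints, revenue in journeys:
        share = revenue / len(touchpoints)
        for channel in touchpoints:
            credit[channel] += share
    return dict(credit)

print("last-touch:", last_touch(journeys))
print("linear:    ", linear(journeys))
```

Comparing the two outputs shows how the choice of model shifts credit between upper-funnel and closing channels, which is exactly the correlation CMOs want to surface.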

Many marketing budgets were cut during the pandemic. Thanks to the long-time investment in marketing digitalization, enterprises will be better prepared to manage future crises and make educated decisions about cutbacks. The goal is to be agile and able to re-prioritize quickly. Real-time 360-degree data that reveals the performance of all campaigns across multiple KPIs must be in place. These meaningful analytics provide quantitative insights that enrich and guide marketing team discussions.

By regularly analyzing data and taking action to adjust when needed to drive results, marketers can achieve desired ROI and efficiency. According to our IBM C-Suite study in 2021, only 9% of surveyed C-suite executives create high value from data and have a high level of integration. The most successful organizations will be those that are willing and able to adapt to the disruption caused by data-based decision making. The good news: If they act now, CMOs still have a good chance to surpass their competition.

Source: ibm.com

Thursday 28 July 2022

AIOps reimagines hybrid multicloud platform operations


Today, most enterprises use services from more than one Cloud Service Provider (CSP). Getting operational visibility across all vendors is a common pain point for clients. Further, modern architecture such as a microservices architecture introduces additional operational complexity.

Figure 1 Hybrid Multicloud and Complexity Evolution

Traditionally, this complexity calls for more manpower, but adding staff introduces its own challenges. As shown in the following diagram, a single issue in the environment triggers several events across the full stack of the business solution, resulting in an unmanageable event flood. Moreover, full-stack observability often produces duplicate events, and these events end up in data silos.

Figure 2 IT Service Management Complexity

IT is a critical part of every enterprise today, and even a small service outage directly affects the top line. Consequently, it is not uncommon for clients to ask for a 30-minute resolution commitment when something goes wrong. This is usually not enough time for a human to resolve an issue.

What is the solution?


This is where AIOps comes to the rescue, preventing these issues before they occur. AIOps is the application of artificial intelligence (AI) to enhance IT operations. Specifically, AIOps uses big data, analytics, and machine learning capabilities to do the following:

◉ Collect and aggregate the huge and ever-increasing volumes of operations data generated by multiple IT infrastructure components, applications and performance-monitoring tools

◉ Identify significant events and patterns related to system performance and availability issues

◉ Diagnose root causes and report them to IT for rapid response and remediation, or automatically resolve these issues without human intervention

By replacing multiple manual IT operations tools with an intelligent, automated platform, AIOps enables IT operations teams to respond more quickly and proactively to slowdowns and outages, with less effort. It bridges the gap between an increasingly difficult-to-monitor IT landscape and user expectations for little to no interruption in application performance and availability. Most experts consider AIOps the future of IT operations management.
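A minimal sketch of two of the ideas above, assuming simplified event records and a fixed statistical threshold: collapsing duplicate events from different layers of the stack into a single incident, and flagging a metric that strays from its learned baseline. The field names, history window and threshold are illustrative; commercial AIOps engines use far richer correlation logic and machine learning models.

```python
# Minimal sketch: event deduplication plus simple baseline anomaly detection.
# Event fields, window size and threshold are illustrative assumptions.
import statistics
from collections import defaultdict

events = [
    {"service": "checkout", "symptom": "latency_high", "layer": "app"},
    {"service": "checkout", "symptom": "latency_high", "layer": "container"},
    {"service": "checkout", "symptom": "latency_high", "layer": "vm"},
    {"service": "search", "symptom": "error_rate", "layer": "app"},
]

# 1. Collapse events that describe the same symptom on the same service
incidents = defaultdict(list)
for event in events:
    incidents[(event["service"], event["symptom"])].append(event["layer"])
for (service, symptom), layers in incidents.items():
    print(f"incident: {service}/{symptom} observed in layers {layers}")

# 2. Flag a metric value that deviates strongly from its historical baseline
def is_anomalous(history, value, threshold=3.0):
    """Return True if value is more than `threshold` standard deviations
    away from the mean of the historical window."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9
    return abs(value - mean) / stdev > threshold

latency_history_ms = [210, 220, 205, 215, 225, 218, 212]
print("anomaly:", is_anomalous(latency_history_ms, 900))  # True
```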

How could we reimagine cloud service management and operations with AI?


Refer to the lower part of the diagram below (box 3: Environment), which represents the environments where the workloads run. Continuous releases and deployments of these applications are typically achieved through the continuous delivery process and tooling that is shown on the left side of the diagram (box 2: Continuous Delivery).

Figure 3 AI Infused DevSecOps and IT Control Tower

The applications continuously send telemetry information into the operational management tooling (box 4: Continuous Operations). Both the continuous delivery tooling and the continuous operations tooling ingest all the data into the AIOps engine shown at the top (box 7: AIOps Engine). The AIOps engine is focused on addressing four key things:

1. Descriptive analytics to show what happened in an environment
2. Diagnostics to show why it happened
3. Predictive analytics to show what will happen next
4. Prescriptive analytics to show how to achieve or prevent the prediction

In addition to this, enterprise-specific data sources such as a shift roster, SME skill matrix or knowledge repository enrich the AIOps engine (box 1: Enterprise specific data).

Additionally, the AIOps engine consumes public domain data such as open-source communities, product documentation and sentiment from social networks (box 6: Public domain content). ChatOps and Runbook Automation ingest the insights and automation that the AI system produces and leverage them to establish the new day in the life of an incident (box 5: Continuous Operations). ChatOps brings humans and chatbots together for conversation-driven collaboration, or conversation-driven DevOps. The AIOps engine also dynamically reconfigures the DevSecOps tools, supporting continuous delivery and continuous operations through AI-derived policy ingestion.

Several products in the marketplace have already evolved to provide AIOps capabilities such as anomaly detection. This framework consumes the outcomes of those AIOps engines (denoted as edge analytics in Figure 3) and combines multiple sources to provide an enterprise-level view.

IT processes such as incident and problem resolution are ad hoc in nature. They differ greatly from structured business processes such as loan approval or claim settlement. IT processes have stringent SLAs because of the high cost of an outage to the business, and the personas involved collaborate intensely and interact with disparate tools to accomplish their goals. Applying business process automation technologies to IT processes will not yield high productivity benefits. ChatOps has transformed the way ITOps teams collaborate to resolve IT incidents. AIOps and ChatOps are the appropriate tools to drive productivity in IT processes: ChatOps enhances the collaboration experience of SREs with the other personas participating in IT processes, and AIOps delivers the insights SREs need to accelerate the incident resolution process.

In a nutshell, as clients undertake large digital transformation programs based on a hybrid cloud (or multicloud) architecture, IT operations needs to be reimagined. With ever-increasing complexity, AIOps is indispensable.

Source: ibm.com

Tuesday 26 July 2022

Data fabric marketplace: The heart of data economy


In older civilizations, where transportation and communication were primitive, the marketplace was where people came to buy and sell products. This was the only way to know what was on offer and who needed it. Modern-day enterprises face a similar situation regarding data assets. On one side, there is a need for data. Businesses ask: “Do we have this kind of data in the enterprise?” “How do we get that data?” “Can I trust that data?” On the other side, enterprises and organizations sit on piles of data with no clue that others need it and are ready to pay for it. Like a medieval marketplace, a data marketplace can bring these two sides together to trade.

This discussion is more relevant with the advent of data fabric. Data fabric is a distributed heterogeneous architecture that makes data available in the right shape, at the right time and place. A data marketplace is often the first step toward the data fabric vision of an enterprise. A data marketplace tops our major clients’ wish lists, a trend also observed by industry analysts. For example, Deloitte identified data sharing made easy as one of the top seven technology trends. Gartner predicts that by 2023, organizations that promote data sharing will outperform their peers in most business metrics.

Why is marketplace the centerpiece of data fabric?

The main purpose of data fabric is to make data sharing easier. Today, when data sharing roughly equates to data copying, enterprises spend a lot to move data from one place to another and to curate the data to make it fit for purpose. This long journey of data discovery and processing can lengthen the application development lifecycle or delay insight delivery. Enterprises must address the inefficiencies to remain competitive. Business users and decision makers should be able to discover and explore the data by themselves to perform their jobs through self-service capabilities. Enterprises want a platform where data providers and consumers can exchange data as a commodity using a common and consistent set of metadata. Doesn’t that sound very similar to the marketplace model?

How does a marketplace make it happen?

To make data sharing an integral part of the culture, the data governance practice of an organization must associate certain measurable KPIs against it. Those KPIs can be met through incentivization schemes. So, the marketplace must have some monetization policy defined for the data, even for internal sharing. (The currency may not always be money. Reward points can also serve the purpose.)

From the technical perspective, a marketplace depends on two capabilities: a strong foundation of metadata and the capability to virtualize or materialize data. The metadata creates a data catalogue similar to the product catalogue in any typical e-commerce platform. This allows data providers to publish their data products to the platform with appropriate levels of detail (including functional and non-functional SLAs), where data consumers can discover them easily. Data virtualization or materialization capabilities also help to reduce the cost of data movement.
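As an illustration of the catalog idea, here is a minimal sketch of what a marketplace entry for a data product might carry and how a consumer could search for it. The fields (owner, format, refresh cadence, availability SLA) and the example entries are generic assumptions, not the schema of Watson Knowledge Catalog or any other product.

```python
# Minimal sketch: a data product entry in a marketplace catalog, with keyword search.
# Field names and entries are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    owner: str                      # providing organization unit
    description: str
    format: str                     # e.g. "parquet", "api", "view"
    refresh: str                    # functional SLA: how often data is updated
    availability_pct: float         # non-functional SLA
    tags: list = field(default_factory=list)

catalog = [
    DataProduct("customer_master", "Enterprise Data Office",
                "Cleansed, de-duplicated single view of customers",
                "view", "daily", 99.9, ["customer", "master-data"]),
    DataProduct("card_transactions", "Banking LoB",
                "Card transaction history, 24-month rolling window",
                "parquet", "hourly", 99.5, ["payments", "transactions"]),
]

def search(catalog, keyword):
    """Return products whose name, description or tags mention the keyword."""
    keyword = keyword.lower()
    return [p for p in catalog
            if keyword in p.name.lower()
            or keyword in p.description.lower()
            or any(keyword in t for t in p.tags)]

for product in search(catalog, "customer"):
    print(product.name, "-", product.refresh, "refresh,", product.availability_pct, "% available")
```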

Data marketplace vs. e-commerce platform

A data marketplace has a few differences from an e-commerce platform. Most e-commerce platforms are either marketplace-based or inventory-based. But a hybrid approach is essential for a data marketplace. Individual organization units of the enterprise will offer their respective data products. But some data products should be owned and offered at the enterprise level. This is because some of the datasets (e.g., master or reference data) may need to be cleansed, standardized and de-duplicated from multiple sources to offer a single view of truth across the enterprise.

For example, a financial institution may have several lines of business (LoBs) such as banking, wealth management, loans and deposits. A single customer may have a presence in all four LoBs, leaving separate footprints. When the analytics department wants a 360-degree view of the customer to run an integrated campaign, customer data must be integrated into one place to generate a single version of truth. The marketplace can be this place of consolidation. In such cases, the marketplace may have to maintain its own inventory of data, thereby adopting a hybrid approach.

The second difference between a data marketplace and a typical e-commerce platform is the nature of the product. Unlike any typical product of e-commerce, a data product is non-rival in nature, meaning the same product can be provisioned for multiple consumers. The provisioning of data follows certain data rules as defined in the policy of the concerned dataset. So, data as a service would involve the hidden complexities of creating dynamic subsets (on-the-fly or cached) that are transparent to the consumer.

The third difference is the desired marketplace experience. The consumers of a data marketplace would like to explore the available datasets before procurement. This exploration is much deeper than a “preview” of the product that is typically available on e-commerce platforms. This means the marketplace should integrate with some development environment (such as Jupyter notebook) for better data exploration.

The fourth and final difference, which may be available only in a mature data fabric, is the capability to aggregate data. Data marketplace consumers should be able to pose intuitive queries that are resolved through a synthesis of multiple data sources. This requires richly described business and technical metadata that together form a knowledge graph capable of resolving such intuitive or semantic queries.
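To make the aggregation idea concrete, the sketch below resolves a business term against a tiny metadata knowledge graph to find which datasets could be synthesized and on which key they join. The graph content and the traversal are deliberately simplified assumptions.

```python
# Minimal sketch: resolving a business question against a tiny metadata knowledge graph.
# Nodes, edges and the business term are illustrative assumptions.
graph = {
    # business term -> datasets that implement it
    "customer revenue": ["billing.invoices", "crm.opportunities"],
    # dataset -> technical metadata
    "billing.invoices":  {"joins_on": "customer_id", "grain": "invoice"},
    "crm.opportunities": {"joins_on": "customer_id", "grain": "opportunity"},
}

def resolve(term):
    """Find the datasets behind a business term and the key they share."""
    datasets = graph.get(term, [])
    join_keys = {graph[d]["joins_on"] for d in datasets if d in graph}
    return datasets, join_keys

datasets, keys = resolve("customer revenue")
print(f"'customer revenue' can be synthesized from {datasets}, joined on {keys}")
```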

At present there is no single product in the market that provides all the typical e-commerce platform features and fulfills all of these requirements. But there are players who provide subsets of the features. For example, most market players have improved their capabilities in data cataloging, and there is increased interest on the client side to properly define their enterprise data sets to ease classification, discovery, collaboration, quality management and more. We have seen data cataloging interest grow from 53% to 66% within a year.

The Watson Knowledge Catalog, available within the Cloud Pak for Data suite, is one of the most powerful products in the cataloging space. On the other hand, Snowflake’s Data Marketplace and Exchange and Google’s Dataplex are ahead of the curve in providing access to external data in a pure marketplace model. The data marketplace of today would likely be a combination of many products.

Where can you start?

A data marketplace is likely to go through several maturity cycles within each organization. It can begin as a catalog of data products available from multiple data sources. The marketplace owner can then create a few foundational capabilities. For example, clients would need business and technical metadata to define, describe, classify and categorize the data products. The users of the catalog would also be able to associate governance policies and rules that control access to the data for the intended recipients, rules which can be reused once an appropriate data provisioning workflow is in place. At a later stage, marketplace features can be added to the catalog to publish internal and external data for consumers to provision through a self-service channel. Once that is done, the marketplace can further mature into a full-scale platform that facilitates data exploration, contract negotiation, governance and monitoring.

Source: ibm.com

Saturday 23 July 2022

Customer-driven digital marketing: Focus on measurable dimensions of customer-centricity in a cookie-free world


Google's announcement that it will eliminate third-party cookies in 2023 is a wake-up call for marketers. But it is not the only initiative that affects ROI and revenue generation through performance marketing campaigns.

Mobile device identifiers, privacy protection regulations and walled gardens will impact marketing campaigns as well. Today, up to 50% of web traffic lacks third-party cookies, yet performance marketing is still going strong. Chrome dominates, but since 2019, Adform has provided first-party ID solutions for performance marketers, allowing the identification of users in Firefox and Safari. ID providers work jointly on use cases with “data clean room” providers. A data clean room is software that enables advertisers and brands to match data on a user level without sharing any personally identifiable information (PII) or raw data with one another. Marketers need to be aware that this impacts marketing performance KPIs.
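To illustrate the matching idea behind a data clean room, the sketch below has both parties hash identifiers with a shared salt so that only pseudonymous values are compared and only aggregate overlap counts are reported. This is a simplified sketch under stated assumptions; production clean rooms add aggregation thresholds, audited environments and stronger key management.

```python
# Minimal sketch: matching two parties' audiences on hashed identifiers only.
# The shared salt and email lists are illustrative; production clean rooms
# add aggregation thresholds and run inside an audited environment.
import hashlib

SHARED_SALT = b"agreed-upon-salt"   # assumption: negotiated out of band

def pseudonymize(emails):
    """Hash each normalized email with the shared salt; raw PII never leaves the owner."""
    return {hashlib.sha256(SHARED_SALT + e.strip().lower().encode()).hexdigest()
            for e in emails}

advertiser_audience = pseudonymize(["ana@example.com", "bo@example.com", "cy@example.com"])
publisher_audience  = pseudonymize(["bo@example.com", "dee@example.com"])

# Only aggregate overlap statistics are reported back to either party
overlap = advertiser_audience & publisher_audience
print(f"matched users: {len(overlap)} of {len(advertiser_audience)}")
```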

Create value through customer-driven customization

Primary data has been and always will be the preferred option for marketers, but it’s time to break free of the limited thinking of the past. Efficient and successful marketing campaigns are not limited to newsletters. In fact, newsletter fatigue is omnipresent, and research shows that Gen Z is not interested in this kind of communication. Now is the time to bring the concept of hyper customization to life.

Personalization and customization are often used interchangeably. But personalization relies on data points collected by the company and reused to increase relevancy of ads. In contrast, the customers themselves provide the information for customization. They share their preferences, and marketing campaigns feature corresponding content. Moving into an era of first-party data marketing requires the collection of data and preferences from all channels in one single system. As CMOs face the “cookie challenge” that will impact performance marketing, they must shift focus to true customization.

To create true value through dialogue with customers, CMOs must carefully revisit their customization strategies. The data required for customization comes from multiple sources, including sales. Actual experiences, qualitative and quantitative insights, real-time analytics and customer service data are the holy grail.

AI-powered persona-based and account-based algorithms enrich this diverse set of information. Intelligent marketing campaign design and marketing platforms allow for truly customized content that can be automatically created and shared with the customer. This includes re-targeting to close the purchasing process, using reinforcement tools or recommendations that map the actual and behavioral data.

Develop a customization strategy with data and analytics leaders

Getting to this point requires a detailed customization strategy that syncs all touchpoints and marketing campaigns for a unique and compelling customer experience. First-party data is essential to gain an accurate measurement of defined KPIs and campaign performance. According to a 2021 study, 88% of marketers state that they are making collecting first-party data a priority. The required data collecting processes and consent requirements must be in place, as this data will connect automation platforms, advertisers and publishers.

Moving beyond the basics requires a robust data strategy that clearly states what data is captured initially in a customer interaction, as well as what other data is necessary to improve the creation of customized content. Turning to a first-party, data-led, multichannel marketing strategy requires marketers to know how to customize content with the help of marketing technology and innovation. Marketers must create a continuum of feedback and analysis to improve and maximize use of the data to maintain the trust and loyalty of the customer. A positive customer experience today is the most important competitive differentiator.

Successful CMOs work with data and analytics leaders to clarify desired business outcomes, optimal use cases and relevant technology investments. Customers demand transparency about the use of their personal information, and they will grow to expect full control and ownership of their personal data. The data strategy must reflect on self-sovereign identity models and allow users to provide proof of their identity and their claims. Customers will pre-program the permission to use data, including granting usage for analytics. The strategy should contain use cases involving data to increase customization and engagement at every touchpoint to ensure that trust is the guiding principle.

Customers are only willing to share their data with companies that reinvent the customer experience and treat them with respect and fairness. The strategy should focus explicitly on assurances to customers about how their personal data will be used and protected and provide proof through actions. It should explore how data insights can create a competitive advantage, open new market opportunities, impact brand purpose, tie back into the supply chain and impact sustainability objectives.

Source: ibm.com

Tuesday 19 July 2022


The hidden history of Db2


In today's world of complex data architectures and emerging technologies, databases can sometimes be undervalued and unrecognized. The fact is that databases are the engine driving better outcomes for businesses: they run your cloud-native apps, generate returns on your investments in AI, and serve as the backbone of your data fabric strategy.

IBM has been the pioneer in paving the way for data management technologies and advancements for decades, from the first commercial database to quantum computing systems. A crucial part of that journey has been the invention of IBM Db2, a database designed to handle billions of transactions per day, powering today’s largest global operations, from banking to sustainable energy. This is the story of how a technology that was once just a theory has turned into a hybrid, multi-cloud data ecosystem running mission critical workloads. This is the story of Db2.

Back in the 1960s and 70s, vast amounts of data were stored in the world's new mainframe computers (many of them IBM System/360 machines), and managing that data had become an expensive problem. Edgar "Ted" Codd, an Oxford-educated mathematician working at the IBM San Jose Research Lab, published a paper in 1970 showing how information stored in large databases could be accessed without knowing how the information was structured or where it resided in the database. That theory became the foundation of the relational database.

Until this point, IBM had for many years continued to promote its established hierarchical database system, IBM IMS, the database that had helped NASA put a man on the moon, and it had been hesitant to accept the relational database theory. Finally, in 1973, IBM began the System R program in the San Jose Research Laboratory (now Almaden Research Center) to prove the relational theory with what it called "an industrial-strength implementation." Although IBM isolated Codd from the project, it still produced an extraordinary output of innovations that became the foundation for IBM's success with relational databases. IBM stars such as Don Chamberlin, Ray Boyce, Patricia Selinger and Raymond Lorie all contributed to making the relational database, the Db2 we know and love today, a reality. Thirteen years after Codd published his paper, IBM Db2 on z/OS was born, and 10 years after that the first IBM Db2 database for LUW was released.

Our DNA of pioneering the relational database system continues to help organizations differentiate in their respective markets and is recognized in Db2 client satisfaction and today’s success stories. From powering the Marriott Bonvoy loyalty program used by 140M+ customers, to enabling AI to assist Via’s riders in 36 million trips per year, Db2 is the tested, resilient, and hybrid database providing the extreme availability, built-in refined security, effortless scalability, and intelligent automation for systems that run the world. Db2’s decades of innovation and expertise running the most demanding transactional, analytical, and operational workloads have culminated today in the 2022 Gartner Peer Insights Customers’ Choice distinction for Cloud Database Management Systems.

When we look ahead, that same architectural foundation we have spent decades perfecting and innovating is also carrying Db2 into the future. “If we look at our competitors for example, it demonstrates how prevalent the core Db2 foundations have become in the market. Taking massively parallel processing, for example, MPP is and has always been core to Db2 Warehouse and is more relevant today than ever for data lakehouse architecture and a data fabric. We have 30 years of expertise in this technology that competitors are just getting started in.” – Hebert Pereyra, Chief Db2 Architect and Distinguished Engineer.

Whether you need always-on, mainframe-level availability for cloud-native applications, insanely fast ingest for real-time analytics and ML, or a simplified database ecosystem, Db2 is built to evolve with you. To achieve better outcomes, organizations are prioritizing and addressing the key use cases of databases as shown by Db2:

1. Mission Critical Apps 

Db2 is the always-on database built for the systems that run the world.

◉ Whether in the cloud, hybrid or on premises, ensure continuous availability to keep applications and daily operations running smoothly. IBM Db2 pureScale leverages our parallel sysplex architecture, providing mainframe-class availability for your data.

◉ No one knows your data like you do. Let’s keep it that way. Protect your data with in-motion and at-rest encryption, extensive auditing, data masking, row and column access controls, role-based access and more.

◉ Free staffing time for value-added activities with intelligent workload automation and built-in container operators to automate time-consuming database tasks, while keeping your business running.

◉ Exxon transforms customer experiences 

◉ Nedbank builds a scalable data warehouse architecture

2. Real-time analytics and ML

Do you have endless data, but queries that aren't fast enough? Empower real-time decision-making and perform heavy computational analysis with built-in ML, insanely fast ingest, and querying of data in motion and at rest.

◉ Real-time warehousing with continual data ingestion, so analysts can enjoy low-latency analytics

◉ Perform heavy-computational analysis and machine learning all within the Db2 database

◉ Best-in-class massively parallel processing (MPP) to help you scale out and scale up capabilities as analytical workload demand grows

3. Anywhere deployment 

You need a database you can deploy in the cloud of your choice, on premises and in a hybrid environment. Deploy a unified enterprise data platform that runs anywhere with Db2.

An integrated multicloud data platform

4. Performance at scale

Scale Db2 up and out as your workloads evolve and your performance needs change. Db2 pureScale’s shared data cluster scale out allows for independent scale of compute and storage, enabling high performance, low-latency transactions.

Marriott improves performance by 90% in the cloud

5. Data security & governance

Take control of your data governance, security and compliance with Db2’s comprehensive, built-in auditing, access control, and data visibility capabilities.

Vektis improves healthcare quality through data

6. Database complexity, simplified

Store and query more than just traditional structured data with multi-model capabilities. Seamlessly integrate Db2 with your existing data lake to easily query datasets residing in open data formats like Parquet, Avro and more (a rough query sketch follows below).

Active International unlocks an estimated USD 80 million in year-one savings by enabling media optimization
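As a rough illustration of what querying such lake-resident data could look like from an application, the sketch below uses the ibm_db_dbi driver (the DB-API wrapper in the ibm_db package) to run a query against a hypothetical table assumed to have been defined over Parquet files. The connection string, the table name SALES_LAKE and the external-table setup itself are placeholders; the actual DDL and options vary by Db2 edition and version.

```python
# Minimal sketch: querying Db2 from Python over a table assumed to be backed
# by Parquet files in a data lake. Connection details and the table name
# (SALES_LAKE) are placeholders; defining the external table is product- and
# version-specific and not shown here.
import ibm_db_dbi

dsn = ("DATABASE=BLUDB;HOSTNAME=db2.example.com;PORT=50001;"
       "PROTOCOL=TCPIP;UID=app_user;PWD=app_password;SECURITY=SSL;")

conn = ibm_db_dbi.connect(dsn, "", "")
try:
    cur = conn.cursor()
    # Aggregate directly over the lake-backed table alongside warehouse data
    cur.execute(
        "SELECT region, SUM(amount) AS total "
        "FROM SALES_LAKE "
        "GROUP BY region "
        "ORDER BY total DESC"
    )
    for region, total in cur.fetchall():
        print(region, total)
finally:
    conn.close()
```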

Source: ibm.com

Sunday 17 July 2022

Customer-driven digital marketing: Save on costs through operational efficiency in marketing


Even when most parts of a business (such as production or accounting) are already digitalized, marketing often remains more art than science. While the merits of creativity in marketing remain undoubted, efficiency plays a vital role in the complex orchestration of marketing tools and assets.

Marketing organizations must explore the processes and procedures enabled by digitalization. They must revisit and question every aspect of their operation to determine if there are ways to leverage technology. CMOs can gain efficiency by examining the technical resources and the skill sets available within the team. Are there sufficient digitally savvy team resources? Are all team members enabled to collaborate digitally, beyond Slack or Teams? Are they managing their projects online with transparency and insight into how their work intersects with that of colleagues? While that should be a given by now, most departments manage projects in an analog style, leading to siloed knowledge, bumpy collaboration and increased agency costs.

Optimize agency workflow with clear expectations and accountability

CMOs should bring the same level of analysis to their collaborations with agencies: Does each of your agencies know its exact role and responsibility in a project? Do they navigate the intersections with departments and other agencies to execute seamlessly on deliverables? A lack of clarity often leads to misunderstandings, especially when delivering campaign assets. To maximize efficiency across channels, set specific expectations from the beginning and communicate them directly to all parties.

Consider when to bring the media agency on, when to share the production plan, and how to align on timing. Decide when creative assets should go directly to the provider and when they should go to the media agency for upload. And taking this further, make a concrete plan for managing innovation projects. Identify who has the right technical expertise and who will evaluate new formats and technologies. Thoroughly map the process from initiation to completion and consider the intersections along the way.

These clarified processes, combined with embedded collaboration tools, provide a transparency that leads to significant savings in agency fees. The ability to track change requests, feedback and executed changes allows CMOs to identify issues in the workflow and correct them immediately. With these changes, I personally achieved 18% savings in my annual agency fees in a short space of time.

Standard asset templates, consistent cost savings

CMOs should also standardize marketing assets that are typically created from scratch for each campaign. Pre-configured templates, containers for landing pages, and dynamic creative optimization (DCO) banners allow for real-time updates. Marketers can launch a campaign or a promotion (or adjust a running campaign if KPIs are not met) in a matter of hours rather than days or weeks. The IT department will appreciate the reduced maintenance burden of this streamlined approach to online assets.
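A minimal sketch of the template idea: one pre-configured asset definition plus a small set of campaign parameters yields a ready-to-publish variant in seconds. The fields and copy are invented for illustration; real DCO platforms drive this from product feeds and decision logic.

```python
# Minimal sketch: generating campaign asset variants from one pre-configured template.
# Template fields and campaign values are illustrative placeholders.
from string import Template

banner_template = Template("$headline | $offer (ends $end_date) | $cta")

campaigns = [
    {"headline": "Summer Sale", "offer": "20% off accessories",
     "end_date": "31 Aug", "cta": "Shop now"},
    {"headline": "Back to School", "offer": "Free shipping over $50",
     "end_date": "15 Sep", "cta": "See deals"},
]

for params in campaigns:
    # Each variant is produced by substitution, not hand-built from scratch
    print(banner_template.substitute(params))
```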

As marketing technology evolves, it creates a demand for flexible teams with state-of-the-art diverse expertise. CMOs can overcome the challenge of filling skill gaps within teams by establishing systems that maximize the efficiency of agency resources. Our IBM analysis shows that on average, our clients achieve a 275% increase in project capacity, 50% increase in speed to completion of work, 98% reduction in approval time, over 15% savings in agency fees and 10% savings in campaigns. Digitalization allows these marketing departments to harness the kind of savings and freed-up resources already enjoyed by other departments.

Source: ibm.com

Saturday 16 July 2022

Do you know your data’s complete story?


Data is everywhere in a hybrid and multi-cloud world. Enterprises now have more data, more data tools, and more people involved in data consumption. This data proliferation has made it harder than ever before to trust your data: knowing where it came from, how it has changed, and who is using it. Data provenance is a challenge for many clients engaged in data governance use cases. To help our clients overcome these challenges, we are pleased to announce our collaboration with MANTA to bring MANTA Automated Data Lineage for IBM Cloud Pak for Data to market.

MANTA Automated Data Lineage for IBM Cloud Pak for Data is a deep integration between MANTA’s end-to-end data lineage platform and Watson Knowledge Catalog on IBM Cloud Pak for Data. Data lineage is an essential capability for modern data management and is a required aspect of regulatory compliance for many industries. Together with Watson Knowledge Catalog’s business-friendly native data lineage, MANTA provides the most complete picture of technical, historical, and indirect data lineage.

How MANTA makes a difference

MANTA helps reduce the manual effort required for robust data lineage by providing scanners for the automated discovery of data flows in third-party tools such as Power BI, Tableau and Snowflake. This information is then automatically loaded into Watson Knowledge Catalog’s Data Lineage UI and becomes available to view alongside the data quality, business terms and other metadata already available to Watson Knowledge Catalog users.

In addition to supporting the high-level summary view appropriate for many business users, clients can also dig deeper to see additional technical, historical, and indirect data lineage within MANTA’s Lineage Flow UI. Collectively this means that the addition of MANTA will provide quicker time to value not only through the automation of previously manual processes, but also through the ability to more rapidly answer questions about whether certain data is trustworthy.
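A minimal sketch of the underlying idea of lineage: represent data flows as a directed graph and walk it upstream to see everything a given report depends on. The node names are invented; MANTA and Watson Knowledge Catalog build and visualize such graphs automatically and at much finer (column-level) granularity.

```python
# Minimal sketch: tracing upstream data lineage in a directed dependency graph.
# Node names are illustrative; real lineage tools work at column-level detail.
upstream = {
    "revenue_dashboard.pbix": ["mart.fact_sales"],
    "mart.fact_sales":        ["staging.orders", "staging.refunds"],
    "staging.orders":         ["src.erp_orders"],
    "staging.refunds":        ["src.erp_refunds"],
}

def trace_upstream(node, graph, seen=None):
    """Depth-first walk returning every asset the given node depends on."""
    seen = set() if seen is None else seen
    for parent in graph.get(node, []):
        if parent not in seen:
            seen.add(parent)
            trace_upstream(parent, graph, seen)
    return seen

sources = trace_upstream("revenue_dashboard.pbix", upstream)
print("revenue_dashboard.pbix depends on:", sorted(sources))
```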

A boost to your data fabric architecture

MANTA Automated Data Lineage for IBM Cloud Pak for Data will be available as an add-on to Watson Knowledge Catalog, further improving the ability of IBM’s data fabric solution to satisfy governance and privacy use cases. Surrounded by existing capabilities like consistent cataloging, automated metadata generation, automated governance, reporting and auditing assistance, MANTA Automated Data Lineage for IBM Cloud Pak for Data will help to bolster the data governance capability of IBM’s data fabric solution.

Of course, trust in data is important across every data fabric use case, whether it is building 360-degree views of customers or enabling trustworthy AI. The multicloud data integration within the data fabric also helps connect the various data sources MANTA will be scanning. MANTA will both benefit from and strengthen the multiple data fabric entry points that help customers advance their data management strategy.

What’s next?

The partnership with MANTA is just the beginning; we will continue to work closely to add more capabilities to MANTA Automated Data Lineage for IBM Cloud Pak for Data.

Source: ibm.com

Thursday 14 July 2022

Banks are losing money on new payment systems


Payments modernization reminds me of bathing toddlers. It could sometimes be quite a project at our house, when our boys were toddlers, with splashing, shouting, and arguments over bath toys. So why bathe both at once? Because the alternative is even more work, especially if you are the only parent available. The challenge of bathing two toddlers together is nothing next to the challenge of bathing one toddler while chasing the other around the house — twice.


Many financial institutions (FIs) are in the throes of modernizing their own payments infrastructures individually, and they are each figuring out how to develop and deploy systems that all do essentially the same thing. This is like trying to bathe two children separately, using the same resources each time, and missing the efficiency of accomplishing both jobs at once.

Where banks lose money on payment modernization

In some jurisdictions, FIs have collaborated to establish a common set of standards and processes for payment market infrastructures, such as NPP in Australia, Lynx in Canada, or TCH RTP in the United States, to name a few. Helping establish a new payments market infrastructure is only one aspect. For every new market infrastructure or change in payment message format, each FI still needs to build their own capabilities to connect to those new systems. As the entry points and gatekeepers for their end users into those new market infrastructures, does it really make sense for each FI to tackle the same problem separately in their own shops using essentially the same tools?

More specifically, the challenge holds true for the payment technologies each FI uses in its middle and back-end layers for validating, processing, clearing and settling payments. The costs involved in these developments, for a new rail or even just a new messaging standard such as the SWIFT MX standard, can be quite high. For many there is simply no business case that supports the necessary changes. There is little new, incremental revenue to be gained from developing a new payment system that will simply see existing volumes shift from one rail or format to another. The incentive for most financial institutions is that if they don’t modernize and their competitors do, they may lose customers to the competition, thus losing both fee revenue and the deposit balances that support those customers’ payments. Those deposit balances are what banks chase for their fundamental business of lending.

In some cases, when customers adopt a new payment rail, an FI may see lower revenue from fees than what they earned with an older payment method. Consider a business accepting real-time or near-real-time payments such as Interac e-Transfer, TCH RTP, Zelle, Faster Payments, etc. Those payments may have previously been made by credit card, a more lucrative form of payment for issuing FIs. In many jurisdictions, the FIs have offered these newer payment types for low or no fees, due to competitive pressures. The FI pays a high cost to build and maintain systems just to keep the client business they already have. It may even lose revenue, while tying up resources in the deployment process with essentially no return on the investment. The FI also faces the ongoing cost of maintaining and upgrading those systems over time. To invest in new systems at a high cost while forgoing revenue is a lose-lose proposition.

How banks can save costs and retain customers through payment modernization

The differentiating benefits of modernized payment systems for FIs and their customers are not found in the “back-office” processing, clearing, and settlement systems. They are found in the front-end features and functions provided to the customers, including retail, business and government clients, who initiate and receive payments. Those are what attract and retain customers. It simply makes economic sense to turn to a cloud-based payments-as-a-service, pay-as-you-go model to fulfill an FI’s back-end processing and operational needs, while spending more time and money on the front-end: delivering value-added services to their customers. Since some cloud-based payment services already exist, and are, in some cases, used by more than one FI, what’s left for the FI is the front-end and integration costs for the new system – costs they would have had anyway.

In a recent survey of 300 financial institution IT and operations executives from around the world, 84% said that their IT environment has changed more in the last 12 months than in the company’s lifespan. Moreover, 88% of those surveyed stated that short-term thinking has IT and operations teams choosing options of lower quality, partly hampered by inadequate budgets, resulting in poor system resiliency.

It’s becoming clear that financial institutions need to actively consider new models for payments that don’t extend or exacerbate their existing IT challenges — or introduce new ones — due to short-term thinking. Many other industries have shifted to cloud-based, as-a-service models that have helped them advance their interests and provide better value to their investors and customers. These models are used by multiple organizations, allowing them to share the same resources at a lower cost. It’s time financial institutions did the same with their payment systems.

Source: ibm.com

Tuesday 12 July 2022

Don’t let your data pipeline slow to a trickle of low-quality data


Businesses of all sizes, in all industries, are facing a data quality problem: 73% of business executives are unhappy with data quality, and 61% of organizations are unable to harness data to create a sustained competitive advantage. With the average cost of bad data reaching $15M, ignoring the problem is a significant pitfall.

To help companies avoid that pitfall, IBM has recently announced the acquisition of Databand.ai, a leading provider of data observability solutions. Data observability takes traditional data operations to the next level by using historical trends to compute statistics about data workloads and data pipelines directly at the source, determining if they are working, and pinpointing where any problems may exist. 

The data observability difference 

With traditional approaches, data issues are reported by data users as they try to access and use the data, and they may take weeks to fix, if they’re found at all. Instead, Databand.ai starts at the data source, collecting data pipeline metadata across key solutions in the modern data stack such as Airflow, dbt, Databricks and many more. It then builds historical baselines of data pipeline behavior so it can detect and alert on anomalies while the data pipelines run. Workflows then automate the resolution of those anomalies, triggering changes without impacting delivery SLAs.

Catching data quality problems at the source helps enable the delivery of more reliable data. Mean time to discovery (MTTD) improves because issues are detected in real time, while pipelines execute, instead of being discovered afterward. Mean time to repair (MTTR) also improves because contextual metadata helps data engineers go straight to the source of the problem rather than spending time debugging where it stems from. In this way, monitoring both static and in-motion pipelines while delivering high-quality metadata enables a faster time to value than would otherwise be possible.
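A minimal sketch of the baseline idea: compare each new pipeline run against statistics computed from its own run history and alert while the run is happening, rather than after a consumer notices bad data. The metrics, values and tolerance are assumptions for illustration, not Databand.ai's actual detection logic.

```python
# Minimal sketch: comparing a new pipeline run against baselines built from run history.
# Metrics and thresholds are illustrative assumptions, not Databand.ai's detection logic.
import statistics

# Metadata from recent runs of a hypothetical ingestion pipeline
run_history = [
    {"rows": 1_020_000, "duration_s": 310, "null_rate": 0.011},
    {"rows": 998_500,   "duration_s": 295, "null_rate": 0.012},
    {"rows": 1_010_200, "duration_s": 305, "null_rate": 0.010},
    {"rows": 1_005_900, "duration_s": 300, "null_rate": 0.011},
]

def baseline(metric):
    """Median of a metric across the historical runs."""
    return statistics.median(run[metric] for run in run_history)

def check_run(current, tolerance=0.25):
    """Flag any metric that deviates more than `tolerance` (25%) from its baseline."""
    alerts = []
    for metric, value in current.items():
        expected = baseline(metric)
        if abs(value - expected) > tolerance * expected:
            alerts.append(f"{metric}: {value} vs baseline ~{expected}")
    return alerts or ["ok"]

# A run that loaded far fewer rows than usual is caught while the pipeline executes
print(check_run({"rows": 310_000, "duration_s": 120, "null_rate": 0.011}))
```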

Data observability as part of a data fabric 

Databand.ai will be available to IBM clients through our data fabric architecture. While the data observability capability may be utilized independently, we recommend leveraging a more complete data fabric architecture in conjunction with it to help automate the data lifecycle. In addition to data observability, IBM clients can take advantage of use cases such as multicloud data integration, data governance and privacy, customer 360, and MLOps and trustworthy AI. Data observability will also integrate with these other use cases for improved results where both are applied. For example, multicloud data integration benefits from the ability to resolve data anomalies in real time so that the delivery of reliable data isn’t interrupted, no matter where it resides. Data governance & privacy is helped by having more reliable data with constant management instead of a snapshot approach. Customer 360 is improved when fewer data quality issues make their way to applications and skew customer views. And MLOps and trustworthy AI benefit from the more holistic look at the lifecycle from data to implementation of AI. The net result is a virtuous cycle where each data fabric use case strengthens the other. 

The acquisition of Databand.ai expands IBM’s end-to-end enterprise observability capability. Databand.ai will be a core component of observability use case alongside IBM Observability by Instana APM and Watson® Studio on IBM Cloud Pak® for Data. Instana delivers end-to-end observability across applications, data and machine learning; Databand.ai provides data observability for dynamic and static pipelines; and Watson Studio provides model observability for reliable, trusted AI across its lifecycle. In this way, they collectively deliver an end-to-end enterprise observability and reliability solution. 

Looking toward the future

We’re pleased to be bringing Databand.ai into our suite of data & AI solutions. Not only does it signify the continued evolution and improvement of our data fabric approach, but it also brings additional value to clients across the data lifecycle from end to end. 

Source: ibm.com

Thursday 7 July 2022

Customer-driven digital marketing: Marketing has become one of the key drivers for enterprise growth


According to a 2019 McKinsey report, 83% of CEOs say marketing has become one of the key drivers for enterprise growth. But how does it work in reality? On an operational level, people still believe marketing is the first budget to be sacrificed when profitability slumps. The marketing budget is often reduced to improve the firm’s overall financial results in the short term.

50% of marketers say they struggle to demonstrate their quantitative impact, according to a 2020 Gartner Marketing Data and Analytics Report. Having held multiple senior marketing leadership positions myself, I have firsthand experience with this struggle.


I am thrilled that nowadays marketers have strong, reliable data to prove their value and impact. They can specify how actual revenues are driven through marketing campaigns and calculate the respective ROI of specific channels. By taking advantage of real-time data, augmented through artificial intelligence, marketers can adjust campaign, advertisement and agency spending in real time.

Investments in innovative technology drive enormous efficiencies. But how many marketing departments are following the call to digitize their processes?

To be a true partner for the CEO, marketers need to dive deeper into the numbers. Here are three areas that give CMOs unparalleled opportunities to make their case, crystallizing the value that marketing contributes to significant revenue growth for both B2B and B2C:

◉ Harvest significant cost savings through operational efficiency in marketing

◉ Focus on measurable dimensions of customer centricity in a cookie-free world

◉ Generate incremental revenues through real-time AI-driven analytics and campaign steering

Harvest significant cost savings through operational efficiency in marketing

While the merits of creativity in marketing remain undoubted, efficiency plays a vital role in the complex orchestration of marketing tools and assets. Marketing organizations can gain efficiency by questioning processes to find more ways to leverage technology.

Having an online collaboration tool embedded within your digital asset management software provides the backbone of the workflow, providing the ability to track each change request, each piece of feedback, and each change executed along the complete workflow. This transparency leads to significant cost savings in agency fees.

CMOs should also consider evaluating the opportunities to industrialize the creation of standard marketing assets such as landing pages and banners. Pre-configured templates enable marketers to launch a campaign or a promotion in a matter of hours versus days or weeks.

Focus on measurable dimensions of customer centricity in a cookie-free world

As the focus shifts from third-party cookies towards primary data, marketing requires the collection of data and preferences from all channels in one single system. Turning to a first-party, data-led multichannel marketing strategy requires marketers to customize content with the help of marketing technology and innovation. Marketers need to create a continuum of feedback and analysis to improve and maximize use of valuable data from multiple sources, including sales, actual experiences, real-time analytics and customer service data.

To leverage this diverse set of information for marketing campaigns, you must enrich it with AI-powered, persona-based and account-based attributes. Intelligent marketing campaign design and marketing platforms allow truly customized content to be automatically created and shared with the customer via their preferred channel of communication.

Generate incremental revenues through real-time AI-driven analytics and campaign steering

Only 54% of marketing decisions are influenced by analytics, and CMOs are often slow to adapt to data-driven decisions and recommendations, according to a 2021 Forrester Global Marketing Study. Many marketing departments still need days or weeks to compile reliable data. The shift to relying on a combination of data from multiple sources, along with the improvement of cross-channel attribution, is paramount to fully understanding customers and the market.

A good system constantly monitors the results based on classic marketing KPIs, ROI and revenues. Underlying negative trends can be identified before they have an impact on marketing campaigns, revenues or business in general. This leads to a more objective, predictive and proactive approach to data discovery, automatically identifying patterns and trends that humans may never uncover. It can unveil bias within data sets stemming from unconscious human preconceptions or flawed data collection techniques, helping to avoid a negative performance impact.

By mirroring the customer journey, attribution modeling reveals which parts of the experience customers prefer and which parts need to be enhanced.

Conclusion

Though CMOs face several challenges, they are now in a unique position to manage marketing like a business operation. Digitalization allows them to harvest significant cost savings by implementing efficient and seamless processes within the marketing team. Additionally, partnering with creative and media agencies frees up resources and allows for significant cost savings. Addressing the skill gap unlocks the full potential of data and technology.

Aligning all campaign data and introducing real-time analytics unleashes incremental revenues. If the data and customization strategy is in place and aligned with the business objectives, CMOs can set their companies apart from competitors, increase Net Promoter Scores significantly and create stable brand loyalty. Now is the time for CMOs, along with their teams and partners, to respond to the CEO’s call to identify new areas of enterprise growth.

Source: ibm.com

Tuesday 5 July 2022

5 recommendations to get your data strategy right


The rise of data strategy

There’s a renewed interest in reflecting on what can and should be done with data, how to accomplish those goals and how to check for data strategy alignment with business objectives. The amazing evolution of technology, cloud and analytics, and what it means for data use, changes quickly, which makes it easy to fall behind if your data strategy and related processes aren’t frequently revisited.


From multicloud and multidata to multiprocess and multitechnology, we live in a multi-everything landscape. Luckily, today’s data management approaches aren’t limited by traditional constraints like location or data patterns. The right data strategy and architecture allows users to access different types of data in different places — on-premises, on any public cloud or at the edge — in a self-service manner. With technologies like machine learning, artificial intelligence or IoT, the resulting insights are more sophisticated and valuable, especially when woven into your organization’s processes and workflows.

The evolution of a multi-everything landscape, and what that means for data strategy

As ecosystems have transformed over the last few years, increasing the opportunities to improve data-driven results, a few main contributing factors have driven major change in how you should think about your data strategy:

◉ The reality of hybrid multicloud and its accelerated adoption has created new possibilities and challenges. According to a recent Institute for Business Value (IBV) study, 97% of enterprises have either piloted, implemented or integrated cloud into their operations. But not all data is best suited for the cloud. While the share of IT spend dedicated to public cloud is expected to decline by 4% between 2020 and 2023, hybrid and multicloud spend is expected to increase up to 17%. Moving, managing and integrating data in a hybrid multicloud ecosystem requires the right data strategy, design and governance to eliminate silos and streamline data access.

◉ The diversity of data types, data processing, integration and consumption patterns used by organizations has grown exponentially. These data types require open platforms and flexible data architectures to ensure consistency, with appropriate orchestration across environments and a rethinking of traditional capabilities and skill sets.

◉ The business areas need more value, faster — it’s a fact that the multi-everything landscape has triggered a more demanding world. Competition plays harder, and every day, new business models and alternatives driven by data and digitalization surface in almost every industry. Lines of business have increased pressure to speed go-to-market of innovation through new data-driven solutions, products or businesses. IT works to manage the underlying risk, security and performance through governance, without limiting flexibility. This balance between innovation and governance leads to new ways of working, like how the portability of data and analytics solutions has become a way to anticipate and adapt to change by enabling high flexibility to run in different environments and avoid vendor lock-in.

5 recommendations for a data strategy in the new multi-everything landscape

When it comes to getting a data strategy right, I like to apply some of the basic principles of a successful business model — scalability, cost-effectiveness and flexibility for change — and extend these concepts to technology, processes and organization. Organizations with data strategies that lack these factors often capture only a small percentage of the potential value of their data and can even increase costs without significant benefits.

In addition to the traditional data strategy considerations, such as recognizing data as a corporate asset or shifting to a data-driven culture with multi-functional teams, here are five recommendations for a data strategy that takes advantage of the multi-everything landscape:

1. Give data assets and accelerators top priority: Develop a process and culture around data that enables true standardization, re-use, portability, speed to action and risk reduction across the end-to-end data lifecycle. From the inception of use cases through the development, deployment, operation and scale of your assets, your data strategy should be supported by the right technologies and platform to enable fully operational and scalable solutions.

2. Establish a true enterprise-centric operational model: It’s critical to have the right operational model that’s fully aligned with the organization’s business objectives and its partnership ecosystem. This requires a deep understanding of the organization’s strengths and weaknesses. Embrace best practices but run away from pure academic approaches. Think big, but prioritize and articulate realistic and actionable plans, establishing the right partnership models along the way. That said, adopt and extend agile techniques as soon as you can.

3. Revisit the extent and approach for data governance: In this multi-everything landscape, data governance functions, processes and technologies should be constantly revisited to manage data quality, metadata, data cataloging, self-service data access, security and compliance across your enterprise-wide data and analytics lifecycle. Extend data governance to foster trust in your data by creating transparency, eliminating bias and ensuring explainability for data and insights fueled by machine learning and AI.

4. Don’t lose the basics: To improve business results, leverage data in a sustainable way and prioritize projects that are scalable, cost-effective, adaptable and repeatable to deliver both near- and long-term results. In all cases, the data strategy should be tightly aligned with your business objectives and strategy and built upon a solid and governed data architecture. It may be tempting to jump quickly into advanced analytics and AI use cases with the promise of astounding results without having considered every implication in the equation, but remember: there is no AI without IA (information architecture).

5. “Show and tell”: Take advantage of proven experiences, new technologies and existing assets as much as possible, and don’t forget to show results quickly. With the capabilities offered by hybrid multicloud environments and innovative co-creation and acceleration methods like the IBM Garage, you have the tools to design, implement and evolve your data strategy to continuously deliver on business outcomes. By showing tangible outcomes, fostering adoption and operationalizing at scale, you can reduce risk and accelerate the journey to a long-lasting, data-driven culture.

While the core principles of a data strategy remain the same, the ‘how’ has changed dramatically in the new data and analytics landscape, and the most successful organizations are the ones most adaptable to change when revisiting their data strategies. Today, approaches and architectural patterns like data fabric and data mesh play an increasingly relevant role, enabled by technologies and platforms like hybrid multicloud. As you look ahead, review your data strategy in light of the opportunities presented by the new multi-everything landscape, and get ready for change.

Source: ibm.com