Tuesday, 28 February 2023

How to use Netezza Performance Server query data in Amazon Simple Storage Service (S3)


In this example, we will demonstrate using current data within a Netezza Performance Server as a Service (NPSaaS) table combined with historical data in Parquet files to determine if flight delays have increased in 2022 due to the impact of the COVID-19 pandemic on the airline travel industry. This demonstration illustrates how Netezza Performance Server (NPS) can be extended to access data stored externally in cloud object storage (Parquet format files).

Background on the Netezza Performance Server capability demo


Netezza Performance Server (NPS) has recently added the ability to access Parquet files by defining a Parquet file as an external table in the database. This allows data that exists in cloud object storage to be easily combined with existing data warehouse data without data movement. The advantage to NPS clients is that they can store infrequently used data in a cost-effective manner without having to move that data into a physical data warehouse table.

To make it easy for clients to understand how to utilize this capability within NPS, a demonstration was created that uses flight delay data for all commercial flights from United States airports, collected by the United States Department of Transportation (Bureau of Transportation Statistics). This data will be analyzed using Netezza SQL and Python code to determine whether flight delays in the first half of 2022 have increased compared to earlier periods within the current data (January 2019 – December 2021).

This demonstration then compares the current flight delay data (January 2019 – June 2022) with historical flight delay data (June 2003 – December 2018) to understand if the flight delays experienced in 2022 are occurring with more frequency or simply following a historical pattern.

For this data scenario, the current flight delay data (2019 – 2022) is contained in a regular, internal NPS database table residing in an NPS as a Service (NPSaaS) instance within the U.S. East2 region of the Microsoft Azure cloud and the historical data (2003 – 2018) is contained in an external Parquet format file that resides on the Amazon Web Services (AWS) cloud within S3 (Simple Storage Service) storage.

All SQL and Python code is executed against the NPS database using Jupyter notebooks, which capture query output and graphing of results during the analysis phase of the demonstration. The external table capability of NPS makes it transparent to a client that some of the data resides externally to the data warehouse. This provides a cost-effective data analysis solution for clients that have frequently accessed data that they wish to combine with older, less frequently accessed data. It also allows clients to store their different data collections using the most economical storage based on the frequency of data access, instead of storing all data using high-cost data warehouse storage.

Prerequisites for the demo


The data set used in this example is a publicly available data set that is available from the United States Department of Transportation, Bureau of Transportation Statistics website at this URL: https://www.transtats.bts.gov/ot_delay/ot_delaycause1.asp?qv52ynB=qn6n&20=E

Using the default settings will return the most recent flight delay data for the last month of data available (for example, in late November 2022, the most recent data available was for August 2022). Any data from June 2003 up until the most recent month of data available can be selected.

The data definition


For this demonstration of NPS external tables capabilities to access AWS S3 data, the following tables were created in the NPS database.

Figure 1 – NPS database table definitions

The primary tables that will be used in the analysis portion of the demonstration are the AIRLINE_DELAY_CAUSE_CURRENT table (2019 – June 2022 data) and the AIRLINE_DELAY_CAUSE_HISTORY (2003 – 2018 data) external table (Parquet file). The historical data is placed in a single Parquet file to improve query performance versus having to join sixteen external tables in a single query.

The following diagram shows the data flows:

Figure 2 – Data flow for data analysis

Brief description of the flight delay data


Before the actual data analysis is discussed, it is important to understand the data columns tracked within the flight delay information and what the columns represent.

A flight is not counted as a delayed flight unless the delay is over 15 minutes from the original departure time.

There are five types of delays that are reported by the airlines participating in flight delay tracking:

◉ Air Carrier – the reason for the flight delay was within the airline’s control such as maintenance or flight crew issues, aircraft cleaning, baggage loading, fueling, and related issues.

◉ Extreme Weather – the flight delay was caused by extreme weather factors such as a blizzard, hurricane, or tornado.

◉ National Aviation System (NAS) – delays attributed to the national aviation system which covers a broad set of conditions such as non-extreme weather, airport operations, heavy traffic volumes, and air traffic control.

◉ Late arriving aircraft – a previous flight using the same aircraft arrived late, causing the present flight to depart late.

◉ Security – delays caused by an evacuation of a terminal or concourse, reboarding of an aircraft due to a security breach, inoperative screening equipment, and/or long lines more than 29 minutes in screening areas.

Since a flight delay can result from more than one of the five reasons, the delays are captured using several different columns of information. The first column, ARR_DELAY15, contains the number of minutes of the flight delay. There are five columns that correspond to the flight delay types: CARRIER_CT, WEATHER_CT, NAS_CT, SECURITY_CT, and LATE_AIRCRAFT_CT. The sum of these five columns will equal the time listed in the ARR_DELAY15 column.

Because multiple factors can contribute to a flight delay, the individual components can represent fractional portions of the overall delay. For example, an overall delay of 4.00 (ARR_DELAY15) might comprise 2.67 for CARRIER_CT and 1.33 for LATE_AIRCRAFT_CT. This allows further analysis to understand all the factors that contributed to the overall flight delay.
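To make that relationship concrete, here is a minimal sanity-check query (a sketch, not taken from the demo notebooks) that counts rows where the five cause columns do not add up to ARR_DELAY15; it relies only on the table and column names already described above.

-- Expect zero rows if every delay decomposes cleanly into its five causes
SELECT COUNT(*) AS mismatched_rows
FROM AIRLINE_DELAY_CAUSE_CURRENT
WHERE ABS(ARR_DELAY15 - (CARRIER_CT + WEATHER_CT + NAS_CT + SECURITY_CT + LATE_AIRCRAFT_CT)) > 0.01;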

Here is an excerpt of the flight delay data to illustrate how the ARR_DELAY15 and flight delay reason columns interact:

Figure 3 – Portion of the flight delay data highlighting the column relationships

Flight delay data analysis


In this final section, the actual data analysis and results of the flight delay data analysis will be highlighted.

After the flight delay tables and external Parquet files were created and the data loaded, several queries were executed to validate that each table covered the correct date range and that valid data was loaded into all the tables (internal and external).
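As an illustration of that validation step, queries along the following lines confirm the date coverage and row counts of each table. This is a sketch only: YEAR is an assumed column name from the BTS download and may differ in an actual schema.

-- Date range and row count for the internal (current) table
SELECT MIN("YEAR") AS first_year, MAX("YEAR") AS last_year, COUNT(*) AS row_count
FROM AIRLINE_DELAY_CAUSE_CURRENT;

-- Same check against the external (historical) Parquet-backed table
SELECT MIN("YEAR") AS first_year, MAX("YEAR") AS last_year, COUNT(*) AS row_count
FROM AIRLINE_DELAY_CAUSE_HISTORY;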

Once this data validation and table verification was complete, the data analysis of the flight delay data began.

The initial data analysis was performed on the data in the internal NPS database table to look at the current flight delay data (2019 – June 2022) using this query.

Figure 4 – Initial analysis on current flight delay data
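The exact query is shown in Figure 4; an approximation of the same idea, assuming a YEAR column from the BTS data, is a simple aggregation of delayed flights by year.

-- Delayed flights per year in the current data (2019 - June 2022)
SELECT "YEAR",
       SUM(ARR_DELAY15) AS delayed_flights
FROM AIRLINE_DELAY_CAUSE_CURRENT
GROUP BY "YEAR"
ORDER BY "YEAR";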

The data was displayed using a bar graph as well to make it easier to understand.

Figure 5 – Bar graph of current flight delay data (2019 – June 2022)

In looking at this graph, it appears that 2022 has fewer flight delays than the other recent years of flight delay data, with the exception of 2020 (the height of the COVID-19 pandemic). However, the flight delay data for 2022 covers six months only (January – June) versus the 12 months of data for the years 2019 through 2021. Therefore, the data must be normalized to provide a true comparison of flight delays between 2019 through 2021 and the partial-year data for 2022.

After the data is normalized by comparing the number of flight delays to the total number of flights, it provides a valid comparison across the 2019 through June 2022 time period.

Figure 6 – There is a higher ratio of delayed flights in 2022 than in the period from 2019 – 2021

As Figure 6 highlights, when looking at the number of delayed flights compared to the total flights for the period, the flight delays in 2022 have increased over the prior years (2019 – 2021).
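A sketch of this normalization expresses delays as a share of total flights per year. ARR_FLIGHTS (the total number of arriving flights) is an assumed column name from the BTS data and is not shown elsewhere in this post.

-- Ratio of delayed flights to total flights per year
SELECT "YEAR",
       SUM(ARR_DELAY15) AS delayed_flights,
       SUM(ARR_FLIGHTS) AS total_flights,
       CAST(SUM(ARR_DELAY15) AS FLOAT) / SUM(ARR_FLIGHTS) AS delay_ratio
FROM AIRLINE_DELAY_CAUSE_CURRENT
GROUP BY "YEAR"
ORDER BY "YEAR";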

The next step in the analysis is to look at the historical flight delay data (2003 – 2018) to determine if the 2022 flight delays follow a historical pattern or if the flight delays have increased in 2022 due to the results of the pandemic period (airport staffing shortages, pilot shortages, and related factors).

Here is the initial query result on the historical flight delay data using a line graph output.

Figure 7 – Initial query using the historical data (2003 – 2018)

Figure 8 – Flight delays increased early in the historical years

After looking at the historical flight delay data from 2003–2018 at a high level, it was determined that the historical data should be separated into two time periods: 2003–2012 and 2013–2018. This separation was determined by analyzing the flight delays for each month of the year (January through December) and comparing them across the historical years (2003–2018). In this comparison, the period from 2013–2018 had fewer flight delays in every month than the period from 2003–2012.
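A sketch of that month-by-month comparison, again assuming YEAR, MONTH, and ARR_FLIGHTS column names, buckets the historical years into the two periods:

-- Delay ratio per month for the two historical periods
SELECT "MONTH",
       CASE WHEN "YEAR" BETWEEN 2003 AND 2012 THEN '2003-2012' ELSE '2013-2018' END AS period,
       CAST(SUM(ARR_DELAY15) AS FLOAT) / SUM(ARR_FLIGHTS) AS delay_ratio
FROM AIRLINE_DELAY_CAUSE_HISTORY
GROUP BY "MONTH",
         CASE WHEN "YEAR" BETWEEN 2003 AND 2012 THEN '2003-2012' ELSE '2013-2018' END
ORDER BY "MONTH", period;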

The result of this query was output in a bar graph format to highlight the lower number of flight delays for the years from 2013–2018.

Figure 9 – Flight delays were lower during 2013 through 2018

The final analysis combines the flight delay data sets and illustrates the benefit of joining external AWS S3 Parquet data with local Netezza data: a monthly analysis of the 2022 flight delay data (local Netezza) is graphed alongside the two historical periods (Parquet): 2003–2012 and 2013–2018.

Figure 10 – The query to calculate monthly flight delays for 2022
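The actual query is shown in Figure 10; a simplified sketch of the same idea, combining the internal table and the external Parquet-backed table in one statement (with the same assumed column names as above), looks like this:

-- 2022 monthly delay ratio from the internal NPS table...
SELECT "MONTH",
       '2022' AS period,
       CAST(SUM(ARR_DELAY15) AS FLOAT) / SUM(ARR_FLIGHTS) AS delay_ratio
FROM AIRLINE_DELAY_CAUSE_CURRENT
WHERE "YEAR" = 2022
GROUP BY "MONTH"
UNION ALL
-- ...combined with the two historical periods from the external table on S3
SELECT "MONTH",
       CASE WHEN "YEAR" <= 2012 THEN '2003-2012' ELSE '2013-2018' END AS period,
       CAST(SUM(ARR_DELAY15) AS FLOAT) / SUM(ARR_FLIGHTS) AS delay_ratio
FROM AIRLINE_DELAY_CAUSE_HISTORY
GROUP BY "MONTH",
         CASE WHEN "YEAR" <= 2012 THEN '2003-2012' ELSE '2013-2018' END
ORDER BY "MONTH", period;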

Figure 11 – Flight delay comparison of 2022 (red) with historical period #1 (2003-2012) (blue) and historical period #2 (2013-2018) (green)

As the flight delay data graph indicates, the flight delays for 2022 are higher for every month from January through June (remember, the 2022 flight delay data is only through June) than in historical period #2 from 2013–2018. Only the oldest historical data (2003–2012) had flight delays comparable to 2022. Since the earlier analysis of current data (2019–June 2022) showed that 2022 had more flight delays than the period from 2019 through 2021, flight delays have increased in 2022 versus the last 10 years of flight delay data. This suggests that the increased flight delays are driven by factors related to the COVID-19 pandemic's impact on the airline industry.

A solution for quicker data analysis


The capabilities of NPS, along with the ability to perform data analysis using Jupyter notebooks and integration with IBM Watson Studio as part of Cloud Pak for Data as a Service (with a free tier of usage), allow clients to quickly analyze a data set that spans the data warehouse and external Parquet format files in the cloud. This combination provides clients flexibility and cost savings by allowing them to host data in a storage medium based on application performance requirements, frequency of data access required, and budgetary constraints. By not requiring a client to move their data into the data warehouse, NPS can provide an advantage over other vendors such as Snowflake.

Supplemental section with additional details


The SQL used to create the native Netezza table with current data (2019-June 2022)
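As a minimal illustrative sketch of such a table (not the actual DDL from the demo), the definition below covers only the columns discussed in this post; the first few column names and all data types are assumptions based on the BTS download.

-- Illustrative only: native NPS table for the current flight delay data
CREATE TABLE AIRLINE_DELAY_CAUSE_CURRENT (
    "YEAR"           INTEGER,
    "MONTH"          INTEGER,
    CARRIER          VARCHAR(10),
    AIRPORT          VARCHAR(10),
    ARR_FLIGHTS      NUMERIC(12,2),
    ARR_DELAY15      NUMERIC(12,2),
    CARRIER_CT       NUMERIC(12,2),
    WEATHER_CT       NUMERIC(12,2),
    NAS_CT           NUMERIC(12,2),
    SECURITY_CT      NUMERIC(12,2),
    LATE_AIRCRAFT_CT NUMERIC(12,2)
);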

The SQL to define a database source in Netezza for the cloud object storage bucket

The SQL to create external table for 2003 through 2018 from parquet files

The SQL to ‘create table as select’ from the parquet file

Source: ibm.com

Saturday, 25 February 2023

5 misconceptions about cloud data warehouses


In today’s world, data warehouses are a critical component of any organization’s technology ecosystem. They provide the backbone for a range of use cases such as business intelligence (BI) reporting, dashboarding, and machine learning (ML)-based predictive analytics that enable faster decision making and insights.

The rise of cloud has allowed data warehouses to provide new capabilities such as cost-effective data storage at petabyte scale, highly scalable compute and storage, pay-as-you-go pricing and fully managed service delivery. Companies are shifting their investments to cloud software and reducing their spend on legacy infrastructure. In 2021, cloud databases accounted for 85% of the market growth in databases. These developments have accelerated the adoption of hybrid-cloud data warehousing; industry analysts estimate that almost 50% of enterprise data has been moved to the cloud.

What is holding back the other 50% of datasets on-premises? Based on our experience speaking with CTOs and IT leaders in large enterprises, we have identified the most common misconceptions about cloud data warehouses that cause companies to hesitate to move to the cloud.

Misconception 1: Cloud data warehouses are more expensive


When considering moving data warehouses from on-premises to the cloud, companies often get sticker shock at the total cost of ownership. However, a more detailed analysis is needed to make an informed decision. Traditional on-premises warehouses require a significant initial capital investment and ongoing support fees, as well as additional expenses for managing the enterprise infrastructure. In contrast, cloud data warehouses may have a higher annual subscription fee, but that fee incorporates what would otherwise be upfront investment and ongoing overhead. Cloud warehouses also provide customers with elastic scalability, cheaper storage, savings on maintenance and upgrade costs, and cost transparency, which gives customers greater control over their warehousing costs. Industry analysts estimate that organizations implementing best practices around cloud cost controls and cloud migration see average savings of 21% when using a public cloud, and that adopters of hybrid cloud pursuing end-to-end reinvention see a 13x revenue growth rate.

Misconception 2: Cloud data warehouses do not provide the same level of security and compliance as on-premises warehouses


Companies in highly regulated industries such as finance, insurance, transportation and manufacturing have a complex set of compliance requirements for their data, often leading to an additional layer of complexity when it comes to migrating data to the cloud. In addition, companies have complex data security requirements. However, over the past decade, a vast array of compliance and security standards, such as SOC2, PCI, HIPAA, and GDPR, have been introduced and met by cloud providers. The rise of sovereign clouds and industry-specific clouds is addressing the concerns of governmental and industry-specific regulatory requirements. In addition, warehouse providers take on the responsibility of patching and securing the cloud data warehouse, to ensure that business users stay compliant with the regulations as they evolve.

Misconception 3: All data warehouse migrations are the same, irrespective of vendors


While migrating to the cloud, CTOs often feel the need to revamp and “modernize” their entire technology stack – including moving to a new cloud data warehouse vendor. However, a successful migration usually requires multiple rounds of data replication, query optimization, application re-architecture and retraining of DBAs and architects.

To mitigate these complexities, organizations should evaluate whether a hybrid-cloud version of their existing data warehouse vendor can satisfy their use cases, before considering a move to a different platform. This approach has several benefits, such as streamlined migration of data from on-premises to the cloud, reduced query tuning requirements and continuity in SRE tooling, automations, and personnel. It also enables organizations to create a decentralized hybrid-cloud data architecture where workloads can be distributed across on-prem and cloud.

Misconception 4: Migration to cloud data warehouses needs to be 0% or 100%


Companies undergoing cloud migrations often feel pressure to migrate everything to the cloud to justify the investment of the migration. However, different workloads may be better suited for different deployment environments. With a hybrid-cloud approach to data management, companies can choose where to run specific workloads, while maintaining control over costs and workload management. It allows companies to take advantage of the benefits of the cloud, such as scale and elasticity, while also retaining the control and security of sensitive workloads in-house. For example, Marriott International built a decentralized hybrid-cloud data architecture while migrating from their legacy analytics appliances, and saw a nearly 90% increase in performance. This enabled data-driven analytics at scale across the organization.

Misconception 5: Cloud data warehouses reduce control over your deployment


Some DBAs believe that cloud data warehouses lack the control and flexibility of on-prem data warehouses, making it harder to respond to security threats, performance issues or disasters. In reality, cloud data warehouses have evolved to provide the same control maturity as on-prem warehouses. Cloud warehouses also provide a host of additional capabilities such as failover to different data centers, automated backup and restore, high availability, and advanced security and alerting measures. Organizations looking to increase adoption of ML are turning to cloud data warehouses that support new, open data formats to catalog, ingest, and query unstructured data types. This functionality provides access to data by storing it in an open format, increasing flexibility for data exploration and ML modeling used by data scientists, facilitating governed data use of unstructured data, improving collaboration, and reducing data silos with simplified data lake integration.

Additionally, some DBAs worry that moving to the cloud reduces the need for their expertise and skillset. However, in reality, cloud data warehouses only automate the operational management of data warehousing such as scaling, reliability and backups, freeing DBAs to work on high value tasks such as warehouse design, performance tuning and ecosystem integrations.

By addressing these five misconceptions of cloud data warehouses and understanding the nuances, advantages, trade-offs and total cost ownership of both delivery models, organizations can make more informed decisions about their hybrid-cloud data warehousing strategy and unlock the value of all their data.

Getting started with a cloud data warehouse


At IBM we believe in making analytics secure, collaborative and price-performant across all deployments, whether running in the cloud, hybrid, or on-premises. For those considering a hybrid or cloud-first strategy, our data warehousing SaaS offerings, including IBM Db2 Warehouse and Netezza Performance Server, are available across AWS, Microsoft Azure, and IBM Cloud and are designed to provide customers with the availability, elastic scaling, governance, and security required for SLA-backed, mission-critical analytics.

When it comes to moving workloads to the cloud, IBM’s Expert Labs migration services ensure 100% workload compatibility between on-premises workloads and SaaS solutions.

No matter where you are in your journey to cloud, our experts are here to help customize the right approach to fit your needs. See how you can get started with your analytics journey to hybrid cloud by contacting an IBM database expert today.

Source: ibm.com

Thursday, 23 February 2023

Sustainability begins with design


If you want to make sustainable products today, dabbling at the edges no longer suffices. You must start at the design phase. In fact, 80% of a product’s lifetime emissions are determined by product design.

Achieving sustainability demands a transformation of thought. While 86% of companies have a sustainability strategy—with 73% of those set on a net-zero carbon emissions goal—only 35% act on that strategy. Backward-looking initiatives, like retrofitting products or alternate maintenance schedules, can make a dent in the collective footprint, but it’s forward-looking initiatives that make lasting change.

Designing for sustainability


Designing for sustainability (D4S) will never practically deliver a net-zero environmental impact, but there are five principles that can help companies make a meaningful impact on their sustainability strategies:

1. Reduction in material: the least complex of the five, this looks at improvements in technologies to reduce the amount of material and energy used in production.

2. Modular design: subdividing sophisticated systems into simple modules in order to more efficiently organize complex processes.

3. Design for longevity: extending the use phase of a product by integrating business knowledge, market conditions, company capabilities, technical possibilities and user needs into product concepts to make better strategic decisions.

4. Investing in simulation: making computer-generated models and simulated environments to model, manipulate, and test parts/assemblies before spending time and money on production.

5. Design for recycling: encouraging manufacturers to account for the end of a product’s useful life by considering what else it can become during the design-stage of a product’s development.

Successful development of the increasingly complex products demanded today is only possible by adopting an integrated development lifecycle management approach. This approach frees development resources from repetitive and mundane tasks so they can focus more on novel solutions, provides more data insight that opens new design opportunities, and improves collaboration within and between teams to explore alternative approaches.

It’s important for industries to adopt a key metric of sustainability in their design and development efforts. This metric must have the same weight as time-to-market, cost and profitability.

Aerospace: a case study


Let’s consider aerospace as a case study. We know that airplanes are heavy and inefficient, but how do we avoid, and reverse, the roughly 4% they contribute to climate change? Understanding the inherent and lifetime challenges at stake, Honeywell Aerospace designed an innovative thermal management system that is 300 times lower in Global Warming Potential.

By adopting a sustainability mindset and leveraging IBM’s Engineering Lifecycle Management, Honeywell:

1. Reduced the product development cycle time from 48 months to 24 months while simultaneously reducing R&D investment by 30%

2. Adapted agile practices to a large, integrated system that includes parallel development of multiple sub-assemblies, including both software and hardware

3. Overcame the known challenges of applying agile practices to hardware development by deploying a Scaled Agile Framework to manage integrated features across teams

4. Utilized new and novel tools, adapted organizational constructs and challenged long-held beliefs, like those about waterfall-style program management

As a result, Honeywell’s innovative solution is 35% lighter and 20% more efficient than conventional units and leverages a less polluting refrigerant, while improving time-to-market and lowering development costs. Honeywell Aerospace plans to adopt this comprehensive development strategy more broadly across a variety of projects.

Rethink sustainability by pre-thinking it


Organizations combining commitment to sustainability with execution capabilities—and integrating this effort with digital transformation—create win-win situations. For example, while Honeywell’s design was for aircraft sustainability, their technologies will change the way we commute for work and pleasure, enabling people to move 50-100 miles away from their places of work. The effect is collective and societal.

IBM is committed to helping clients adopt and leverage best practices of integrated development lifecycle management, offering an integrated portfolio for managing requirements, workflow and testing, as well as systems design modeling. This toolset offers a federated data approach which optimizes information sharing and leveraging across the entire development lifecycle, makes data and processes transparent and traceable, enables better regulatory and compliance adherence, and provides better data currency to improve critical business and development decisions.

Source: ibm.com

Tuesday, 21 February 2023

Six innovation strategies for Life Science organizations in 2023


Last year, as life sciences organizations were consumed by the recovery from COVID-19, their focus had to shift rapidly to mitigating supply chain constraints, labor and skill shortages, and by the end of the year, inflationary pressures—all of which were exacerbated by the Russia-Ukraine war.

Along with these challenges, the ongoing drive to reduce costs, improve efficiency and productivity, drive better decision-making and reduce risk will continue to drive pharma investment in cloud, AI/ML, analytics and automation in 2023 despite higher interest rates.

Organizations must rethink their business models to serve a variety of strategic goals, and AI will play an increasingly important role in all of them: supporting drug discovery, trial diversity, forecasting and supply chain functions, and supporting engagement and adherence to decentralized trials and on-market regimens. These shifts will set the stage for further industry transformation as gene therapy and precision health become more widely available.

The macro-environment favors steady long-term focus and growth


The Inflation Reduction Act of 2022 puts pressure on companies to reduce drug and device prices in the USA, causing pharma to leverage technology to drive cost efficiencies and maintain margins. Globally, the shifting policy debate around access and affordability of patented pharmaceuticals exerts additional pressures. An inflationary environment will slow down traditional R&D and leave pharma no choice but to investigate AI/ML and related techniques to accelerate drug discovery and repurposing while reducing costs. This will also stimulate new partnerships (for example, pharma companies working with research labs or providers working with payers) through federated learning and cloud-based digital ecosystems.

Manufacturing costs and costs of clinical trials will continue to rise. The cost of active pharmaceutical ingredients has increased by up to 70% since 2018. Recruiting on-site patients and maintaining on-site trials remains prohibitively expensive. Pharma will focus on digital patient recruitment through social advertising and digital engagement (including wearables) throughout trials to manage adherence and persistence. Decentralized trial structures will push pharma to focus more closely on cybersecurity and protected health information (PHI) while managing the associated costs.

The costs of operating manufacturing facilities will increase as energy prices continue to rise. We predict companies will make significant strides towards digitization to reduce cost, improve quality, reduce recalls and improve safety. Pre-digital facilities with manual processes supported by expensive labor will no longer be the norm.

Strategy 1: Prioritize around novel drug development, generics or consumer engagement


Strategic reprioritization should be top of mind for the C-suite. Last year, Pfizer and GSK left the consumer health sector to prioritize novel drug and vaccine development and core innovation. We saw targeted mergers and acquisitions to replenish pipelines, which will continue in 2023. Novartis spun off Sandoz, their generics business, and is streamlining their research efforts around innovative pharmaceuticals. Sanofi and the generics giant Sun Pharmaceuticals are repositioning to enter specialty pharma, with the latter releasing a new, highly competitive biologic for psoriasis.

Companies are recognizing that divergent business models are required: innovative pharma requires significant capital investments to support state-of-the-art research enabled by new technology, while generic and consumer health business models demand unparalleled scale, access to distribution and close partnerships with pharmacies. A rising risk-return ratio for innovators will lead companies to form research and technology partnerships that combine top talent with the most innovative computation techniques, rather than outright mergers or acquisitions.

Strategy 2: Use AI-driven forecasting and supply chains to improve operational efficiency and sustainability


The upheaval and disruption caused by COVID-19 has given pharma leaders a heightened awareness of resiliency in delivering innovative drugs and therapeutics to communities, underscoring the importance of investing in forecasting and supply chains.

Forecasting

Forecasting continues to be a pain point for many organizations. According to a recent IBM IBV study, 57% of CEOs view the CFO as playing the most crucial role in their organizations over the next two to three years. Legacy processes, demand volatility and increasing data scale and complexity demand a new approach. Traditional quarterly forecasting cycles (which are manual and burdensome) yield inaccurate predictions.

Leading companies will invest in AI, ML and intelligent workflows to deliver end-to-end forecasting capabilities that utilize real-time feeds from multiple data sources, leveraging hundreds of AI and ML models, to deliver more granular and accurate forecasts and customer insights. These capabilities will fundamentally change the role of finance organizations by emphasizing speed of insight, adoption of data-driven decision making and scaling of analytics within the enterprise.

Supply chain

Business leaders will focus on supply chain solutions that drive transparency across sourcing, manufacturing, delivery and logistics while minimizing cost, waste and time. CSCOs are modernizing supply chain operations by using AI to leverage unstructured data in real time and integrating automation, blockchain and edge computing to manage operations and collect and connect information across multiple sources.

Priorities driving supply chain innovation, according to CSCOs

In the wake of COVID-19, we observe leaders viewing the supply chain as a core organizational function rather than a supportive one. David Volk, executive director of clinical supply chain planning at Roche states, “We are a networked organization… collaborating much more broadly across all our partners and the industry. We view ourselves as a supply chain organization, and a significant part of the value we bring to patients lies in optimizing our global supply chain and inventory. That’s a very different mindset, and it’s changed how we run the organization.”

Supply chain sustainability also ranks among the highest priorities for CEOs: 48% of CEOs surveyed say increasing sustainability is a top priority—up 37% since 2021—while 44% cite a lack of data-driven insights as a barrier to achieving sustainability objectives. End-to-end visibility into sustainability impact, such as metrics on emissions and waste from raw material to delivery, will unlock a new level of information that positions CSCOs as key enablers for companies to achieve their sustainability and ESG vision.

Strategy 3: Prepare for an influx of cell and gene therapies


Gene therapy is the new frontier of medicine. It focuses on targeting a person’s genes for modification to treat or cure disease, including cancer, genetic diseases and infectious diseases. The US Food & Drug Administration (FDA) approved the first gene therapy in the United States in 2017. Since then, more than 20 cell and gene therapy products have been approved.

According to the Alliance for Regenerative Medicine, we could see five more gene therapies for rare diseases introduced to the U.S. market in 2023, including new treatments for sickle cell disease, Duchenne muscular dystrophy and hemophilia.

These therapies will challenge life sciences organizations to rethink their business models. How will they efficiently determine which patients are eligible for these therapies? How will they obtain the patient’s blood as part of the therapy? How will they contract with payers for reimbursement, given these therapies can cost upwards of $3M per treatment? How will they track outcomes from treatment for outcome-based agreements? These questions and many more spanning payment models, consumer experience, supply chain and manufacturing will need to be addressed.

A key driver in the growth of gene therapies and adoption of precision health is the growth and accessibility of next-generation DNA sequencing (NGS). NGS will become more mainstream, moving the science out of the lab to deliver improved patient care and outcomes at scale. NGS delivers ultra-high throughput, scalability and speed and has revolutionized the industry by enabling a wide variety of analyses across multiple applications at a level never before possible. This includes delivering whole-genome sequencing at accessible and practical price points for researchers, scientists, doctors and patients. An example is the new Illumina NovaSeq X sequencer released in September 2022, which is twice as fast as prior models and capable of sequencing 20,000 genomes per year at a cost of $200 per genome. As the price of sequencing genomes declines, the ability to support personalized healthcare and gene therapy at scale will continue to grow.

Strategy 4: Accelerate development and delivery of lifesaving therapies through decentralized clinical trials


Limitations of traditional clinical trials were amplified during the COVID-19 pandemic and have accelerated the use of decentralized clinical trials (DCTs). There is a clear need to improve study formats so broader, more equitable populations are accessed and included. New technologies will help integrate patient data points and derive holistic insights like never before. Life sciences organizations will increase their use of DCTs to run global studies and bring new therapies to market. We expect a record number of decentralized trials in 2023.

Key benefits of DCTs include:

◉ Faster recruitment. Participants can be identified and engaged without the need to travel and be evaluated in person.
◉ Improved retention. Participants are less likely to drop out of a trial due to the typical in-person requirements.
◉ Greater control, convenience and comfort. Participants are more comfortable engaging at home and at local patient care sites.
◉ Increased diversity. Participants in legacy trials lacked diversity and contributed to gaps in understanding of diseases.

As DCTs are more broadly adopted, designing trials around the patient experience will be critical to ensuring clear, transparent engagement and willing and active participation. Methodologies such as Enterprise Design Thinking can provide a useful framework. Likewise, integrating patient data from multiple sources such as electronic health and medical records, electronic data capture platforms, clinical data management systems, wearables and other digital technologies will require a more open approach to information sharing.

Quantum computing will enable more advanced DCT capabilities for recruitment, trial site selection, and optimization and patient surveillance. Quantum-based algorithms can outperform existing computer algorithms, enabling better analysis of integrated patient data at scale.

In the coming years, decentralized trials will become the norm, improving the ability to recruit, select and deliver clinical trials at scale, ensuring full and diverse populations are represented and lifesaving treatments are more quickly approved and launched.

Strategy 5: Explore AI-driven drug discovery


AI-driven drug discovery continues to gain momentum and achieve critical milestones. The first AI-designed drug candidate to enter clinical trials was reported by Exscientia in early 2020. Since then, companies such as Insilico Medicine, Evotec and Schrödinger have announced phase I trials. Several candidates have had their clinical development accelerated through AI-enabled solutions. Within drug companies focused on AI-based discovery, there is publicly available information on about 160 discovery programs, of which 15 products are reportedly in clinical development.

Some execs may think AI can be delivered through a “tool in the public cloud” or by a single team. From our experience working with life sciences companies, this is not the case. Achieving full value from AI requires transformation of the discovery process spanning new tech, new talent and new behaviors throughout the R&D organization.

The AI-driven discovery process delivers value across four dimensions: finding the right biological target, designing a small molecule as a preclinical candidate, improving success rates and delivering overall speed and efficiency.

Search for new biological targets

We see the research community and industry scientists pursuing integration of multiomics and clinical data with machine learning to achieve drug repositioning. Leveraging experimental data and literature analysis, it is possible to uncover new disease pathways and polypharmacological and protein interactions. Application of AI to imaging (and other diagnostic techniques that rigorously analyze phenotypic outputs) may offer opportunities to identify new biological targets. Some of our clients look to understand protein interactions, function and motion using traditional computation techniques as well as quantum computing.

Use new techniques to search for new molecules

Using a deep search technique, it is possible today to mine the research literature and published experimental data to predict new small molecule structures and how molecules will behave. This and other techniques can be used to predict pharmacokinetic and pharmacodynamic properties and help identify off-target effects.

Explore the promise of quantum computing

Since 2020, there have been numerous quantum-related activities and experiments in the field of life sciences, spanning genomics, clinical research and discovery, diagnostics, treatments and interventions. Quantum-driven machine learning, trained on diverse clinical and real-world data sets, has been applied to molecular entity generation, diagnostics, forecasting effectiveness and tailoring radiotherapy.

Strategy 6: Use digital engagement to increase sales efficiency, patient loyalty and adherence


For healthcare providers

Conventional face-to-face visits to healthcare providers (HCPs) have reached the limit of effectiveness. HCPs now expect personalized approaches and instant access to knowledge. Increased scrutiny by public authorities, along with COVID-19, disrupted a traditional approach where sales reps had HCP offices and hospitals as their second home. A virtual engagement model emerged that is less effective in its current form.

At the same time, industry sees the value of an omnichannel HCP engagement strategy: our analysis shows 5-10% higher satisfaction with a new HCP experience, 15-25% more effective marketing spend, 5-7% boost in active prescribers and up to 15% lift in recurring revenue depending on the indication.

Pharma companies have enough data on certain products to enable a personalized experience for HCPs. An analytics and AI-driven approach to engagement with clinicians provides the highest impact as it improves both their speed-to-decision and their awareness of the latest clinical evidence. Well-defined technology and data strategies, along with change management and talent identification programs, are key to success.

For patients

Adherence and persistence are major challenges in an industry that caters to chronic patients. Additionally, with new reimbursement models, payers incentivize “complete” cases that achieve prolonged remission or, for acute patients, functional recovery. For pharma companies to keep selling medications and getting paid for them, patients need to take them continuously. For many indications, patients have many pharmaceutical options. Successful companies will differentiate themselves in the market by offering digital support for their pharmaceuticals, engaging patients in their care on their smartphones through gamification and incentive programs.

How technology can advance new biologics for treatment of plaque psoriasis

Bills and regulations will increase the adoption and application of AI


AI underpins the trends mentioned above. While AI technology has been around for decades, its adoption in life sciences has accelerated over the last several years, impacting drug development, clinical trials and supply chains. AI is infused into many of our daily interactions, from calling an airline to rebook tickets, asking Alexa to play music and turn on the lights, receiving an approval for a loan, to providing automated treatment recommendations to patients based on their clinical history and the latest treatment guidelines.

As AI continues to permeate our lives, oversight will be front and center. Both the United States and the EU consider regulation to be essential to the development of AI tools that consumers can trust. Life sciences companies must understand the impact AI regulations have on their business models and play a proactive role in influencing these policies in the interest of better patient outcomes.

As an example, IBM’s Policy Lab takes a proactive approach to providing policymakers with a vision and actionable recommendations to harness the benefits of innovation while ensuring trust in a world being reshaped by data. IBM works with organizations and policymakers to share our perspective to support responsible innovations. One such bill was the Biden-Harris administration’s Blueprint for an AI Bill of Rights released in September 2022. As stated in the Bill, “AI systems have the potential to bring incredible societal benefits, but only if we do the hard work of ensuring AI products and services are safe and secure, accurate, transparent, free of harmful bias and otherwise trustworthy.” The Bill lays out five commonsense protections to which everyone in America should be entitled in the design, development and deployment of AI and other automated technologies:

◉ Right to safe and effective systems. You should be protected from unsafe or ineffective systems.
◉ Algorithmic discrimination protections. You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
◉ Data privacy. You should be protected from abusive data practices via built-in protections and have agency over how your data is used.
◉ Notice and explanation. You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
◉ Human alternatives, consideration and fallback. You should be able to opt out where appropriate and have access to a person who can quickly consider and remedy problems you encounter.

The application of AI is not slowing down, nor is scrutiny of it. Life sciences organizations will differentiate themselves by having a seat at the table. They will seek opportunity to influence AI-health policy and deliver ethical and responsible AI-powered solutions that augment their existing product portfolio and improve patient and provider experiences and healthcare outcomes at reduced costs.

Embrace new technologies to offer major advances


Life sciences companies, particularly in pharma and biotech, can prove resilient despite inflationary pressures. They must focus on business model specialization across innovation and invention, generics business and consumer health. Strong demand can help companies overcome business challenges and position the industry for steady innovation-led growth. It is crucial to embrace new technologies, particularly state-of-the-art computing and AI, to offer major advances that may represent a paradigm shift in drug discovery, clinical trial site optimization, and, ultimately, engagement with a person receiving care. Acting boldly in 2023 with a clearly articulated strategy and prioritization will set both mature life sciences organizations and new players on the right path. Companies that focus on strategy and innovation will be the biggest winners.

Source: ibm.com

Saturday, 18 February 2023

5 ways to get metaverse-ready right now


Building customer experiences for the metaverse isn’t about the headset, it’s about the mindset


As a specialist in spatial computing and extended reality (XR), Jeffrey Castellano helps clients and partners separate hype from the real-world value in this ecosystem of emerging technologies known as the metaverse. For him, the focus should never be on virtual reality (VR) headsets but on improving customer experiences.

Focusing on technology alone can leave the mistaken impression that businesses can’t really start building until a definitive metaverse platform emerges. Castellano sees the metaverse evolving just as the internet evolved. The internet isn’t a technology. It uses a lot of technology, but it’s just a conceptual framework for connecting information. The metaverse is the same—it’s about the connection points. This distinction is important: it doesn’t matter whether a hardware manufacturer releases a headset, or a technology platform goes crashing into flames—the cultural drivers are why companies are here, trying to find opportunity while they can. We’re here to facilitate improved customer experiences through the lens of a cultural and paradigm shift.

Castellano says a technology focus can lead people to build for the metaverse backward—like “starting in the end zone.” Instead, Castellano recommends designing for the customer experiences you already have. “Look for opportunities to add to things that already exist—to make existing experiences better by adding metaverse-like features.” That could be anything from creating avatars in meeting software for more lifelike collaborations to building a digital twin of a jet engine so engineers can troubleshoot problems without being there in real life.

In the context of designing these better user experiences, haptics and augmented reality (AR) glasses are just a few tools among many. The metaverse is an evolving collection of technologies that immerse users in a real-time social network of commerce-connected 3D spaces. That can include virtual reality—characterized by persistent virtual worlds that continue to exist even when you’re not there—as well as augmented reality, which combines aspects of digital worlds and physical worlds. But virtual spaces like the ones accessible in online multiplayer video games such as Fortnite can be played on existing PCs, game consoles and phones. These count as part of the metaverse too—which is why organizations don’t have to wait to start building.

Researchers at Gartner expect that by 2026 one-quarter of us will spend at least one hour in a metaverse to work, play or learn every day. With such a vast potential customer pool—J.P. Morgan envisions a metaverse market opportunity of over USD 1 trillion—enterprises can’t wait for some great platform shakeout to start making their metaverse ambitions real. Building toward the metaverse in incremental steps makes the investment much less risky. Each step will generate return on investment—so much so, Castellano says, that “the end goal will start paying for itself.”

Here are five action steps businesses can take to create next-generation customer experiences that evolve alongside the metaverse itself.

1. Define your metaverse


There is no silver bullet solution that will unlock the metaverse. Every organization needs to develop an approach and experiences that are informed by its own needs and goals—be it enabling 3D try-before-you-buy options or virtual lounges where consumers can interact—so that those experiences make sense for its customers and employees. The metaverse implementations will look, sound and feel very different for a retailer versus a bank versus a manufacturer. Working to define your metaverse, sometimes called a microverse (think internet compared to intranet) will keep you focused and ensure a strategically sound approach. Success really depends on “knowing what your people are asking for,” Castellano says. This report from the IBM Institute for Business Value provides examples of how enterprises are approaching this strategy.

2. Map the user experience


From 3D interactions to spatial computing, virtual world-building to identity management, the options for creating immersive experiences are vast. Organizations building for the metaverse must think of themselves as orchestrators of their customers’ experience. Start by mapping potential customer journeys in the metaverse against the use cases that are already working for your customer base to develop digital assets and metaverse experiences that flow organically from existing paths.

3. Architect your infrastructure


For enterprises, the operative term guiding metaverse development should be “horizontal enablement.” Your metaverse can’t be an experience that functions in isolation. You need to architect your infrastructure so your existing technology can extend into your emerging metaverse, buttressing a unified back end to enable ever-more diverse experiences on the front end. As Castellano says, “We’d rather build experiences that you can own, that fit into your ecosystem and existing APIs, enhancing key moments with metaverse moments for the audience you have at hand.”

4. Prepare your people


To extend your digital customer experiences into the third dimension, your teams need to be equipped to work in that environment. As you build your technological capacity, it is also time to ensure your people are developing the capacity to support and operate in these digital worlds and social platforms. From production workflows, blockchain technology and the interoperability of systems to managing virtual storefronts, cryptocurrency and non-fungible tokens (NFTs), data skills training empowers teams to develop specialization tools as a competitive differentiator.

5. Secure the virtual perimeter


New worlds bring new opportunities for malware, identity theft and other data threats. As more customers flock to the metaverse, brands are struggling with security and identity around avatars, portals, access and digital currencies, including crypto. For the 12th year in a row, the United States holds the title for the highest cost of a single data breach at USD 9.44 million—more than USD 5 million greater than the global average. If you aren’t integrating smart identity, advanced cryptography and access management tools into experiences within the metaverse, you risk not just data breaches but the erosion of customer loyalty. And you want those consumers to stick around long enough to experience the promise of the metaverse you’re starting to build.

The prospect of taking bold steps into the metaverse can feel daunting when so much uncertainty remains about how everyday users will connect with it. But the same was true in the early days of the web, social media and mobile technology. The common thread running through all those moments of dramatic technological change? Those who waited to see how things shook out were inevitably left behind. The metaverse is still in its discovery stage— which is just the right time to embark on the voyage. “We’re not talking about some sci-fi future,” Castellano says. “We’re talking about now.”

Source: ibm.com

Thursday, 16 February 2023

A step-by-step guide to setting up a data governance program


In our last blog, we delved into the seven most prevalent data challenges that can be addressed with effective data governance. Today we will share our approach to developing a data governance program to drive data transformation and fuel a data-driven culture.

Data governance is a crucial aspect of managing an organization’s data assets. The primary goal of any data governance program is to deliver against prioritized business objectives and unlock the value of your data across your organization.

Realize that a data governance program cannot exist on its own – it must solve business problems and deliver outcomes. Start by identifying business objectives, desired outcomes, key stakeholders, and the data needed to deliver these objectives. Technology and data architecture play a crucial role in enabling data governance and achieving these objectives.

Don’t try to do everything at once! Focus and prioritize what you’re delivering to the business, determine what you need, deliver and measure results, refine, expand, and deliver against the next priority objectives. A well-executed data governance program ensures that data is accurate, complete, consistent, and accessible to those who need it, while protecting data from unauthorized access or misuse.


Consider the following four key building blocks of data governance:

◉ People refers to the organizational structure, roles, and responsibilities of those involved in data governance, including those who own, collect, store, manage, and use data.

◉ Policies provide the guidelines for using, protecting, and managing data, ensuring consistency and compliance.

◉ Process refers to the procedures for communication, collaboration, and data management, including data collection, storage, protection, and usage.

◉ Technology refers to the tools and systems used to support data governance, such as data management platforms and security solutions.


For example, if the goal is to improve customer retention, the data governance program should focus on where customer data is produced and consumed across the organization, ensuring that the organization’s customer data is accurate, complete, protected, and accessible to those who need it to make decisions that will improve customer retention.
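
As a minimal sketch of what this can look like in practice, a couple of simple completeness checks make “accurate and complete customer data” something you can measure. The DataFrame, field names, and values below are hypothetical:

import pandas as pd

# Hypothetical customer records; in practice these come from your CRM or data warehouse
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "email": ["a@example.com", None, "c@example.com", "d@example.com"],
    "last_purchase_date": ["2023-01-15", "2022-11-03", None, "2023-02-01"],
})

# Completeness: the share of records that have the fields needed for retention analysis
email_completeness = customers["email"].notna().mean()
purchase_completeness = customers["last_purchase_date"].notna().mean()

print(f"Email completeness: {email_completeness:.0%}")
print(f"Last purchase date completeness: {purchase_completeness:.0%}")

Checks like these give the governance team a baseline to measure improvement against.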

It’s important to coordinate and standardize policies, roles, and data management processes to align them with the business objectives. This will ensure that data is being used effectively and that all stakeholders are working towards the same goal.

Starting a data governance program may seem like a daunting task, but by starting small and focusing on delivering prioritized business outcomes, data governance can become a natural extension of your day-to-day business.

Building a data governance program is an iterative and incremental process


Step 1: Define your data strategy and data governance goals and objectives

What are the business objectives and desired results for your organization? You should consider both long-term strategic goals and short-term tactical goals, and remember that goals may be influenced by external factors such as regulations and compliance requirements.


A data strategy identifies, prioritizes, and aligns business objectives across your organization and its various lines of business. Across multiple business objectives, a data strategy will identify data needs, measures and KPIs, stakeholders, and required data management processes, technology priorities and capabilities.

It is important to regularly review and update your data strategy as your business and priorities change. If you don’t have a data strategy, you should build one – it doesn’t take a long time, but you do need the right stakeholders to contribute.

Once you have a clear understanding of business objectives and data needs, set data governance goals and priorities. For example, an effective data governance program may:

◉ Improve data quality, which can lead to more accurate and reliable decision making
◉ Increase data security to protect sensitive information
◉ Enable compliance and reporting against industry regulations
◉ Improve overall trust and reliability of your data assets
◉ Make data more accessible and usable, which can improve efficiency and productivity.

Clearly defining your goals and objectives will guide the prioritization and development of your data governance program, ultimately driving revenue, cost savings, and customer satisfaction.
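
One lightweight way to keep goals actionable is to record each one with its business objective, measure, target, and priority so it can be tracked over time. The sketch below is illustrative only; the goal names, measures, and targets are hypothetical:

from dataclasses import dataclass

@dataclass
class GovernanceGoal:
    # Hypothetical structure for tracking a single data governance goal
    name: str
    business_objective: str
    measure: str
    target: str
    priority: int  # 1 = highest

goals = [
    GovernanceGoal("Improve customer data quality", "Improve customer retention",
                   "Completeness of key customer fields", ">= 98%", 1),
    GovernanceGoal("Enable regulatory reporting", "Meet industry compliance obligations",
                   "Reports delivered on schedule", "100%", 2),
]

for goal in sorted(goals, key=lambda g: g.priority):
    print(f"[P{goal.priority}] {goal.name}: {goal.measure}, target {goal.target}")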

Step 2: Secure executive support and essential stakeholders

Identify the key stakeholders and roles for the data governance program and who will need to be involved in its execution. This should include employees, managers, IT staff, data architects, line-of-business owners, and data custodians within and outside your organization.

An executive sponsor is crucial – an individual who understands the significance and objectives of data governance, recognizes the business value that data governance enables, and who supports the investment required to achieve these outcomes.

With key sponsorship in place, assemble the team to shape the compelling narrative, define what needs to be accomplished, determine how to raise awareness, and build the funding model that will support the implementation of the data governance program.

The following is an example of typical stakeholder levels that may participate in a data governance program:

[Image: example of typical stakeholder levels in a data governance program]

By effectively engaging key stakeholders and identifying and delivering clear business value, the implementation of a data governance program can become a strategic advantage for your organization.

Step 3: Assess, build & refine your data governance program

With your business objectives understood and your data governance sponsors and stakeholders in place, it’s important to map these objectives against your existing People, Process, and Technology capabilities.


Data management frameworks such as the EDM Council’s DCAM and CDMC offer a structured way to assess your data maturity against industry benchmarks with a common language and set of data best practices.

Look at how data is currently being governed and managed within your organization. What are the strengths and weaknesses of your current approach? What is needed to deliver key business objectives?

Remember, you don’t have to (nor should you) do everything at once. Identify areas for improvement, in context of business objectives, to prioritize your efforts and focus on the most important areas to deliver results to the business in a meaningful way. An effective and efficient data governance program will support your organization’s growth and competitive advantage.
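
A full DCAM or CDMC assessment is far more detailed, but a simple self-assessment sketch illustrates the idea of comparing current and target maturity to find the biggest gaps. The capability areas and scores below are hypothetical and are not framework content:

# Hypothetical self-assessment scores (1 = initial, 5 = optimized) for a few capability areas
current = {"Data strategy": 3, "Data quality": 2, "Metadata": 2, "Data protection": 4}
target = {"Data strategy": 4, "Data quality": 4, "Metadata": 3, "Data protection": 4}

# Largest gaps first, to help prioritize where governance effort delivers the most value
for area in sorted(current, key=lambda a: target[a] - current[a], reverse=True):
    print(f"{area}: current {current[area]}, target {target[area]}, gap {target[area] - current[area]}")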

Step 4: Document your organization’s data policies

Data policies are a set of documented guidelines for how an organization’s data assets are consistently governed, managed, protected and used. Data policies are driven by your organization’s data strategy, align with business objectives and desired outcomes, and may be influenced by internal and external regulatory factors. Data policies may cover topics such as data collection, storage, and usage, as well as data quality and security.


Data policies ensure that your data is being used in a way that supports the overall goals of your organization and complies with relevant laws and regulations. This can lead to improved data quality, better decision making, and increased trust in the organization’s data assets, ultimately leading to a more successful and sustainable organization. 
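
Documented policies can also be expressed as machine-checkable rules so they are applied consistently. The sketch below is illustrative only; the field names, retention period, and policy structure are hypothetical:

# Hypothetical policy definition: required fields, sensitive fields, and retention
customer_data_policy = {
    "required_fields": ["customer_id", "email", "country"],
    "sensitive_fields": ["email"],   # must be masked outside approved use
    "retention_days": 2555,          # roughly seven years, illustrative only
}

def check_record(record, policy):
    """Return a list of policy violations for a single record."""
    violations = []
    for field in policy["required_fields"]:
        if not record.get(field):
            violations.append(f"missing required field: {field}")
    return violations

print(check_record({"customer_id": 42, "country": "US"}, customer_data_policy))
# ['missing required field: email']

Keeping policy definitions in a simple, versioned format like this makes them easier to review with stakeholders and to evolve as regulations change.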

Step 5: Establish roles and responsibilities

Define clear roles and responsibilities of those involved in data governance, including those responsible for collecting, storing, and using data. This will help ensure that everyone understands their role and can effectively contribute to the data governance effort.

[Table: example data governance roles and responsibilities]

The structure of data governance can vary depending on the organization. In a large enterprise, data governance may have a dedicated team overseeing it (as in the table above), while in a small business, data governance may be part of existing roles and responsibilities. A hybrid approach may also be suitable for some organizations. It is crucial to consider company culture and to develop a data governance framework that promotes data-driven practices. The key to success is to start small, learn and adapt, while focusing on delivering and measuring business outcomes.

Having a clear understanding of the roles and responsibilities of data governance participants can ensure that they have the necessary skills and knowledge to perform their duties.
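
Even a simple mapping of data domains to owners, stewards, and custodians makes accountability explicit. The domains and role names in the sketch below are hypothetical:

# Hypothetical mapping of data domains to accountable roles
data_domains = {
    "Customer": {"owner": "VP Marketing", "steward": "Customer Data Steward", "custodian": "CRM Platform Team"},
    "Finance": {"owner": "CFO", "steward": "Finance Data Steward", "custodian": "ERP Platform Team"},
}

def who_approves_access(domain):
    # In this illustrative model, the data owner is accountable for access decisions
    return data_domains[domain]["owner"]

print(who_approves_access("Customer"))  # VP Marketing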

Step 6: Develop and refine data processes

Data governance processes ensure effective decision making and enable consistent data management practices by coordinating teams across (and outside of) your organization. Additionally, data governance processes can also ensure compliance with regulatory standards and protect sensitive data.

Data processes provide formal channels for direction, escalation, and resolution. Data governance processes should be lightweight to achieve your business goals without adding unnecessary burden or hindering innovation.

Processes may be automated through tools, workflow, and technology.

It is important to establish these processes early to prevent issues or confusion that may arise later in the data management implementation.
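
For example, a lightweight escalation rule can be automated so that data issues are routed consistently without adding bureaucracy. The severities, thresholds, and routing targets below are hypothetical:

# Hypothetical routing rule for data issues raised by stewards or data consumers
def route_issue(severity, days_open):
    """Decide who handles a data issue, keeping the process lightweight."""
    if severity == "high" or days_open > 5:
        return "escalate to the data governance council"
    if severity == "medium":
        return "assign to the domain data steward"
    return "log and review at the next working-group meeting"

print(route_issue("medium", 2))  # assign to the domain data steward
print(route_issue("low", 10))    # escalate to the data governance council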

Step 7: Implement, evaluate, and adapt your strategy

Once you have defined the components of your data governance program, it’s time to put them into action. This could include implementing new technologies or processes, or making changes to existing ones.


It is important to remember that data governance programs can only be successful if they demonstrate value to the business, so you need to measure and report on the delivery of the prioritized business outcomes. Regularly monitoring and reviewing your strategy will ensure that it is meeting your goals and business objectives.
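
A simple way to make that reporting concrete is to track each prioritized outcome against its target over time. The KPI, monthly values, and target below are hypothetical:

# Hypothetical monthly results for one prioritized outcome: customer email completeness
kpi_history = {"2023-01": 0.91, "2023-02": 0.94, "2023-03": 0.97}
target = 0.98

for month, value in kpi_history.items():
    status = "on track" if value >= target else "below target"
    print(f"{month}: {value:.0%} ({status}, target {target:.0%})")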

Continuously evaluate your goals and objectives and adjust as needed. This will allow your data governance program to evolve and adapt to the changing needs of the organization and the industry. An approach of continuous improvement will enable your data governance program to stay relevant and deliver maximum value to the organization.

Get started on your data governance program


In conclusion, by following an incremental structured approach and engaging key stakeholders, you can build a data governance program that aligns with the unique needs of your organization and supports the delivery of accelerated business outcomes.

Implementing a data governance program can present unique challenges such as limited resources, resistance to change and a lack of understanding of the value of data governance. These challenges can be overcome by effectively communicating the value and benefits of the program to all stakeholders, providing training and support to those responsible for implementation, and involving key decision-makers in the planning process.

By implementing a data governance program that delivers key business outcomes, you can ensure the success of your program and drive measurable business value from your organization’s data assets, while effectively managing your data, improving data quality, and maintaining the integrity of data throughout its lifecycle.

Source: ibm.com