Saturday, 30 April 2022

How a data fabric overcomes data sprawl to reduce time to insights

Data agility, the ability to store and access your data from wherever makes the most sense, has become a priority for enterprises in an increasingly distributed and complex environment. The time required to discover critical data assets, request access to them and finally use them to drive decision making can have a major impact on an organization’s bottom line. To reduce delays, human errors and overall costs, data and IT leaders need to look beyond traditional data best practices and shift toward modern data management agility solutions that are powered by AI. That’s where the data fabric comes in.

A data fabric can simplify data access in an organization to facilitate self-service data consumption, while remaining agnostic to data environments, processes, utility and geography. By using metadata-enriched AI and a semantic knowledge graph for automated data enrichment, a data fabric continuously identifies and connects data from disparate data stores to discover relevant relationships between the available data points. Consequently, a data fabric self-manages and automates data discovery, governance and consumption, which enables enterprises to minimize their time to value. You can enhance this by appending master data management (MDM) and MLOps capabilities to the data fabric, which creates a true end-to-end data solution accessible by every division within your enterprise.

Data fabric in action: Retail supply chain example

To truly understand the data fabric’s value, let’s look at a retail supply chain use case where a data scientist wants to predict product back orders so that they can maintain optimal inventory levels and prevent customer churn.

Problem: Traditionally, developing a solid backorder forecast model that takes every factor into consideration would take anywhere from weeks to months as sales data, inventory or lead-time data and supplier data would all reside in disparate data warehouses. Obtaining access to each data warehouse and subsequently drawing relationships between the data would be a cumbersome process. Additionally, as each SKU is not represented uniformly across the data stores, it is imperative that the data scientist is able to create a golden record for each item to avoid data duplication and misrepresentation.

Solution: A data fabric introduces significant efficiencies into the backorder forecast model development process by seamlessly connecting all data stores within the organization, whether they are on-premises or in the cloud. Its self-service data catalog auto-classifies data, associates metadata with business terms and serves as the only governed data resource the data scientist needs to create the model. Not only will the data scientist be able to use the catalog to quickly discover necessary data assets, but the semantic knowledge graph within the data fabric will make relationship discovery between assets easier and more efficient.

The data fabric allows for a unified and centralized way to create and enforce data policies and rules, which ensures that the data scientist only accesses assets that are relevant to their job. This removes the need for the data scientist to request access from a data owner. Additionally, the data privacy capability of a data fabric ensures the appropriate privacy and masking controls are applied to data used by the data scientist. You can use the data fabric’s MDM capabilities to generate golden records that ensure product data consistency across the various data sources and enable a smoother experience when integrating data assets for analysis. By exporting an enriched, integrated dataset to a notebook or AutoML tool, data scientists can spend less time wrangling data and more time optimizing their machine learning model. This prediction model could then easily be added back to the catalog (along with the model’s training and test data, to be tracked through the ML lifecycle) and monitored.
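To make that last step concrete, here is a minimal sketch of what the modeling step might look like once the fabric has delivered a governed, integrated dataset. The column names (on_hand_qty, lead_time_days, weekly_sales, backordered) and the scikit-learn model choice are illustrative assumptions, not part of IBM's solution.

```python
# Minimal sketch: training a backorder classifier on a (stand-in) enriched
# dataset. In practice the dataframe would be exported from the data fabric's
# catalog rather than constructed inline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.DataFrame({
    "on_hand_qty":    [120, 4, 0, 55, 9, 300, 2, 18],
    "lead_time_days": [7, 21, 30, 5, 14, 3, 28, 10],
    "weekly_sales":   [40, 12, 25, 10, 30, 35, 20, 8],
    "backordered":    [0, 1, 1, 0, 1, 0, 1, 0],   # target label (hypothetical)
})

X, y = df.drop(columns="backordered"), df["backordered"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The trained model would then be registered back to the catalog, along with its training and test data, so it can be governed and monitored like any other asset.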

How does a data fabric impact the bottom line?

With the newly implemented backorder forecast model that’s built upon a data fabric architecture, the data scientist has a more accurate view of inventory level trends over time and predictions for the future. Supply chain analysts can use this information to prevent out-of-stock situations, which increases overall revenue and improves customer loyalty. Ultimately, the data fabric architecture can help significantly reduce time to insights in any industry, not just the retail or supply chain space, by unifying fragmented data on a single platform in a governed manner.

Source: ibm.com

Thursday, 28 April 2022

Don’t underestimate the cloud’s full value

As the adoption of cloud computing grows, enterprises see the cloud as the path to modernization. But there is quite a bit of confusion about the actual value of the cloud, leading to suboptimal modernization journeys. Most people see the cloud as a location for their computing needs. While cloud service providers (CSPs) are an integral part of the modernization journey, the cloud journey doesn’t stop with picking a CSP. A more holistic understanding of the cloud will help reduce risks and help your organization maximize ROI.

Before joining IBM Consulting, I was an industry analyst focused on cloud computing. As an analyst, I regularly spoke to various stakeholders about their respective organizations’ modernization journeys. While I learned about many success stories, I also heard about challenges during suboptimal modernization efforts. Some organizations treated the cloud as a destination rather than exploiting its fullest potential. Others couldn’t move to the cloud due to data issues, a lack of local cloud regions or their need to adopt IoT/edge computing. A successful cloud journey requires understanding the cloud’s full value and using a proper framework for adoption.

While cloud service providers transform the economic model for infrastructure, the most significant advantage offered by the cloud is the programmability of infrastructure. While many enterprises associate programmability with self-service and on-demand resource provisioning and scalability, the real value goes much further. Unless you understand this advantage, any cloud adoption as a part of the modernization journey will be suboptimal.

A programmable infrastructure provides:

◉ Self-service provisioning and scaling of resources (the usual suspects)

◉ Programmatic hooks for application dependencies, helping modernize the applications with ease

◉ A way to transform infrastructure operations into code, leading to large-scale automation that further optimizes speed and ROI.

Let’s unpack that third point a bit further. Modern enterprises are using programmatic infrastructure with code-driven operations, DevOps, and modern architectures like microservices and APIs to create an assembly line for value creation. Such an approach frees organizations from undifferentiated heavy lifting and helps them focus on core customer value. With modern technologies like the cloud and a reliable partner to streamline the value creation process in an assembly line approach, enterprises can maximize the benefits of their modernization journey.

By treating Infrastructure as Code (IaC), you can achieve higher levels of automation. This, combined with the elastic infrastructure underneath, allows organizations to achieve the global scale they need to do business today. The code-driven operational process on top of a programmable infrastructure offers more value than the underlying infrastructure itself. IaC-driven automation reduces cost, increases speed and dramatically reduces operational problems by minimizing human-induced errors.
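As a rough, provider-agnostic illustration of what "infrastructure operations as code" means, the sketch below shows the declare, plan, apply loop that IaC tooling automates. The resource names and attributes are invented for the example; real tools such as Terraform or Ansible apply the same reconcile-to-desired-state idea against actual cloud APIs.

```python
# Conceptual IaC sketch: infrastructure is described as data (desired state),
# and a small "engine" computes the actions needed to reconcile the actual
# state with it. Not tied to any real cloud provider.
from typing import Dict, List, Tuple

desired: Dict[str, dict] = {
    "web-server-1": {"type": "vm", "size": "medium", "region": "eu-west"},
    "web-server-2": {"type": "vm", "size": "medium", "region": "eu-west"},
    "orders-db":    {"type": "database", "size": "large", "region": "eu-west"},
}

actual: Dict[str, dict] = {
    "web-server-1": {"type": "vm", "size": "small", "region": "eu-west"},
    "legacy-cache": {"type": "vm", "size": "small", "region": "eu-west"},
}

def plan(desired: Dict[str, dict], actual: Dict[str, dict]) -> List[Tuple[str, str, dict]]:
    """Compute create/update/delete actions, analogous to a 'plan' step."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, {}))
    return actions

for action, name, spec in plan(desired, actual):
    print(action, name, spec)
```

Because the desired state lives in code, every change is reviewable, repeatable and automatable, which is exactly where the speed and error-reduction benefits come from.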

Enterprises today have their applications deployed on cloud service providers. Some applications are nearing the end of life or have strict regulatory requirements to stay on-premises. Edge computing and IoT are fast gaining adoption among many enterprises. These trends shift our thinking about computing from a localized paradigm like CSPs and data centers to a more fluid paradigm of computing everywhere.

Everything from infrastructure to DevOps to applications is driven by automation, which allows organizations to scale efficiently. When this is married to the concept of a cloud fabric that spans public cloud providers, data centers and the edge, organizations can gain value at scale without worrying about how to manage different resources in different locations. This hybrid approach to the cloud can deliver value at the speed of a modern business.

Source: ibm.com

Tuesday, 26 April 2022

The Metaverse of Intellectual Property

Intellectual property is a broad term. Generally speaking, it is understood to refer to creations of the mind. Of course, this includes major media, where content such as the written or spoken word, movies, characters, songs, graphics, streaming media and more is the exclusive creation of its owner: the individual or individuals who created the content. Owners have the right (within reason) to do what they will with their content, including monetizing it. Any person or entity that monetizes content without the express written permission of the owner is said to be violating intellectual property law, and violations may be subject to severe economic penalties.

Of course, with today’s internet and its evolution, advances in blockchain technology and all-new forms of media, the meaning and application of intellectual property have changed dramatically. The latest challenge to intellectual property law is also developing quickly in the Metaverse and in how it is commonly understood.

Simply stated, the Metaverse is a rising set of new, technology-driven digital experiences that take place through devices powered by new cloud computing models, the internet and network connectivity. It is understood to be some form of virtual reality with a wide array of digital components. Individuals will be able to conduct meetings, learn, play games, engage in social interactions and more.

Of course, while the Metaverse is still evolving, there is no question that it will allow individuals to create their own spaces for interactions. It will also certainly allow individuals to create content, or use content, that will be protected by intellectual property law. As you can imagine, the Metaverse presents content creators and owners with a wide array of potential challenges when it comes to tracking their Intellectual Property. These challenges will have massive implications for media companies and the future of content creation broadly.

How You Can Protect Intellectual Property in a Digital Age

While the Metaverse may be the next frontier of experience and technology, there is good news for individuals who have business models based around content creation: This has been done before. Intellectual property can clearly be protected and is protected even in a digital age in which theft of digital assets occurs with staggering regularity.

Some common strategies that have been used to protect intellectual property include:

1. Copyrighting particularly sensitive or important materials.

2. Relying on appropriate contract clauses to ensure that there is no dispute about the ownership of content.

3. Deploying AI technology to identify violations and theft of digital intellectual property.

4. Increasing staff resources to identify and enforce IP law.

There’s even more good news evolving with the use of blockchain. The blockchain is often misunderstood as being exclusively the realm of cryptocurrency. In truth, the blockchain can be used in conjunction with smart contracts. Smart contracts allow digital property (such as NFTs) to be exchanged and tracked. They ensure that there is never a question over ownership and that commerce will be supported when digital assets are shared under certain and proper conditions.

It’s likely that the blockchain and smart contracts will evolve into an extremely helpful, even essential, technology for the protection of intellectual property. The fundamental characteristics of blockchain align well with what IP protection requires. As a distributed ledger, its records cannot be altered retroactively without the consensus of the network, which makes tampering with ownership histories extremely difficult. The blockchain, by design, can be used to ensure that there is no question about the ownership or rights of intellectual property.
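To see why a blockchain-style record settles ownership questions, consider this toy sketch of a hash-linked ledger of asset transfers. It illustrates the tamper-evidence property only; it is not a real blockchain network or smart-contract platform.

```python
# Toy hash-chained ownership ledger: each block's hash covers the previous
# block's hash, so rewriting any historical transfer breaks every later link.
import hashlib
import json

def block_hash(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_transfer(chain: list, asset: str, new_owner: str) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    body = {"asset": asset, "owner": new_owner, "prev": prev}
    chain.append({**body, "hash": block_hash(body)})

def verify(chain: list) -> bool:
    prev = "genesis"
    for block in chain:
        body = {"asset": block["asset"], "owner": block["owner"], "prev": block["prev"]}
        if block["prev"] != prev or block["hash"] != block_hash(body):
            return False
        prev = block["hash"]
    return True

ledger = []
add_transfer(ledger, "digital-artwork-001", "alice")
add_transfer(ledger, "digital-artwork-001", "bob")
print(verify(ledger))            # True: the ownership history checks out
ledger[0]["owner"] = "mallory"   # attempt to rewrite history
print(verify(ledger))            # False: tampering is detected
```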

Intellectual Property and The Metaverse

The Digital Millennium Copyright Act — passed by the United States in 1998 — was a vitally necessary update to copyright law that has proven to be a bedrock tool for protecting intellectual property in a digital environment. It allows DMCA “takedown” notices to be sent to online platforms hosting content that allegedly infringes someone’s rights. These notices provide an enforcement mechanism when one party is accused of violating the intellectual property rights of another. As such, they are an invaluable tool that can help protect a person’s or business’s assets.

Many questions remain about the Metaverse, but this much seems clear: as a digital space, assets that are used in, created in or copied into the Metaverse should be protected by the DMCA. Enforcement will unquestionably be a challenge, and a slew of new questions will arise. In theory, individuals who create intellectual property for or in the Metaverse should have their assets protected. But if you can’t find or experience an asset, how can you protect it?

The Role of Artificial Intelligence 

Artificial Intelligence has long been deployed by media companies and big tech, including Google, Netflix, Microsoft, Amazon and IBM. AI can help to enforce Intellectual Property law by identifying potential violations. Today it’s clear that there are companies that have entire business models dedicated to such technology and capabilities.

However, a key question remains: How can AI be deployed in the Metaverse to enforce the protection of Intellectual Property? How can it be used along with the blockchain?

Companies like IBM have used AI for everything from customer care to advanced cloud and network orchestration to cybersecurity. AI can also be targeted at finding intellectual property violations. For example, an AI algorithm can be set to scan for the unauthorized use of video, images or other digital assets. When a violation is found, legal notices can be sent to the appropriate parties demanding that the assets be taken down. The same AI can then be used to determine what sort of monetization the violator has engaged in, allowing the content creator to be informed and empowered to take action or seek compensation.
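As a hedged example of the kind of scanning such an algorithm might perform, the sketch below compares perceptual (average) hashes of two images: a match or near-match flags a likely re-use of the same asset even after resizing. This is a generic technique, not a description of IBM's detection pipeline.

```python
# Average-hash comparison: images that look alike produce nearly identical
# 64-bit fingerprints, so a small Hamming distance suggests re-use.
from PIL import Image, ImageDraw

def average_hash(img: Image.Image, size: int = 8) -> int:
    small = img.convert("L").resize((size, size))   # grayscale thumbnail
    pixels = list(small.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)  # 1 bit per pixel vs. average
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Build a sample "original" image and a rescaled copy standing in for a re-upload.
original = Image.new("RGB", (256, 256), "white")
ImageDraw.Draw(original).ellipse((40, 40, 200, 200), fill="black")
reupload = original.resize((128, 128))

distance = hamming(average_hash(original), average_hash(reupload))
print("Hamming distance:", distance)   # small distance -> likely the same asset
```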

All of this raises a fundamental question: Is your content safe in the Metaverse? How can you track your content? How can you protect your creations and business models, and what sort of legal protections will be in place to do so? Many of these questions remain unanswered as the Metaverse continues to be built. However, there is good news: even in a digital economy, intellectual property has survived as business models have changed. It is reasonable to expect that the same protections that have allowed intellectual property law to endure — including the use of AI — will carry over into the Metaverse.

Source: ibm.com

Sunday, 24 April 2022

IBM scientists give artificial neural networks a new look for neuromorphic computing

This non-von Neumann AI hardware breakthrough in neuromorphic computing could help create machines that recognize objects much as humans do, and solve hard mathematical problems for applications ranging from chip design to flight scheduling.

Human brains are extremely efficient at memorizing and learning things, thanks to how information is stored and processed by building blocks called synapses and neurons. Modern-day AI implements neural-network elements that emulate the biophysical properties of these building blocks.

Still, there are certain tasks that, while easy for humans, are computationally difficult for artificial networks, such as sensory perception, motor learning, or simply solving mathematical problems iteratively, where data streams are both continuous and sequential.


Take the task of recognizing a dynamically changing object. The brain has no problem recognizing a playful cat that morphs into different shapes or even hides behind a bush, showing just its tail. A modern-day artificial network, though, would work well only if trained with every possible transformation of the cat, which is not a trivial task.

Our team at IBM’s research lab in Zurich has developed a way to improve these recognition techniques by improving AI hardware. We’ve done it by creating a new form of artificial synapse using the well-established phase-change memory (PCM) technology. In a recent Nature Nanotechnology paper, we detail our use of a PCM memtransistive synapse that combines the features of memristors and transistors within the same low-power device — demonstrating a non-von Neumann, in-memory computing framework that enables powerful cognitive capabilities, such as short-term spike-timing-dependent plasticity and stochastic Hopfield neural networks for useful machine learning.

More compute room at the nano-junction


The field of artificial neural networks dates to the 1940s, when neurophysiologists portrayed biological computation and learning using bulky electrical circuits and generic models. While we have not significantly departed from the underlying fundamental ideas of what neurons and synapses do, what is different today is our capability to build neuro-synaptic circuits at the nanoscale, thanks to advances in lithography methods and the ability to train networks with ever-growing quantities of training data.

The general idea of building AI hardware today is to make synapses, or synaptic junctions as they are called, smaller and smaller so that more of them fit on a given space. They are meant to emulate the long-term plasticity of biological synapses: The synaptic weights are stationary in time — they change only during updates.

This is inspired by the famous notion attributed to psychologist Donald O. Hebb that “neurons that fire together wire together” through long-term changes in synaptic strengths. While this is true, modern neuroscience has taught us that there is much more going on at the synapses: neurons wire together with long- and short-term changes, as well as local and global changes in the synaptic connections. The ability to achieve the latter, we discovered, was the key to building the next generation of AI hardware.

Present-day hardware that can capture these complex plasticity rules and dynamics of biological synapses relies on elaborate transistor circuits, which are bulky and energy-inefficient for scalable AI processors. Memristive devices, on the other hand, are more efficient, but they have traditionally lacked the features needed to capture the different synaptic mechanisms.

Synaptic Efficacy and Phase Change Memtransistors: The top panel is a representation of the biophysical long (LTP) and short (STP) plasticity mechanisms which change the strength of synaptic connections. The middle panel is an emulation of these mechanisms using our phase change memtransistive synapse. The bottom panel illustrates how by mapping the synaptic efficacy into the electrical conductance of a device, a range of different synaptic processes can be mimicked.

PCM plasticity mimics mammalian synapses


We set out to create a memristive synapse that can express the different plasticity rules, such as long-term and short-term plasticity and their combinations, using commercial PCM — at the nanoscale. We achieved this by using the combination of non-volatility (from amorphous-crystalline phase transitions) and volatility (from electrostatic changes) in PCMs.

Phase change materials have been independently researched for memory and transistor applications, but their combination for neuromorphic computing has not been previously explored. Our demonstration shows how the non-volatility of the devices can enable long-term plasticity while the volatility provides the short-term plasticity and their combination allows for other mixed-plasticity computations, much like in the mammalian brain.
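The following toy model (our own simplification for illustration, not the model from the paper) shows how a single synaptic efficacy can combine a non-volatile long-term weight with a volatile short-term component that decays between spikes, mirroring how the device's non-volatility maps to long-term plasticity and its volatility to short-term plasticity.

```python
# Toy mixed-plasticity synapse: efficacy = long-term weight (changes only on
# explicit "programming" events) + short-term weight (facilitated by each
# spike, decaying exponentially with time).
import math

class MixedPlasticitySynapse:
    def __init__(self, w_long: float = 0.5, tau_short_ms: float = 20.0):
        self.w_long = w_long     # non-volatile component
        self.w_short = 0.0       # volatile component
        self.tau = tau_short_ms  # decay time constant of the short-term part

    def spike(self, dt_since_last_ms: float,
              facilitation: float = 0.2, long_term_update: float = 0.0) -> float:
        # Short-term component decays, then receives a facilitation kick.
        self.w_short *= math.exp(-dt_since_last_ms / self.tau)
        self.w_short += facilitation
        # Long-term component changes only when an explicit update is applied.
        self.w_long += long_term_update
        return self.efficacy()

    def efficacy(self) -> float:
        return self.w_long + self.w_short

syn = MixedPlasticitySynapse()
for dt in (5, 5, 5, 100):   # a short burst of spikes followed by a long pause
    print(round(syn.spike(dt), 3))  # efficacy rises during the burst, relaxes after
```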

We began testing our idea toward the end of 2019, starting by using the volatility property in the devices for a different application: compensating for the non-idealities in phase change computational memories. Our team fondly remembers chatting over coffee, discussing the first experimental results that would allow us to take the next step of building the synthetic synapses we originally imagined. With some learning and trial and error during the pandemic, by early 2021 we had our first set of results and things worked out the way we had imagined.

To test the utility of our device, we implemented a curated form of an algorithm for sequential learning that our co-author, Timoleon Moraitis, was developing to use spiking networks for learning in dynamically changing environments. From this emerged our implementation of a short-term spike-timing-dependent plasticity framework that allowed us to demonstrate a biologically inspired form of sequential learning on artificial hardware. Instead of the playful cat that we mentioned earlier, we showed how machine vision can recognize the moving frames of a boy and a girl.

Later, we expanded the concept to show the emulation of other biological processes for useful computations. Crucially, we showed how the homeostatic phenomena in the brain could provide a means to construct efficient artificial hardware for solving difficult combinatorial optimization problems. We achieved this by constructing stochastic Hopfield neural networks, where the noise mechanisms at the synaptic junction provide efficiency gains to computational algorithms.

Looking ahead to full-scale implementation


Our results are more exploratory than actual system-level demonstrations. In the latest paper, we propose a new synaptic concept that does more than a fixed synaptic weighting operation, as is the case in modern artificial neural networks. While we plan to take this approach further, we believe our existing proof-of-principle results already hold significant scientific interest for the broader areas of neuromorphic engineering — in computing and in understanding the brain better through more faithful emulations of it.

Our devices are based on a well-researched technology and are easy to fabricate and operate. The key challenge we see is the at-scale implementation, stringing together our computational primitive and other hardware blocks. Building a full-fledged computational core with our devices will require a rethinking of the design and implementation of peripheral circuitries.

Currently, these circuits are standard to conventional memristive computational hardware: our devices require additional control terminals and resources to function. There are some ideas in place, such as redefining the layouts, as well as restructuring the basic device design, which we are currently exploring.

Our current results show how relevant mixed-plasticity neural computations can prove to be in neuromorphic engineering. The demonstration of sequential learning can allow neural networks to recognize and classify objects more efficiently. This not only makes, for example, visual cognition more human-like but also provides significant savings on the expensive training processes.

Our illustration of a Hopfield neural network allows us to solve difficult optimization problems. We show an example of max cut, a graph partitioning problem with utility in applications such as chip design. Other applications include problems like flight scheduling, internet packet routing and more.
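For readers curious how a noisy Hopfield-style network tackles max cut, here is a small software sketch of the underlying idea: spins in {-1, +1} assign nodes to two groups, the energy penalizes edges whose endpoints share a sign, and stochastic asynchronous updates (plain pseudo-randomness here, standing in for the synaptic noise of the hardware) help the network settle into a low-energy state, which corresponds to a large cut. This illustrates the algorithmic concept only, not the PCM hardware implementation described in the paper.

```python
# Stochastic Hopfield-style search for max cut on a tiny example graph.
import math
import random

edges = {(0, 1): 1, (0, 2): 1, (1, 2): 1, (2, 3): 1, (3, 4): 1, (2, 4): 1}
n = 5  # two triangles sharing node 2; the best cut for this graph is 4

def cut_value(s):
    return sum(w for (i, j), w in edges.items() if s[i] != s[j])

def local_field(s, i):
    # Sum of neighbor spins weighted by edge weights (used for the energy change).
    h = 0
    for (a, b), w in edges.items():
        if a == i:
            h += w * s[b]
        elif b == i:
            h += w * s[a]
    return h

random.seed(1)
s = [random.choice([-1, 1]) for _ in range(n)]
T = 2.0                       # "noise" level, slowly annealed
best = cut_value(s)
for step in range(5000):
    i = random.randrange(n)
    dE = -2 * s[i] * local_field(s, i)            # energy change if spin i flips
    if dE <= 0 or random.random() < math.exp(-dE / T):
        s[i] = -s[i]                              # accept the (possibly uphill) flip
    T = max(0.05, T * 0.999)
    best = max(best, cut_value(s))
print("best cut found:", best)
```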

Source: ibm.com

Saturday, 23 April 2022

Meet the young innovator whose global team translates climate reports

When Sophia Kianni visited her parents’ home country of Iran several years ago, she was alarmed by the heavy smog that obscured an otherwise starry night sky. She was even more concerned when her relatives admitted they knew almost nothing about climate change, because few climate resources were available in their native Farsi.

In 2020, Kianni came up with an idea while in quarantine from Covid-19: enable more access to critical climate reports by making them available in more languages.

The Intergovernmental Panel on Climate Change warns that it’s do-or-die time for action on global warming limits. Recent IBM research reveals that a major driver of sustainability is on the rise: the demands of the everyday consumer. As more individuals awaken to the urgency of environmental sustainability, they push for corporations, governments and citizens of the planet to shift behaviors and support initiatives for systemic change.

Swelling underneath the tide of climate justice is the need for access to data. What happens when official climate reports aren’t available in your language? When critical climate data is siloed, fewer people can contribute to the solution.

Putting a sustainability strategy into action is a major theme at IBM’s Think 2022. Learn how to attend virtually.

Bringing data and people together to address climate justice: Climate Cardinals

As Bahamian human-environment geographer Dr. Adelle Thomas writes, “climate change has no borders — emissions contributed by one country or group have global consequences. Climate justice underscores the unfairness of countries and groups that have contributed the least to climate change being most at risk.”

Sophia Kianni’s desire to help her Iranian grandparents wake up to the urgency of the climate crisis drove her to launch what is now a global initiative to remove the silos of climate data. Her organization, Climate Cardinals, fills a gap in the climate movement: equitable access to information.

Right before graduating from high school, Ms. Kianni mobilized a global network that has now grown to over 8,000 volunteers in over 40 countries, tapping mostly students earning community service hours. Since then, Climate Cardinals has been coordinating the translation of key climate reports into over 100 languages. The popularity of the organization is fueled by digital tools like TikTok and Google Classroom, which raise awareness and bring volunteers on board.

Besides being the founder and executive director of the organization, Kianni represents the US as the youngest member on the inaugural United Nations Youth Advisory Group on Climate Change. She’s also been named VICE Media’s youngest Human of the Year, a National Geographic Young Explorer, and one of Teen Vogue’s 21 under 21.

To date, Climate Cardinals has translated 500,000 words of climate information from peer-reviewed journals and media reports. Additional translations of the executive summary of UNICEF’s Children’s Climate Risk Index (CCRI) are now available in Hausa, Portuguese, Somali, Swahili and Yoruba.

In a recent interview, Kianni talked about why her mission focused on language, and how climate change disproportionately affects people of color. “That’s part of the reason why I believe it is so important to be able to educate as many people as possible, and empower a diverse coalition, especially of young people, to learn about climate change so that we have a representative view on how we need to tackle this crisis, from people who firsthand have experience with its effects.”

Kianni will speak in one of several Innovation Talks augmenting IBM Think 2022 on May 9 – 11. These talks provoke creative thinking, inspiration, and education. Learn more and add Think Broadcast to your calendar.

Source: ibm.com

Thursday, 21 April 2022

Why central banks dislike cryptocurrencies

Cryptocurrencies, often depicted as an escape from fiat currency and legacy banking, have become a constant focus of bank and government activity. The most recent Executive Order from the U.S. President is just one example of governments carefully considering how to deal with cryptocurrencies. With all the news, it’s easy to lose sight of the fundamentals of monetary policy and currency, and why cryptocurrencies (or “cryptos”) are not a likely replacement for fiat currencies.

In 2021 the Bank for International Settlements (which is owned by, and provides financial services to, central banks) commissioned an academic research paper entitled “Distrust or speculation? The socioeconomic drivers of U.S. cryptocurrency investments.” The research found that crypto investors were more likely to be digital banking users, male, young and educated. More surprisingly, it found that cryptocurrency investors and users were not motivated by a distrust of traditional banking and payment services. This touted raison d’être for Bitcoin and other early cryptos seems to be a myth.

Crypto’s limits as a form of payment

Of course, the popularity of cryptos has partly been driven by high valuations and volatility, attracting attention from traders, the media and the public. But this volatility makes cryptos less than ideal for payments. Companies like Tesla and Amazon, after initially stating they would accept some cryptos as currency, have since backtracked. Why would they want to accept a currency whose value can fluctuate so dramatically on a daily basis?

Why, indeed, would anyone? Will cryptos find their way into mainstream payment systems, or will they remain a speculative investment? Much of the answer rests on an understanding of how governments, regulators, and central banks act to protect their economies and citizenry. Why “protect”? Let’s explore that with a brief look at the role of central banks.

How unstable crypto prices challenge central banks

Key roles of any country’s central bank are to:

1. Govern the safety, soundness and stability of the economy and its systems (the authority for this varies by country, but for the purposes of this piece, it’s a sufficient generalization)

2. Help to ensure the country is a safe place in which to invest for the longer term by controlling inflation

The most direct method of controlling inflation and the relative value of a currency is by setting the interest rate provided to commercial banks for their deposits and borrowings from the central bank. This largely determines the interest provided by commercial banks to their depositors and borrowers, which in turn has a direct effect on economic behaviors such as spending and saving.

Some central banks set an overt inflation target: the Bank of Canada, for example, has set one since 1991, and it resets that target with the federal government every five years. Some governments and central banks tie their economy to another economy by setting a fixed exchange rate between their fiat currency and another, such as USD or EUR. Either way, the goal is to control inflation by managing the value of the currency. A central bank’s power to control a fiat currency is largely derived from the sovereignty of the country in which it operates, with a taxable population that supports the economic and banking systems and governing structures.

Now if you, as a central bank, don’t control the value of the currency used by your population, you can no longer control inflation or the safety, stability and soundness of your economic and financial systems. Cryptos are not directly affected by any particular country’s interest rates, at least not more than myriad other factors that send their values swirling.

For a central bank, if the actors involved in valuing and distributing the currency are beyond your control, then you’ve essentially ceded control of monetary policy to those actors and their activities. The system will become susceptible to rapid inflation or deflation. The same unit of cryptocurrency may buy a smartphone today and a sandwich tomorrow. Individuals and businesses will begin to distrust the system, and the economy will suffer.

The potential of centrally backed stablecoins

This is not to say that the technology used by cryptos cannot also be used by central banks to provide, regulate or monitor stable digital currencies for the populations and economies they protect. Central banks and governments, including the U.S. Federal Reserve, are currently exploring central bank digital currencies (CBDCs). Some have worked for several years with the cooperation of commercial banks. The topic is now prominent for many legislators and bureaucrats involved with financial systems. In January 2021, the Office of the Comptroller of the Currency in the United States released regulatory guidance around the use of blockchain technologies in financial systems, and some central banks have already established proof-of-concept activities. The most recent U.S. Presidential Executive Order, and the two bills recently introduced in the U.S. House of Representatives and the U.S. Senate, also demonstrate the concern governments have about digital assets and currencies, and they attempt to standardize definitions and protections. Many believe it’s only a matter of time before currency becomes purely digital.

CBDCs would behave differently from the most popular existing cryptos: if they are directly tied to a fiat currency, like a “stablecoin,” their value remains as stable as the fiat currency. If they can easily be traced from user to user, they are no longer viable for money laundering, underground economic behaviors, or the financing of other illicit activities. While the usage of cryptos for illegal purposes is perhaps overstated (usage of cash for criminal activities is still more prevalent), there is a considerable appeal for central banks and governments in luring away legitimate crypto users and devaluing less traceable cryptos.

When you take away the pundit opinions, the recent Executive Order from the U.S. President is really asking for some thoughtful consideration of how digital assets should be regulated. With respect to cryptos, governments are in a tricky position: since so many people have invested (some of them their life savings) in cryptos or other digital assets, governments now have to consider how to protect their citizenry. If governments do nothing to regulate the cryptos market, and they instruct or allow central banks to issue their own CBDCs, the resulting impact on cryptos could be catastrophic for some parties, and could have an impact on the economy as a whole. That impact could sway the electorate to make certain decisions about a government they don’t see protecting them (even if from themselves). If the government does regulate cryptos with a heavy hand, and the valuations subsequently decline, the impact to individuals and the economy could be similarly catastrophic and electorate-swaying.

Notwithstanding the regulatory issues surrounding cryptos, central banks could gain other benefits from tracking currency flows and usage. Certainly, it could help their objectives of monitoring and influencing economic growth.

How will this affect the current crop of several thousand cryptocurrencies? Only time will tell. If you like speculative, risky investments, cryptos may be for you, but choose carefully. The day may come when the actions of those with the mandate to protect their sovereign economies and markets will render some cryptos irrelevant or of limited value, and only good as relics for hobbyists and historians.

The IBM Payments Center™ helps financial institutions and businesses modernize their payments platforms and capabilities to reduce their infrastructure costs. Find out what they can do for you.

Source: ibm.com

Tuesday, 19 April 2022

Eagle’s quantum performance progress

IBM Quantum announced Eagle, a 127-qubit quantum processor based on the transmon superconducting qubit architecture. The IBM Quantum team adapted advanced semiconductor signal delivery and packaging into a technology node to develop superconducting quantum processors.

Quantum computing hardware technology is making solid progress each year. And more researchers and developers are now programming on cloud-based processors, running more complex quantum programs. In this changing environment, IBM Quantum releases new processors as soon as they pass through a screening process so that users can run experiments on them. These processors are created as part of an agile hardware development methodology, where multiple teams focus on pushing different aspects of processor performance — scale, quality and speed — in parallel on different experimental processors, and where lessons learned are combined in later revisions.

Today, IBM Quantum’s systems include our 27-qubit Falcon processors, our 65-qubit Hummingbird processors, and now our 127-qubit Eagle processors. At a high level, we benchmark our three key performance attributes on these devices with three metrics:

◉ We measure scale in number of qubits,

◉ Quality in Quantum Volume,

◉ And speed in CLOPS, or circuit layer operations per second.

The current suite of processors has scales of up to Eagle’s 127 qubits, Quantum Volumes of up to 128 on the Falcon r4 and r5 processors, and can run as many as 2,400 circuit layer operations per second on the Falcon r5 processor. Scientists have published over 1,400 papers based on running code on these processors remotely over the cloud.

During the five months since the initial Eagle release, the IBM Quantum team has had a chance to analyze Eagle’s performance, compare it to the performance of previous processors such as the IBM Quantum Falcon, and integrate lessons learned into further revisions. At the most recent APS March Meeting, we presented an in-depth look into the technologies that allowed the team to scale to 127 qubits, comparisons between Eagle and Falcon, and benchmarks of the most recent Eagle revision.

Multi-layer wiring & through-silicon vias

As with all of our processors, Eagle relies on an architecture of superconducting transmon qubits. These qubits are anharmonic oscillators, with anharmonicity introduced by the presence of Josephson junctions, or gaps in the superconducting circuits acting as non-linear inductors. We implement single-qubit gates with specially tuned microwave pulses, which introduce superposition and change the phase of the qubit’s quantum state. We implement two-qubit entangling gates using tunable microwave pulses called cross-resonance gates, which irradiate the control qubit at the transition frequency of the target qubit. Performing these microwave-activated operations requires that we be able to deliver microwave signals in a high-fidelity, low-crosstalk fashion.
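As a reminder of how a microwave pulse implements a single-qubit gate, here is a textbook-style toy simulation of a resonantly driven two-level system in the rotating frame. The 40 MHz Rabi rate is an assumed, illustrative number unrelated to Eagle's actual calibration; driving for half a Rabi period (a "pi pulse") takes |0⟩ fully to |1⟩.

```python
# Rotating-frame evolution of a driven two-level system: H = (Omega/2) * sigma_x.
import numpy as np
from scipy.linalg import expm

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
omega = 2 * np.pi * 40e6           # assumed 40 MHz Rabi rate (illustrative)
ket0 = np.array([1, 0], dtype=complex)

for t_ns in (0.0, 6.25, 12.5, 18.75, 25.0):
    t = t_ns * 1e-9
    U = expm(-1j * (omega / 2) * sigma_x * t)   # unitary generated by the drive
    p1 = abs((U @ ket0)[1]) ** 2                # population of |1> after the pulse
    print(f"t = {t_ns:6.2f} ns  P(|1>) = {p1:.3f}")
# P(|1>) follows sin^2(Omega*t/2): 0 at t=0, 1 at the 12.5 ns pi pulse, 0 at 25 ns.
```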

Eagle’s core technology advance is the use of what we call our third-generation signal delivery scheme. Our first generation of processors comprised a single layer of metal atop the qubit wafer and a printed circuit board. While this scheme works well for ring topologies — those where the qubits are arranged in a ring — it breaks down if there are any qubits in the center of the ring, because we have no way to deliver microwave signals to them.

Our second generation of packaging schemes featured two separate chips, each with a layer of patterned metal, joined by superconducting bump bonds: a qubit wafer atop an interposer wafer. This scheme lets us bring microwave signals to the center of the qubit chip, “breaking the plane,” and was the cornerstone of the Falcon and Hummingbird processors. However, it required that all qubit control and readout lines were routed to the periphery of the chip, and metal layers were not isolated from each other.

A comparison of three generations of chip packaging.

For Eagle, as before, there is a qubit wafer bump-bonded to an interposer wafer. However, we have now added multi-layer wiring (MLW) inside the interposer. We route our control and readout signals in this additional layer, which is well-isolated from the quantum device itself and lets us deliver signals deep into large chips.  

The MLW level consists of three metal layers, with patterned, planarized dielectric between each layer and short connections called vias linking the metal levels. Together these levels let us make transmission lines that are fully via-fenced from each other and isolated from the quantum device. We also add through-substrate vias to the qubit and interposer chips.

In the qubit chips, these let us suppress box modes, which is sort of like the microwave version of a glass vibrating when you sing a certain pitch inside of it. They also let us build via fences — dense walls of vias — between qubits and other sensitive microwave structures. If they are much less than a wavelength apart, these vias act like a Faraday cage, preventing capacitive crosstalk between circuit elements. In the interposer chip, these vias play all the same roles, while also allowing us to get microwave signals up and down from the MLW to wherever we want inside of a chip.
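A quick back-of-the-envelope check of the "much less than a wavelength" condition, using assumed numbers (a 5 GHz signal in silicon and a 100-micron via spacing, neither taken from the Eagle design): the wavelength works out to roughly 1.8 cm, a couple of hundred times larger than the via spacing.

```python
# Rough wavelength estimate for a microwave signal inside silicon.
c = 3.0e8             # speed of light, m/s
f = 5.0e9             # assumed signal frequency, 5 GHz
eps_r = 11.7          # relative permittivity of silicon
wavelength = c / (f * eps_r ** 0.5)
via_spacing = 100e-6  # assumed 100-micron via spacing (illustrative)
print(f"wavelength in silicon ~ {wavelength * 100:.1f} cm")
print(f"spacing / wavelength  ~ {via_spacing / wavelength:.4f}")
```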

Reduction in crosstalk


“Classical crosstalk” is an important source of errors for superconducting quantum computers. Our chips have dense arrays of microwave lines and circuit elements that can transmit and receive microwave energy.  If any of these lines transmit energy to each other, a microwave tone that we apply and intend to go to one qubit will go to another.

However, for qubits coupled by busses to allow two-qubit gates, the desired quantum coupling from the bus can look similar to the undesirable effects of classical crosstalk. We use a method called Hamiltonian tomography to estimate the effect of the coupling bus and subtract it from the total, leaving only the effect of the classical crosstalk.

By knowing the degree of this classical crosstalk, for coupled qubits (that is, linked neighbors) we can then even use a second microwave “cancellation” tone on the target qubit during two-qubit gates to remove some of the effects of classical crosstalk. In other circumstances it is impractical to compensate for this and classical crosstalk increases the error rate of our processors — often requiring a new processor revision.

MLW and TSVs provide Eagle with a natural shielding against crosstalk. We can see in the figure, below, that despite having many more qubits and a more complicated signal delivery scheme, a much smaller fraction of qubits in Eagle have high crosstalk than they do in Falcon, and the worst case crosstalk is substantially smaller. These improvements were expected. In Falcon, without TSVs, wires run through the chip and can easily transfer energy to the qubits. With Eagle, each signal is surrounded by metal between the ground plane of the qubit chip and the ground plane of the top of the MLW level.

The amount of classical crosstalk on Falcon versus Eagle. Note that this is a quantile plot. The median value is at x=0.

Among the qubits with the worst crosstalk, we see that the same qubits have the worst crosstalk on two different Eagle chips. This is exciting, as it suggests the worst crosstalk pairs are due to a design problem we haven’t solved yet — one we can correct in a future generation.

While we’re excited to make strides in handling crosstalk, there are challenges yet for the team to tackle. Eagle has 16,000 pairs of qubits. Finding those pairs with crosstalk of >1% would take a long time, about 11 days, and crosstalk can be non-local, and thus may arise between any of these qubit pairs. We can speed this up by running crosstalk measurements on multiple qubits at the same time. But ironically, that measurement can get corrupted if we choose to run in parallel on two pairs that have bad crosstalk with one another. We’re still learning how to take these datasets in a reasonable amount of time and curate the enormous amount of resulting measurements so we can better understand Eagle and prepare ourselves for our future, even larger devices.
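The quoted figures are easy to sanity-check: 127 qubits give roughly 16,000 ordered control/target pairs, and at an assumed rate of about one minute of measurement per pair (our assumption, not a published number), a full serial sweep takes about 11 days.

```python
# Back-of-the-envelope arithmetic behind the "16,000 pairs / about 11 days" figures.
n_qubits = 127
pairs = n_qubits * (n_qubits - 1)            # ordered control/target pairs
minutes_per_pair = 1.0                       # assumed measurement time per pair
days = pairs * minutes_per_pair / (60 * 24)
print(pairs, "pairs ->", round(days, 1), "days")   # 16002 pairs -> 11.1 days
```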

Measurements


Although we measure composite “quality” with Quantum Volume, there are many finer-grain metrics we use to characterize individual aspects of our device performance and guide our development, which we also track in Eagle.

Superconducting quantum processors face a variety of errors, including T1 (which measures relaxation from the |1⟩ to the |0⟩ state), Tϕ (which measures dephasing — the randomization of the qubit phase) and T2 (the parallel combination of T1 and Tϕ). These errors are caused by imperfect control pulses, spurious inter-qubit couplings, imperfect qubit state measurements and more. They are unavoidable, but the threshold theorem states that we can build an error-corrected quantum computer provided we can decrease hardware error rates below a certain constant threshold. Therefore, a core mission of our hardware team is to improve the coherence times and fidelity of our hardware, while scaling to large enough processors so that we can perform meaningful error-corrected computations.

A core mission of our team is to improve the coherence times and fidelity of our hardware, while scaling to large enough processors to perform meaningful error-corrected computations.
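As a small illustration of how one of these finer-grain metrics is obtained, the sketch below fits an exponential decay to synthetic T1 data. The data is generated with an assumed T1 of 300 microseconds purely for the example; it is not an Eagle measurement.

```python
# Extracting T1 by fitting exp(-t/T1) to excited-state population vs. delay.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, t1):
    return np.exp(-t / t1)

rng = np.random.default_rng(0)
delays_us = np.linspace(0, 1200, 25)        # delay times in microseconds
true_t1 = 300.0                             # assumed "true" T1 for the synthetic data
p1 = decay(delays_us, true_t1) + rng.normal(0, 0.02, delays_us.size)

(t1_fit,), _ = curve_fit(decay, delays_us, p1, p0=[100.0])
print(f"fitted T1 ~ {t1_fit:.0f} us")
```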

Our initial measurements of Eagle’s T1 lagged behind the T1s of our Falcon r5 processors. Therefore, our two-qubit gate fidelities were also lower on Eagle than on Falcon. However, at the same time we were designing and building Eagle, we learned how to make a higher-coherence Falcon processor: Falcon r8. 

A wonderful example of the advantages of working on scale and quality in parallel then arose: we were able to incorporate these changes into Eagle r3, and we now see the same coherence times in Eagle as in Falcon r8. We expect improvements in two-qubit gate fidelities to follow. One continuing focus of our studies into Eagle is performance metrics for readout. Two parameters govern readout: χ, the strength of the coupling of the qubit to the readout resonator, and κ, how quickly photons exit the resonator.

There are tradeoffs involved in selecting each of these parameters, and so the most important goal in our designs is to be able to accurately pick them, and make sure they have a small spread across the device. At present, Eagle is consistently and systematically undershooting Falcon devices on χ, as shown in the figure below — though we think we have a solution for this, planned for future revisions. Additionally, we are seeing a larger spread in κ in Eagle than in Falcon, where the highest-κ qubits are higher, and lowest-κ qubits are lower. We have identified that a mismatch between the frequency of the Purcell filter and the readout resonator frequency may be at play — and our hardware team is at work making improvements on this front, as well.

1) A comparison of kappa and chi values for a Falcon (Kolkata) and Eagle (Washington) chip.

2) A comparison of kappa and chi values for a Falcon (Kolkata) and Eagle (Washington) chip.

We see the first round of metrics as a grand success for this new processor. We’ve nearly doubled the size of our processors while reducing crosstalk and improving coherence time, and readout is working well, with remaining improvements that we understand. Eagle also has improved measurement fidelity over the latest Falcons, with the caveat that the amount of time that measurement takes differs between the processors.

Outlook


Eagle demonstrates the power of applying the principles of agile development to research — in our first iteration of the device, we nearly doubled the processor scale, and made strides toward improved quality thanks to decreased crosstalk. Of course, we’re only just starting to tune the design of this processor. We expect that in upcoming revisions we will see further improvements in quality by targeting readout and frequency collisions.

All the while, we’re making strides advancing quantum computing overall. We’ve begun to measure coherence times of over 400 microseconds on our highest-performing processors — and continue to push toward some of our lowest-error two-qubit gates yet.

We’ve begun to measure coherence times of over 400 microseconds on our highest-performing processors.

IBM Quantum is committed to following our roadmap to bring a 1,121-qubit processor online by 2023, to continuing to deliver cutting-edge quantum research, and to pushing forward on scale, quality and speed to deliver the best superconducting quantum processors available. Initial measurements of Eagle have demonstrated that we’re right on track.

Source: ibm.com

Sunday, 17 April 2022

IBM continues advancing disease progression modeling and biomarkers research using the latest in AI

New research by IBM and JDRF published in Nature Communications advances AI’s ability to better predict the onset of Type 1 diabetes.

Type 1 diabetes (T1D) is an autoimmune disease that can strike both children and adults. It can leave those afflicted with difficult, life-long disease management issues and potentially devastating long-term complications such as kidney failure, heart attack, stroke, blindness, and amputation.

There is no prevention or cure for this disease, and rates of T1D have risen steadily over recent decades, making research on prevention and early detection increasingly critical.

Last year, IBM Research highlighted previous related research conducted in collaboration with JDRF and five academic research sites which form the T1DI Study Group: in the US (DAISY, DEW-IT), Sweden (DiPiS), Finland (DIPP), and Germany (BABYDIAB/BABYDIET). That work advanced insights into development of biomarkers associated with the risk of T1D onset in children and ultimately found that the number of islet autoantibodies present at seroconversion, the earliest point in autoimmunity development, can reliably predict risk of T1D onset in young children for up to 10 to 15 years into the future.

Now, IBM Research has reached another important milestone in this field of research. This week, Nature Communications published new research by the T1DI Study Group, which shows that the progression of Type 1 diabetes from the appearance of islet autoantibodies to symptomatic disease is predicted by distinct autoimmune trajectories. In this work, we added a unique data visualization tool – DPVis – to the AI and machine-learning tools IBM Research has been developing for disease progression modeling (DPM Tools). This has allowed us to unlock entirely new insights from the study data that may ultimately help refine how we understand the impact of islet autoantibodies on the development of T1D, thereby improving our ability to predict the onset of the disease.

These new tools allowed us to unlock entirely new insights from the study data that may ultimately help refine how we understand the impact of islet autoantibodies on the development of T1D.

As we previously showed, the presence of multiple islet autoantibodies at seroconversion increases the risk of T1D, but they may not occur consistently over time — and a patient may have different combinations of antibodies at different points in time. Our previous research showed that the implications of these changes were unclear, so we set out to address this by analyzing the complex patterns of antibodies that occur over time instead of at a single point in time. In doing so, we identified three distinct trajectories, or “pathways,” associated with varying degrees of risk, each of which comprised multiple distinct states.

The combination of AI and data visualizations made it possible to put researchers in the loop for this large-scale, long-term collaboration by providing a common framework for our evolving understanding of how biomarkers influence a patient’s journey toward disease onset.

This research could eventually make it easier to identify at-risk children whose families can learn about the symptoms of T1D, allowing early diagnosis. Unfortunately, many today are not diagnosed until they have progressed to diabetic ketoacidosis, a life-threatening condition with potential long-term negative health effects. In addition, at-risk children can participate in clinical trials focused on delaying, and possibly preventing, the onset of T1D.

Broader efforts on disease progression modelling and biomarker discovery


Our work on T1D is just one part of a broader mission at IBM Research to develop AI technologies such as the DPM Tools to advance scientific discoveries in healthcare and life sciences. In addition to JDRF, we collaborated with other foundations that brought deep commitment and scientific expertise, notably the CHDI Foundation for Huntington’s disease.

Symptoms of Huntington’s typically begin to manifest between ages 30 and 50 and worsen over time, ultimately resulting in severe disability. While there is currently no treatment available to slow the progression of the disease, there are medications that are used to treat specific symptoms. Unfortunately, most have side effects that can have a negative impact on quality of life for patients with Huntington’s.

CHDI and IBM have been engaged in collaborative research along with other academic institutions for several years, addressing multiple research questions that have spanned disease progression modeling, brain imaging, brain modeling, and molecular modeling.

Most recently, this collaboration produced another important paper, describing nine disease states of clinical relevance discovered using our suite of DPM Tools. The ability to derive characteristics of disease states and probabilities of progression enabled by models like these could accelerate drug development through the discovery of novel biomarkers and improved clinical trial design and participant selection.

This work was published in Movement Disorders, the official journal of the International Parkinson and Movement Disorder Society, and the significance of our work was highlighted in a Nature Reviews Neurology “In Brief” section. We also carried out similar work on Parkinson’s disease in collaboration with the team at the Michael J. Fox Foundation, which was published in The Lancet Digital Health.

While the specifics vary, these three conditions all share certain characteristics, specifically a profound, life-long impact on the lives of patients and their families and the fact that they are all highly complex conditions, with progression pathways that are difficult to assess, characterize and predict.

Looking to what’s ahead


Modern science is a team effort, but nowhere is this truer than in healthcare, where breakthroughs in understanding disease and developing new treatments require collaboration among research teams across numerous disciplines.

IBM has been actively convening, coordinating, or participating in such inter-disciplinary teams, resulting in clinically important findings and high-impact journal publications. The three projects highlighted here stand out as exemplars of our approach to scientific discovery through collaboration with communities of discovery. Through this work, we have established an ecosystem of reusable methodologies, models and datasets that is already being applied to study other pathologies and that will be used to broaden the scope of our work in the future.

Source: ibm.com

Saturday, 16 April 2022

What asset-intensive industries can gain using Enterprise Asset Management

IBM Exam Study, IBM Learning, IBM Career, IBM Jobs, IBM Skills

Not that long ago, asset-intensive organizations took a strictly “pen and paper” approach to maintenance checks and inspections of physical assets. Inspectors walked along an automobile assembly line, manually taking notes in an equipment maintenance log. Teams of engineers performed close-up inspections of bridges, hoisting workers high in the air or sending trained divers below the water level. Risks, costs and error rates were high.

Enter the era of the smart factory floor and smart field operations. Monitoring and optimizing assets requires a new approach to maintenance, repair and replacement decisions. Work orders are now digitized and can be generated as part of scheduled maintenance, thanks to connected devices, machinery and production systems.

But challenges remain. How do enterprise-level organizations manage millions of smart, connected assets that continuously collect and share vast mountains of data? And what is the value of managing that data better? The answer lies with migrating from siloed legacy systems to a holistic system that brings information together for greater visibility across operations.

This is where Enterprise Asset Management (EAM) comes in. EAM is a combination of software, systems and services used to maintain and control operational assets and equipment. The aim is to optimize the quality and utilization of assets throughout their lifecycle, increase productive uptime and reduce operational costs.

A March 2022 IDC report detailed the business value of IBM Maximo®, one of the most comprehensive and widely adopted enterprise asset management solution sets on the market. Interviewed companies realized an average annual benefit of $14.6 million from Maximo by:

◉ Improving asset management, avoiding unnecessary operational costs, and increasing overall EAM efficiency

◉ Improving the productivity of asset management and field workforce teams using best available technology

◉ Enabling the shift from legacy/manual processes to more streamlined operations via automation and other features

◉ Supporting business needs by minimizing unplanned downtime and avoiding disruptive events and asset failure, while improving end-user productivity and contributing to better business results

◉ Supporting business transformation from scheduled maintenance to condition-based maintenance to predictive maintenance

EAM on the smart factory floor

In a smart factory setting, EAM is an integrated endeavor: shop floor data, AI and IoT come together to reduce downtime by 50%, cut breakdowns by 70% and lower overall maintenance costs by 25%.
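
As a simplified, hypothetical illustration of the condition-based logic behind such gains, and not how IBM Maximo works internally, the sketch below learns what normal sensor telemetry looks like and drafts a work order when a new reading falls outside it:

```python
# A minimal, hypothetical sketch of condition-based maintenance, not the IBM
# Maximo implementation: learn normal telemetry, then flag anomalous readings.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic history for one machine: vibration (mm/s) and temperature (degrees C).
history = np.column_stack([
    rng.normal(2.0, 0.2, 1000),   # vibration under normal operation
    rng.normal(65.0, 3.0, 1000),  # temperature under normal operation
])

# Learn the envelope of normal behavior from historical readings.
detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

def check_reading(asset_id: str, vibration: float, temperature: float):
    """Return a draft work order dict if the reading looks anomalous, else None."""
    if detector.predict([[vibration, temperature]])[0] == -1:  # -1 means outlier
        return {"asset_id": asset_id, "priority": "high",
                "description": f"Anomalous telemetry: vib={vibration} mm/s, temp={temperature} C"}
    return None

print(check_reading("PRESS-017", 2.1, 66.0))  # typical reading -> None
print(check_reading("PRESS-017", 4.8, 92.0))  # abnormal reading -> draft work order
```

In a production EAM system, a flagged reading would feed the existing work-order workflow rather than print to a console.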

Benefits like these can be seen in action at Toyota’s Indiana assembly plant, where a new car rolls off the line every minute. Each step in the vehicle’s assembly must be flawless, making zero downtime and zero defects business critical.

See how Toyota is using IBM Maximo Health and Predict to create a smarter, more digital factory.

Smarter infrastructure for bridges, tunnels and railways

In the U.S. alone there are more than 600,000 bridges. One in three of these bridges is crumbling and in need of repair due to structural cracks, buckled or bent steel, rust, corrosion, displacement or stress.

Sund & Bælt, headquartered in Copenhagen, Denmark, owns and operates some of the largest infrastructures in the world, including the Great Belt Fixed Link, an 11-mile bridge that connects the Danish islands of Zealand and Funen. To inspect bridges, the company often hired mountaineers to scale the sides and take photographs for examination. This kind of inspection could take a month, and the process had to be repeated frequently.

Seeking a next-generation EAM approach, Sund & Bælt collaborated with IBM to create an AI-powered IoT solution, IBM Maximo for Civil Infrastructure, that uses sensors and algorithms to help prolong the lifespan of aging bridges, tunnels, highways and railways. By automating more of its inspection work, the company is on track to increase productivity by 14–25% and reduce time-to-repair by over 30%.
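
The actual IBM Maximo for Civil Infrastructure pipeline is far more sophisticated, but a toy sketch conveys the idea of automated screening: score each inspection photo for crack-like texture so that human inspectors focus only on the riskiest spans (the threshold and file names below are hypothetical):

```python
# A toy, hypothetical sketch of automated defect screening on inspection photos;
# it is not the IBM Maximo for Civil Infrastructure algorithm.
import cv2  # third-party package: pip install opencv-python
import numpy as np

def crack_score(image_path: str) -> float:
    """Rough proxy for surface cracking: fraction of edge pixels in the photo."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    blurred = cv2.GaussianBlur(img, (5, 5), 0)  # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)         # highlight crack-like edges
    return float(np.count_nonzero(edges)) / edges.size

# Hypothetical usage: flag photos whose edge density exceeds a tuned threshold.
# if crack_score("span_42.jpg") > 0.08:
#     print("Flag span 42 for manual inspection")
```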

These anticipated benefits at Sund & Bælt mirror the strong business value of IBM Maximo shown in the IDC report, which projects that the average organization will realize an average annual benefit of $1.3 million per 100 maintenance workers, resulting in an average five-year ROI of 450%.

Source: ibm.com

Thursday, 14 April 2022

How to prioritize data strategy investments as a CDO

IBM Chief Data Officer (CDO), IBM, IBM Exam, IBM Exam Prep, IBM Exam Preparation, IBM Learning, IBM Career, IBM Skills, IBM Jobs, IBM CDO, IBM Preparation

My first task as a Chief Data Officer (CDO) is to implement a data strategy. Over the past 15 years, I’ve learned that an effective data strategy enables the enterprise’s business strategy and is critical to elevate the role of a CDO from the backroom to the boardroom.

Understand your strategic drivers

A company’s business strategy is its strategic vision to achieve its business goals. Data that can be managed, protected, and monetized effectively will provide insights into how to achieve those goals. A CDO works in collaboration with senior executives to steer a business to its strategic vision through a data strategy.

Strategy environments contain complex moving parts, points of view, and competing needs, all working toward three goals:

◉ Growing the top line by accelerating revenue growth

◉ Expanding the bottom line by making operations more efficient

◉ Mitigating risk

A CDO’s priority is not just to learn the strategic needs of the business and senior leadership, but also to implement a data strategy that helps leaders achieve their goals faster and embrace data as a competitive advantage. When prioritization of these goals is decided and agreed upon by all, the enterprise can more easily achieve true alignment, resulting in a collaborative, data-driven environment.

Strategic alignment also ensures that competing day-to-day responsibilities will not challenge the CDO role. Quick wins and firefighting are part of the job, but only when they are in service of an enterprise-wide supported strategy will a CDO have a comprehensive impact on a business.

The evolution of IBM’s Global Chief Data Office (GCDO) strategy

When I joined IBM in 2016, our business strategy centered on hybrid cloud and AI. As a result, I could align and evolve our data strategy with that focus going forward. For example, how do we grow revenue in AI if many leaders don’t fully understand what an AI enterprise looks like?

At IBM, we embarked on a data strategy to transform the enterprise into a data-first business, with AI infused into every key business process. Our data insights sharpened our definition of what AI meant to an enterprise, which also fed directly back into our business strategy. Thus IBM itself served as client zero and became a showcase for our solutions in the data and AI space.

In terms of implementing that strategy, we set several key pieces in place, such as:

◉ A hybrid distributed cloud and AI platform

◉ A robust understanding of our DataOps pipeline

◉ Governance with a focus on transparency to instill trust

◉ A data-literate culture

All these pieces worked together to set us up for a successful strategy pivot in 2021.

IBM’s data strategy aligns to revenue growth in 2021

Focus

From 2019 to 2021, our GCDO successfully aligned our data strategy to top-line growth with a strong focus on revenue. Sharpening our focus more than doubled our contributions to revenue growth for our product sales and consulting teams over the last three years. This year, 2022, we are on track to increase our contribution by 150%. These additional revenues accrue to all our major brands and channels: hardware, software, business services and ecosystem.

Align

Aligning our data strategy to support revenue and profit was a smooth transition because our central GCDO acts as an extension of all of IBM’s lines of business and functions. We have assigned data officers throughout all major business units, and we meet with them regularly to ensure strategic alignment.

Discover

Another part of our pivot was an education and mindset shift toward design thinking. We worked directly with the people involved in the end-to-end process, an often underused step in transformation. Design thinking surfaced data-related pain points and uncovered new opportunities whose benefits rippled out to teams across the enterprise.

2022 and the future of IBM data strategy investments 

Our future data strategy will maintain its foundation of top-line revenue focus. Looking forward, we are excited to also focus on harnessing the power of a data fabric, improving user experience, and tapping deeper into our ecosystem of partnerships.

Data fabric and user experience

A data fabric architecture is the next step in the evolution of normalizing data across an enterprise. Its untapped potential provides an exciting opportunity to improve our own data efficiencies and expand our strategy.

A data fabric is an architectural approach that automates data discovery and data integration, streamlines data access and ensures compliance with data policies regardless of where the data resides. A data fabric leverages AI to continuously learn patterns in how data is transformed and used, and uses that understanding to automate data pipelines, make finding data easier, and automatically enforce governance and compliance. By doing so, the data fabric significantly improves the productivity of data engineering tasks, accelerates time-to-value for the business, and simplifies compliance reporting.
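
As a deliberately simplified sketch of that automation idea, and not any IBM product API, the snippet below shows how a metadata catalog with auto-applied classification tags might enforce a masking policy at the moment data is accessed (every name here is hypothetical):

```python
# A deliberately simplified, hypothetical sketch of policy-aware data access
# through a governed catalog; it does not reflect any IBM product API.
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str
    location: str                            # e.g., a warehouse table or object path
    tags: set = field(default_factory=set)   # classification tags added automatically

CATALOG = {
    "sales_orders": CatalogEntry("sales_orders", "warehouse.sales.orders", {"pii:email"}),
    "inventory":    CatalogEntry("inventory", "lake/inventory.parquet"),
}

# Governance policies keyed by classification tag.
MASKING_POLICIES = {"pii:email": lambda row: {**row, "email": "***MASKED***"}}

def read_dataset(name: str, rows: list) -> list:
    """Resolve a dataset by catalog name and apply any masking its tags require."""
    entry = CATALOG[name]
    for tag in entry.tags:
        policy = MASKING_POLICIES.get(tag)
        if policy:
            rows = [policy(r) for r in rows]
    return rows

# Usage: a data scientist requests "sales_orders" and automatically receives
# masked email addresses, with no manual access request or review cycle.
sample = [{"order_id": 1, "email": "jane@example.com", "qty": 3}]
print(read_dataset("sales_orders", sample))
```

In a real data fabric the catalog, classification and policy enforcement are automated and centralized; the sketch only shows where the enforcement hook sits relative to the data consumer.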

It is an exciting time for the future of data. We can now mine the capabilities of a data fabric architecture to provide a more positive user experience that gets data into the hands of those who need it most with trust, transparency, and agility. The significance of a data fabric architecture is magnified with the emergence of the virtual enterprise and ecosystem partnerships.

Ecosystem partnerships and IBM as a living lab

In 2022, the IBM GCDO strategy also includes increasing attention to our business partnerships, a growing space in the data field. Leveraging an ecosystem of partners with complementary technologies can bring solutions to clients faster.

Additionally, the massive, heterogeneous, and innovative environment at IBM allows our GCDO to focus on solutions as part of a living lab. Acting as our own power user, we can test our solutions at scale to consistently provide a roadmap of insights back into our own products and partnerships. Our current partnership with Palantir showcases how we do this at scale.

Data and leadership as an ongoing conversation

When data strategy is prioritized, data can govern processes as well as augment the leadership experience. As a CDO whose role is that of change agent in the enterprise, I will continue to shape strategic conversations with leadership. And as we move further into 2022, our data strategy investments will continue to evolve alongside our offerings.

Source: ibm.com

Tuesday, 12 April 2022

Women on the verge of transformational leadership

IBM Exam, IBM Exam Prep, IBM Exam Preparation, IBM Learning, IBM Tutorial and Materials, IBM Career, IBM Jobs, IBM Skill

When IBM launched its first women in leadership study in 2019, the research revealed a persistently glaring gender gap in the global workplace. At that time, only 18% of senior leadership positions worldwide were held by women. And only 12% of organizations surveyed were going above and beyond their peers to address it head on by formalizing the advancement of women as a top ten business priority.

In the past two years, many organizations have seen an exodus of women during the Great Resignation and have felt the velocity of digital transformation. These shifts have coincided with a rise of social and cultural awareness to the forefront of the corporate world. But for the women who stay, the leadership pipeline has expanded. That’s the mixed news delivered by IBM’s Institute for Business Value (IBV) in new follow-up data from U.S.-based organizations.

This geography-specific update to the 2021 women in leadership study sought to understand if women’s leadership opportunities had advanced in the United States in the wake of the great digital shift.

A pipeline of plenty in the U.S.

Though the percentage of U.S. women in the C-suite and on executive boards has essentially remained flat, women have stepped into the talent pipeline over the past 12 months, an overall increase of 40%, according to the IBV. Those women’s chances of advancement are higher if their employer is led by a woman.

Enabled by more flexible work policies and a bring-your-whole-self-to-work ethos, some 5 million executive women in the United States now stand ready to lead corporate America. They’re finding they can be evaluated on their own merits, unencumbered by the pre-pandemic norms that favored in-person networking and longer hours at the office or with clients.

When given prime opportunities to lead, women get results. In fact, companies where women lead perform better financially, finds a McKinsey survey, generating up to 50% higher profits. The 12% identified in IBM’s 2019 survey as the most committed to driving change agree. They all view gender inclusivity as a driver of financial performance, compared to only 36% of other organizations surveyed.

Women remain underrepresented at the top. Let’s do something to change that.

Though the leadership pipeline of women has grown across a variety of professional, managerial and SVP roles, the newest IBM research warns that many are still not breaking into the C-suite or the executive board. Startlingly, fewer women hold top C-suite and executive board positions in 2022 than they did in 2021, a drop of roughly a percentage point.

But those few who do end up in a senior seat are driven to change the status quo. In fact, 72% of women CEOs, versus 16% of their male counterparts, assert that “advancing more women into leadership roles in our organization is among our top formal business priorities.”

But the job of lifting women up can’t be given to this small population of women CEOs alone. A failure by organizations to take concerted action could leave gender parity moving at a “snail’s pace” worldwide. At the current rate of change, the global gender gap will not close for 135 years, as estimated by the World Economic Forum. A persistent trend of gender pay disparity, which alarmingly worsens as women age, could have drastic economic impacts as well, especially as the number of single mothers continues to increase.

With so much distance to cover, organizations need to apply the same attention to equal advancement that they reserve for other organizational goals. It’s time to close the gap between awareness and action on equality in position and pay.

How the demand for AI skills shifts the gender equation

The digital transformation of the wider workplace is expanding demand in the technical sector. The changing nature of work has resulted in a surge in demand for AI and machine learning skills across all industries. The number of data scientist and engineering roles in particular has grown an average of 35% annually, according to LinkedIn’s 2021 U.S. Emerging Jobs Report.

This proliferation of technology jobs has opened a wider job market. That has meant more opportunities for women, even those with little background in technology, to jump-start lucrative careers. But while these technical fields have become easier for women to enter, advancement hasn’t been as fast as in other sectors, with only 52 women promoted for every 100 men, according to a 2021 McKinsey report coauthored with LeanIn.Org.

The report points out that mentors and sponsors can help women in technical roles connect with experienced colleagues, develop their leadership acumen, and hone their communication skills. Direct managers can play an important role, too.

One such example is Gabriela de Queiroz, an open-source advocate and IBM chief data scientist who manages a team of data scientists and AI experts at IBM, six out of seven of whom are women. She’s committed to upholding IBM’s tradition of maintaining diversity in an industry where women have traditionally been underrepresented.

To help her data-savvy reports become valuable contributors to the business, de Queiroz looks to help the six women on her team better convey the business value of data through storytelling.

When de Queiroz speaks in public, she brings a junior team member to deliver a presentation alongside her, helping them build the confidence they’ll need to advance their careers. “My whole pitch is, give back. This is a cycle.”

What your organization can do differently right now

From hiring managers and HR to public policymakers, it’s time to take action and confront facets of gender inequality that impact both the home and the workplace.

Working mothers in the U.S. risk burnout when flexible work arrangements are lacking. Demanding careers and an inability to take significant paid leave are two reasons why U.S. mothers are more likely than fathers to quit their careers while still in their prime. Another is a lack of available childcare, according to a survey published by the Pew Research Center.

As companies look to bring staff back into the office, they should consider the obstacles that could prevent women from stepping into leadership roles. Some companies have gone the extra mile with fully paid, gender-neutral parental leave, phase-back programs after parental leave and backup childcare. Flexibility in work hours and location is also important. Such efforts landed IBM at No. 1 on Comparably’s list of the best large companies for women in 2021.

Digital transformation itself can play a role in elevating HR to be more strategic, agile and responsive. A recent McKinsey report calls out self-service platforms that give line managers more responsibility for recruiting and more flexibility on skills evaluation. The automation of traditional administrative tasks can allow HR to more thoroughly collect and analyze employee data, including the experience of women or women’s networking groups, to make more informed decisions.

There’s a lot of work ahead to ensure equal advancement in the workplace. Women have made up 51.1% of the population since 2013 and outnumber men in most U.S. states. It’s well past time that organizations look more like the world we live in.

Businesses everywhere have entered a new era of digital reinvention, fueled by innovations in hybrid cloud and artificial intelligence. IBM is uniquely positioned to help our clients succeed in this radically changed business landscape by partnering with them to deliver on five levers of digital advantage: shape and predict data-driven outcomes, automate at scale for productivity and efficiency, secure all touchpoints all the time, modernize infrastructures for agility and speed, and transform with new business strategies, open technologies and co-creation.

Source: ibm.com