Tuesday, 17 May 2022

IBM Engineering Workflow Management is the tool of choice for the IBM zHW team


Imagine a digital engineering workplace where thousands of people are building a single system. This system is used by two-thirds of the Fortune 100, 45 of the world’s top 50 banks, 8 of the top 10 insurers, 7 of the top 10 global retailers and 8 of the top 10 telcos as a highly secured platform for running their most mission-critical workloads. The development effort involves coordinating manufacturing, chip design, hardware, firmware, testing and defect tracking, while also meeting stringent regulatory requirements across a variety of industries and governmental standards.

This is the challenge that teams working on IBM Z® (the family name for IBM’s z/Architecture mainframe computers) face with every new product release.

Looking to the future

The IBM Z developers embarked on an extensive year-long evaluation of the development tools available in the marketplace that would help them manage their daunting engineering workflow. To carry out this evaluation, stakeholders created a matrix chart showing which solutions included the required integration capabilities for the tools used by the team.

The team selected components of the IBM Engineering Lifecycle Management (ELM) solution, namely IBM Engineering Workflow Management (EWM), a fully integrated software development solution designed for complex product management and engineering, as well as for large, distributed development organizations that produce mission-critical systems subject to regulatory compliance. But this choice was not a foregone conclusion.

“By being completely objective and allowing the criteria and data to do the talking, we were led to EWM,” said Dominic Odescalchi, project executive manager of IBM zHW program management. “EWM was the consensus tool that we collectively agreed upon to provide the best solution.”

The advantage of IBM ELM tools

The zHW platform development team will leverage EWM as the central hub of engineering data, taking advantage of the customization capabilities within the broader ELM solution. This way, every team can adapt the processes that fit them best while remaining coordinated across one view of the development data and progress. Managing this data is critically important given the highly regulated workloads that are run on these systems across a variety of industries, governmental agencies and countries.

Given the holistic design of IBM Engineering Lifecycle Management, the team has also adopted the IBM Engineering Test Management tool to manage the comprehensive verification and validation of the hardware, again leveraging the one view and traceability across development data.

“With EWM’s integrated tool stack, key data will be readily available through connection to various team repositories,” said Odescalchi. “This will enable us to kick the doors wide open to automating and aggregating data. It’s going to free up countless hours to focus on performing higher value activities.”

Source: ibm.com

Saturday, 14 May 2022

How is SWIFT still relevant after five decades?


Too many people in the payments industry today hold the misconception that the SWIFT network is only for cross-border payments. This was indeed the case in 1973, when 239 banks from 15 countries joined forces to create an efficient, automated, and secure payments network.

At launch, the Society for Worldwide Interbank Financial Telecommunication (SWIFT) was built on three pillars: a secure and reliable communication protocol, a set of message standards, and a flow of new services aligned with its members’ needs.

A global network with a global reach

These pillars remain just as relevant nearly five decades later. So much more than a “cross-border payments network,” SWIFT has grown to serve more than 11,000 members in over 200 countries, providing a wide array of financial messaging services and influencing and innovating payments worldwide.

SWIFT has reimagined domestic high-value payments. Over 60 market infrastructures, covering 85 countries, rely on the SWIFT network to clear and settle domestic transactions. SWIFT FileAct, a bulk file-exchange service, lets correspondents send and receive files, most often bank statements or batches of low-value, high-volume transactions.

SWIFT has extended the network to non-bank financial institutions, allowing the exchange of securities, foreign exchange and all other types of financial messages needed by its members. In fact, today more than 50% of messages on the SWIFT network involve securities trade transactions.

As the network expanded to cover most financial institutions worldwide, SWIFT opened its doors to large corporations such as Microsoft and GE. SWIFT became these companies’ single standardized connection to all their banks, adding efficiency and cost savings to treasuries worldwide.

Message standards and worldwide influence

From its inception, one of the key pillars of the SWIFT network has been the message standard, a common language understood and processed by all its members.

The ISO 15022 standard, more commonly known as the SWIFT MT or Message Type standard, was introduced in 1995. Its structure was similar to that of the telex technology the network replaced. While it has evolved over the years, the MT standard remains the most used message format on the SWIFT network, and it has made its way into many domestic and private networks worldwide.
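
To make the format concrete, below is the skeleton of an MT103 (single customer credit transfer). The field values are invented for illustration, and only a minimal subset of tags is shown:

    {1:F01BANKBEBBAXXX0000000000}{2:I103BANKDEFFXXXXN}{4:
    :20:REFERENCE12345
    :23B:CRED
    :32A:220517EUR12500,00
    :50K:/12345678
    JOHN DOE
    :59:/DE89370400440532013000
    JANE SMITH
    :71A:SHA
    -}

Each colon-delimited tag has a fixed meaning (:20: is the sender’s reference; :32A: carries the value date, currency and amount), which is what makes the format compact but also rigid.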

In the late 90s, SWIFT realized that the MT standard, although very useful, was restrictive in light of evolving technologies: it could not carry all the data that would need to travel with each transaction. In 1999, SWIFT decided to adopt XML and develop a message standard that could capture the richness of that data. ISO 20022 is often referred to as the “new standard,” but it was actually launched in 2004 in collaboration with SWIFT members. The new standard was slow to catch on, since adoption was voluntary and required heavy investment in backend systems. But on the heels of a 2019 mandate, ISO 20022 is now being deployed in every major network worldwide, and it is the foundation for interoperability.
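
For comparison, here is roughly what the same payment can look like in the ISO 20022 world. This fragment is modeled on the pacs.008 FI-to-FI customer credit transfer, heavily trimmed and with invented values rather than a schema-complete message:

    <Document xmlns="urn:iso:std:iso:20022:tech:xsd:pacs.008.001.08">
      <FIToFICstmrCdtTrf>
        <GrpHdr>
          <MsgId>REFERENCE12345</MsgId>
          <CreDtTm>2022-05-17T10:30:00</CreDtTm>
          <NbOfTxs>1</NbOfTxs>
        </GrpHdr>
        <CdtTrfTxInf>
          <IntrBkSttlmAmt Ccy="EUR">12500.00</IntrBkSttlmAmt>
          <IntrBkSttlmDt>2022-05-17</IntrBkSttlmDt>
          <Dbtr><Nm>JOHN DOE</Nm></Dbtr>
          <Cdtr><Nm>JANE SMITH</Nm></Cdtr>
          <CdtrAcct><Id><IBAN>DE89370400440532013000</IBAN></Id></CdtrAcct>
        </CdtTrfTxInf>
      </FIToFICstmrCdtTrf>
    </Document>

The nested, named elements are what give ISO 20022 its richness: parties, accounts and remittance data travel as structured fields rather than being squeezed into free-text lines.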

Security

Given its prominence in finance, SWIFT has become a prime target for hackers, and multiple hacking collectives have targeted the SWIFT network in attempts to divert funds. In 2016, hackers stole over $80M from a large bank, money that remains unrecovered today. Although the network itself was not breached, SWIFT quickly realized that not all of its 11,000 members met industry-standard security levels.

To level the playing field, SWIFT launched the Customer Security Program, a set of 27 security controls forcing each member to completely reassess its infrastructure, thereby securing the overall network. Within the first year, 91% of SWIFT members (covering over 99% of volume) had confirmed their compliance with the controls. This shows the influence the SWIFT organization has developed over the years to ensure compliance in a typically slow-paced industry.

Innovation

SWIFT has not rested on its laurels. The network is continuously focused on innovation to improve its members’ experience.

The SWIFT global payment innovation (SWIFT gpi) service launched in 2017 with the objective of delivering cross-border payments faster, cheaper and with full transparency and traceability. Following its successful mass adoption, over 90% of wire transactions were credited within 24 hours, including 40% credited within 30 minutes. SWIFT gpi is now extending its capabilities to reduce the number of rejected transactions through pre-validation and to deliver value to corporations looking for transparency of fees and better traceability on inbound and outbound treasury payments. With the launch of SWIFT Go, the foundation piece for real-time cross-border payments, the gpi model is also being applied to low-value payments.

In the past five years alone, SWIFT has launched several new services and completed multiple proofs-of-concept, ranging from the first real-time cross-border payment to assessing the use of blockchain technology within the SWIFT network, all while implementing financial-crime compliance and data analytics services.

The future of SWIFT

If you still think SWIFT is “just for cross-border payments,” you might need to take a second look. SWIFT sits at the heart of payments worldwide, and an economy cut off from it risks being severed from the global financial system.

As a result of recent global events, interest in SWIFT and its functions in financial services has certainly grown. Banks and large corporations have relied on SWIFT for secure messaging since 1973, but it doesn’t often get public visibility. By consistently delivering efficient and secure payments, SWIFT has earned the trust of 11,000+ members. As long as it continues to listen to their needs and collaborate and innovate to provide new value, SWIFT will continue to grow and dominate payments worldwide.

Source: ibm.com

Thursday, 12 May 2022

Digital engineering is the answer when flawless, accountable production means life or death


Digital technology transformations have streamlined analog processes for decades, making complicated tasks easier, faster, more intuitive and even automatic. The modern car is the perfect expression of this idea. Cars produced in the last few decades are more than cars — they’re a bundle of digital processes with the ability to regulate fuel consumption, detect unsafe conditions, understand when the vehicle is coming close to a collision and ensure the driver doesn’t unknowingly drift out of their lane.

The array of sensors and actuators, cameras, radar, lidar and embedded computer subsystems in these vehicles can’t just be useful gadgets; they must flawlessly ensure the safety of the driver and passengers. These incredibly complex systems are often developed by different engineering teams or companies. Without the proper development processes, bugs can go unnoticed until after the model ships. For car manufacturers, ensuring that their systems are safe is a matter of life and death.

If a car manufacturer finds a flaw in the self-driving system only after the model has shipped, they face a clear crisis. There isn’t time to contact the dealers, to email drivers or to erect billboards warning of the flaw. The issue must be fixed immediately, or the car manufacturer could face irreparable damage. If the computer system was designed with a firm digital engineering foundation, the manufacturer could easily fix the issue by sending out a “cloud burst” to update every car on the network before the flaw becomes dangerous.

Digital product engineering enables complex, high-stakes development

The goal in digital engineering is to not only minimize flaws in every outgoing vehicle, but to establish a development environment to ensure that once a flaw is detected, it can be fixed quickly and safely. To achieve this, we recommend that companies embrace digital product engineering and digital thread technology. A digital thread is an engineering process whereby a product’s development can be digitally traced throughout its lifecycle, upstream or downstream.

Since the invention of digital technology, businesses have been using computers to automate shipping systems, supply systems and warehouse systems. As the power of that technology continues to grow, businesses are applying the same principles of automation to the development process as well.

Businesses can now create an easy-to-access digital repository for collaborators to work on or view. Updates to the product are made within that central source, ensuring everyone has access to the most up-to-date version of the product.

Digital product engineering is an evolving process, a future state that organizations need to achieve to make the world a safer, more secure place. Digital engineering holds such promise that the US Department of Defense has stipulated in its digital engineering strategy that any subcontractors it works with must use digital engineering processes to ensure transparency, safety and accountability for their high-tech defense systems.

At the highest level, digital engineering is a holistic, data-first approach to the end-to-end design of complex systems. Models and data can be used and shared throughout the development of the product, eschewing older document-based methods. The goal is to formalize the development and integration of systems, provide a single authoritative source of truth, improve engineering through technological innovation, and establish supporting engineering infrastructure to ease development, collaboration and communication across teams and disciplines.

Digital thread can provide users with a logic path for tracking information throughout the system’s lifecycle or ecosystem. By pulling on the digital thread, engineering teams can better understand the impact of design changes, as well as manage requirements, design, implementation and verification. This capability is vital for accurately managing regulatory and compliance requirements, reporting development status and responding quickly to product recalls and quality issues. In terms of digital engineering, a digital thread plays a significant role in connecting engineering data to related processes and people. But a digital thread is not plug and play; it’s a process that must be designed from the ground up.
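
As a minimal sketch of that idea, a digital thread can be pictured as a graph of typed links between engineering artifacts that can be walked in either direction. The artifact names and link types below are hypothetical, not any particular ELM data model:

    # Toy digital-thread model: artifacts joined by typed, bidirectional links.
    from collections import defaultdict

    class DigitalThread:
        def __init__(self):
            self.links = defaultdict(set)  # artifact -> {(relation, artifact)}

        def link(self, upstream, relation, downstream):
            # Record both directions so a change can be traced upstream
            # ("why does this exist?") or downstream ("what does it affect?").
            self.links[upstream].add((relation, downstream))
            self.links[downstream].add(("inverse:" + relation, upstream))

        def trace(self, artifact, depth=0, seen=None):
            # Depth-first walk that prints everything reachable from an artifact.
            if seen is None:
                seen = {artifact}
            for relation, other in sorted(self.links[artifact]):
                if other not in seen:
                    seen.add(other)
                    print("  " * depth + f"{artifact} --{relation}--> {other}")
                    self.trace(other, depth + 1, seen)

    thread = DigitalThread()
    thread.link("REQ-12 lane-keep assist", "refined-by", "DES-4 camera module")
    thread.link("DES-4 camera module", "implemented-by", "SW-88 lane_detect.c")
    thread.link("REQ-12 lane-keep assist", "verified-by", "TEST-31 lane-drift sim")

    # Impact analysis: everything touched if REQ-12 changes.
    thread.trace("REQ-12 lane-keep assist")

Pulling on the thread from the requirement immediately reveals the design, code and test artifacts that a change would affect, which is exactly the impact analysis described above.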

The IBM digital engineering solution

To make this transformation one step easier for your organization, IBM® Engineering Lifecycle Management (ELM) can establish the ideal base for your company to pursue digital engineering transformation. ELM is built from the ground up around the digital thread model. Each lifecycle application seamlessly shares engineering data with every other lifecycle application, such as downstream software, electronics and mechanical domain applications. ELM leverages the proven W3C linked data approach using Open Services for Lifecycle Collaboration (OSLC) adapters for both internal and external information exchange — the same approach used to seamlessly connect web applications across industries.

ELM leverages OSLC to connect data and processes along the engineering lifecycle. By enabling this standards-based integration architecture, engineering teams can avoid the complications inherent in developing and maintaining proprietary point-to-point integrations.
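
What consuming such a linked-data interface looks like can be sketched in a few lines of Python; the endpoint URL below is a placeholder, and a real OSLC server additionally requires authentication and service-provider discovery first:

    # Fetch one OSLC resource as RDF and list the links it carries.
    # Requires: pip install requests rdflib
    import requests
    from rdflib import Graph

    RESOURCE_URL = "https://elm.example.com/rm/resources/REQ-12"  # placeholder

    response = requests.get(
        RESOURCE_URL,
        headers={
            "Accept": "application/rdf+xml",  # OSLC resources are served as RDF
            "OSLC-Core-Version": "2.0",
        },
    )
    response.raise_for_status()

    graph = Graph()
    graph.parse(data=response.text, format="xml")

    # Every triple is a (subject, predicate, object) statement; objects that
    # are URIs point at other artifacts, which is the thread we can follow.
    for subject, predicate, obj in graph:
        print(subject, predicate, obj)

Because every artifact is addressable by URI and described in RDF, traversing the thread amounts to following links, with no product-specific integration code.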

Lumen Freedom, a manufacturer of wireless charging units for electric vehicles, wants to provide an untethered world for electric vehicle owners. As Lumen pioneered this innovation, its design management became increasingly complex and difficult to handle. To level up its product development goals, Lumen adopted digital engineering lifecycle management tools from ELM that allow it to capture, trace and analyze mechanical, hardware and software requirements throughout the entire product development process. “Given that DOORS® Next and ELM are essentially standards in the automotive industry, we chose IBM for our preferred toolchain,” says David Eliott, Systems Architect at Lumen Freedom.

ELM maintains a linked data foundation for digital engineering and provides data continuity and traceability within integrated processes. With global data configuration, engineering teams can define a consistent baseline and provide central analytics and reporting components. ELM fosters consistency across all data while providing an automated audit trail, ensuring ease of access to digital evidence for regulatory compliance.

Source: ibm.com

Tuesday, 10 May 2022

How Canada is growing its data economy


The data economy is booming. In 2021, IDC estimated the value of the data economy in the U.S. at USD 255 billion, and that of the European Union at USD 110 billion. In these and many other regions, growth in the data economy outpaces GDP. IBM has examined Canada’s particular potential for data leadership, with lessons for any other country hoping to compete in the data economy.

Will the value of data in Canada reach CAD 1 trillion before 2030? In mid-2019, Statistics Canada estimated that Canadian investment in “data, databases and data science” had grown over 400% since 2005. At the upper limit, the value of the stock of data, databases and data science in Canada was $217B in 2018, roughly equivalent to the stock of all other intellectual property products (software, research and development, mineral exploration) and more than two-thirds the value of the country’s crude oil reserves.

As the world continues to rapidly change around us, ground-breaking opportunities are presenting themselves that will shift the fundamentals of how businesses, governments and citizens function. This shift will be supported by enormous amounts of data, regardless of the part of society in which these transformations take place.

What is the data economy?

The amount of data throughout the world has almost doubled in just two years, and volumes are expected to triple by 2025. With data’s unprecedented growth, important decisions will have to be made about how to use it, and these decisions will determine the commercial success or failure of the digital revolution.

The data economy is the social and economic value attained from data sharing. While data has no inherent value, its use does. When it is organized, categorized and transformed into information that can drive innovation, solve complex problems, create new products or provide better services, its value becomes apparent.

While data can solve critical challenges in our society, most of its value is inaccessible due to the siloed and fragmented nature of most data ecosystems. Governments cannot develop effective policies; business leaders are unable to fully tap their resources; and citizens are prevented from making informed decisions. Leveraging data to benefit society depends upon the number of connections that we can form between contributors and consumers, among enterprises and governments. A prosperous data economy must be linked to intelligent governance, administered for the good of everyone.

Why does it matter?

1. Citizens can assume more control of their data, ensuring its appropriate use and security while benefiting from new products and services.

2. Businesses can customize their products to align with their clients and better manage regulations.

3. Governments can collaborate on national and international strategies to achieve optimum effectiveness on a global scale.

And what can it do for you?

The profound implications of well-managed global data exchanges illuminate the vision of a better world, opening the window to myriad possibilities:

◉ Fighting disease through shared research on diagnostics and therapeutics

◉ Identifying global threats and reacting to them quickly

◉ Deploying advanced applications to solve organizational issues, unlocking innovation

◉ Harnessing data to promote environmental health, prevent environmental degradation and protect at-risk ecosystems

◉ Coordinating data to benefit industrial sectors such as tourism or agriculture

Canada has the potential to create a world-leading data economy, positioning us to develop innovations that will allow us to compete globally. We have many advantages in our favour: a highly trained workforce strengthened by our skills-based immigration system; our government’s commitment to accountability, security and innovation; and our unique history, geography and public policies.

Our success will depend upon a collective effort to promote engagement and facilitate the transition to a data-driven economy. Together with its financial investment, Canada must focus on cultivating data literacy among its citizens, as businesses increasingly embrace digitized platforms.

Fast-tracked by COVID-19, investment in data science has accelerated, alongside the proliferation of emerging technologies. By leveraging the opportunities in the rising data economy, Canada can unlock a trillion-dollar benefit within the next decade.

Source: ibm.com

Sunday, 8 May 2022

Computer simulations identify new ways to boost the skin’s natural protectors

Working with Unilever and the UK’s STFC Hartree Centre, IBM Research uncovered how skin can boost its natural defense against germs.

As reported in Biophysical Journal, small-molecule additives can enhance the potency of naturally occurring defense peptides. Molecular mechanisms responsible for this amplification were discovered using advanced simulation methods, in combination with experimental studies from Unilever.

When in balance, our skin and its microbiome form a natural partnership that helps to keep our skin healthy and defends against external threats, like pollutants and germs that can cause infections. Disturbances in that partnership (called dysbioses) can throw the microbiome out of balance, contributing to body odor and skin problems and, in more extreme cases, even leading to medical conditions like eczema (or atopic dermatitis).

In addition to hosting your microbiome, your skin is an immunologically active organ, contributing to your body’s innate immune system with its naturally mildly acidic pH, mechanical strength, lipids, and a natural release by skin cells of protein-like materials called antimicrobial peptides (AMPs). Together, these form the first line of defense against infection-causing microbes that land on your skin.

Unilever R&D and its global network of research partners have been investigating the role of skin immunity and AMPs for over a decade. When Unilever needed to develop new ways to understand, at the molecular level, how its products interact with AMPs to enhance skin defense activity, the company turned to IBM Research.

IBM and Unilever — in collaboration with STFC, which hosts one of IBM Research’s Discovery Accelerators at the Hartree Centre in the UK — used high performance computing and advanced simulations running on IBM Power10 processors to understand how AMPs work and translate this knowledge into consumer products that boost the effects of these natural-defense peptides. This work builds upon a long-standing partnership between IBM, Unilever and the STFC Hartree Centre aimed at advancing digital research and innovation.

As we report in Biophysical Journal, our work alongside STFC’s Scientific Computing Department found that small-molecule additives (organic compounds with low molecular weights) can enhance the potency of these naturally occurring defense peptides. Using our own advanced simulation methods, in combination with experimental studies from Unilever, we also identified specific new molecular mechanisms that could be responsible for this improved potency.


Simulating molecular interactions


Although there’s been a lot of research focused on designing new, artificial antimicrobials, Unilever wanted to concentrate on boosting the potency of the body’s naturally occurring germ fighters with small-molecule additives. IBM Research has already developed computational models for membrane disruption and permeation through physical modeling, but Unilever’s challenge was a new area of exploration for us, given the extremely complex nature of having to model how AMPs interact with the skin and calculate which would be the most efficacious.

Several years ago, Unilever scientists in India discovered that niacinamide, an active form of vitamin B3 naturally found in your skin and body, could enhance AMP expression levels in laboratory models. At the same time, Unilever’s team observed an unexpected enhancement of AMP antimicrobial activity in cell-free systems. Wanting to understand why this enhanced activity was happening, Unilever, IBM and STFC initiated a research collaboration.

To answer Unilever’s question, we developed computer simulations to investigate how single molecules interact with bacterial membranes at the molecular scale to demonstrate the fundamental biophysical mechanisms in play. These models then formed the basis of more complex simulations that examined in similar detail how small molecules interact with skin defense peptides to affect their potency. The results of these simulations were compared to the results of extensive laboratory experimental tests conducted by Unilever to confirm our computational predictions on a range of niacinamide analogs with differing abilities to promote AMP activity in lab models.

We first used physical modeling to determine the effects of the B3 analogs on LL37, a common AMP on human skin. We then simulated these molecules using high-performance computing to predict their performance and generate detailed time-bound simulations that allowed us to “see” these interactions in molecular detail. This work enabled us to demonstrate that niacinamide (and another analog, methyl niacinamide) could indeed naturally boost the effect of the AMP peptide LL37 on the bacterium Staphylococcus aureus, an organism widely associated with skin infections.
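
The study’s actual simulations are far beyond a blog snippet, but the core ingredient of such physical models, scoring how favorably a molecule sits near a membrane surface by summing pairwise interaction energies, can be sketched briefly. The coordinates, charges and parameters below are invented purely for illustration:

    # Toy pairwise-energy model: Lennard-Jones + Coulomb terms between a
    # small molecule's atoms and membrane headgroup atoms. Illustrative only.
    import numpy as np

    def interaction_energy(mol_xyz, mol_q, mem_xyz, mem_q,
                           epsilon=0.5, sigma=3.5, coulomb_k=332.06):
        # Distance between every molecule atom and every membrane atom (angstroms).
        d = np.linalg.norm(mol_xyz[:, None, :] - mem_xyz[None, :, :], axis=-1)
        lj = 4 * epsilon * ((sigma / d) ** 12 - (sigma / d) ** 6)  # van der Waals
        coul = coulomb_k * (mol_q[:, None] * mem_q[None, :]) / d   # electrostatics
        return lj.sum() + coul.sum()  # kcal/mol with these constants

    # Three-atom "molecule" approaching a four-atom membrane patch (invented data).
    mol = np.array([[0.0, 0.0, 6.0], [1.2, 0.0, 6.5], [-1.2, 0.0, 6.5]])
    mol_charges = np.array([0.3, -0.15, -0.15])
    membrane = np.array([[x, y, 0.0] for x in (-2.0, 2.0) for y in (-2.0, 2.0)])
    mem_charges = np.array([-0.2, -0.2, -0.2, -0.2])

    # Scan the approach distance to see where the energy is most favorable.
    for dz in np.arange(0.0, 4.1, 0.5):
        e = interaction_energy(mol - [0, 0, dz], mol_charges, membrane, mem_charges)
        print(f"offset {dz:3.1f} A  energy {e:8.2f} kcal/mol")

In real studies, terms like these sit inside full molecular dynamics force fields integrated over millions of timesteps; the scan above only illustrates why charge placement and geometry determine how strongly a molecule is drawn toward a membrane.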

A radical discovery process — and a map for hunting new bioactives


Our work has helped us understand how these molecules can improve hygiene, but by pairing simplified model systems with advanced computation, it also gave us a deeper understanding of the molecular mechanisms responsible for enhanced AMP performance and radically accelerated technology evaluation. We believe this workflow can allow us to create innovative and sustainable products that can help to protect us from pathogens both now and in the future.

The scientific method, applied to peptides.

This research was made possible by our and our partners’ capabilities in high-performance computing. Combining these technologies allowed us to supercharge the scientific method to promote discovery at a far more rapid pace, a process we’ve come to call accelerated discovery. 

We’re excited that our work can help Unilever better understand how to leverage AMPs in future products, reaching countless people around the world through the development of effective and sustainable hygiene products, while complying with applicable regulations.

For us at IBM, this work is also the start of an exciting new chapter as we explore how it can help accelerate research into other harmful pathogens, such as methicillin-resistant Staphylococcus aureus (MRSA), that can cause severe disease if their growth is not controlled. More broadly, this work opens a new pathway to discovering natural, small-molecule boosters that amplify the function of antimicrobial peptides. Our understanding of these mechanisms and the process we used can be applied to other research, for example, in the search for novel antimicrobials.

This was a cross-industry and academic partnership that spanned the globe, with scientists from India and the UK coming together to solve germane and pressing problems with real-world application. We hope one of the lasting impacts of this work is that for future research in this field, we’re able to choose or devise computational models simple enough to capture essential biological processes, without adding unnecessary time or complexity.

Source: ibm.com

Saturday, 7 May 2022

Difference between AIX and IBM i


1. AIX:

AIX is a series of proprietary operating systems provided by IBM. AIX stands for Advanced Interactive eXecutive. Initially it was designed for the IBM RT PC RISC workstation; later it ran on a variety of hardware platforms, including the IBM RS/6000 series, PowerPC-based systems, System/370 mainframes, PS/2 personal computers and the Apple Network Server. It is one of five commercial operating systems with versions certified to The Open Group’s UNIX 03 standard. The first version of AIX was launched in 1986. The latest stable version of AIX is 7.3.

2. IBM i:

IBM i is an operating system or operating environment provided by IBM. It provides an abstract interface to IBM Power Systems, working through layers of low-level machine interface code, or microcode, that reside above the Technology Independent Machine Interface and the System Licensed Internal Code, or kernel. It enables the IBM Power platform to support a wide variety of business applications and can co-exist alongside other operating systems. It is a closed-source operating system. The first version of IBM i was launched in 1988. The latest stable version of IBM i is 7.4.

Difference between AIX and IBM i:

AIX | IBM i
It was developed and is owned by IBM. | It was developed and is owned by IBM.
It was launched in 1986. | It was launched in 1988.
Its target system types are server, NAS and workstation. | Its target system types are minicomputer and server.
Its kernel type is monolithic with modules. | Its kernel type is microkernel and virtual machine.
Its license is proprietary. | Its license is proprietary.
It has been used on personal computers (such as the PS/2). | It is not used on personal computers.
It has run on hardware from several vendors. | It runs only on IBM hardware.
File systems supported include JFS, JFS2, ISO 9660, UDF, NFS, SMBFS and GPFS. | File systems are accessed through the Integrated File System (IFS), which includes QSYS.LIB, QOpenSys, UDFS and NFS.

Source: geeksforgeeks.org

Tuesday, 3 May 2022


How to win your SWIFT challenge


We are living in a cloud era, with new tools, programming languages, and technologies evolving at a much higher speed than even just 2 years ago. Workers need to refresh and resharpen skills on a regular basis. Financial institutions must embrace these changes and be prepared for the technological shifts and the innovative features needed to compete in the financial market.

New fintechs emerge every year with greater ideas and faster technologies. Initiatives like blockchain and real-time transaction settlements, Decentralized Finance (DeFi), and the Internet of Things (IoT) place pressure on larger institutions to ensure they move at the same pace and direction. It is challenging for these larger FIs to stay agile in the payments world when they are hindered by legacy technologies and the traditional ways of managing them.

Keeping up with SWIFT

If you are already in the financial sector, you have likely heard of SWIFT, the Society for Worldwide Interbank Financial Telecommunication. SWIFT is a global member-owned cooperative and the world’s leading provider of secure financial messaging services.

Today, as goods and services move more quickly and across greater distances, financial transactions need to move further and faster as well. SWIFT securely moves value around the world while meeting the high demands and standards for regulatory compliance. No other organization can match the scale, precision, pace and trust that SWIFT provides to its user community.

SWIFT continues to refresh and evolve its platform to ensure it remains modern, powerful, reliable and feature-rich. SWIFT regularly adds new innovative capabilities and functionalities, develops new forms of connectivity, eases service consumption, and ensures secure user access.

In addition, SWIFT constantly renews its product portfolio in response to the needs of its user community. It fosters a culture of innovation to bring new offerings to market while preserving a no-risk approach to the maintenance and evolution of its mission-critical core.

The challenges financial institutions face in keeping up with SWIFT are heavy and demanding, but meeting them is crucial for survival. FIs must move quickly to adopt all of SWIFT’s initiatives and embrace new changes with agility. What does this mean for these FIs?

Keeping up takes a heavy investment in the FI’s core platforms, infrastructure and resources, which requires continuous learning and certification, operations excellence, improved skills to support new platforms, and tighter SLAs to guarantee a reliable service for clients.

Client story: a turnkey solution from the IBM Payments Center™

One institution that met these challenges is an essential FI in the Canadian economic system. This FI once ran a heavy on-premises SWIFT infrastructure with unsupported machines and operating systems and limited in-house skills to support application operations. In addition, it faced SWIFT’s requirements to adopt new standards, keep pace with the SWIFT roadmap and the ISO 20022 migration, and apply the frequent updates and patches needed to secure a sensitive cross-border payments platform. The FI also faced the rising cost of the support and operations needed for the latest infrastructure upgrades.

The IBM Service Bureau for SWIFT from the IBM Payments Center™ (IPC) was a turnkey solution to address this client’s challenge. Over the span of a year, the IPC built a dedicated SWIFT infrastructure into IBM’s private cloud, operated entirely by IBM and supported by SWIFT certified experts. No other SWIFT service bureau could offer a solution at this scale.

The solution consisted of deploying a fully redundant SWIFT infrastructure with mission-critical components: SWIFT’s Alliance Connect Gold was deployed by IBM in multiple sites; dual Alliance Access and Web Platform instances were deployed in each site; hot-standby cross-connected SWIFT instances were established; and fully redundant back-office connectors were implemented, with the entire setup guaranteeing 99.99% uptime.
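
For context on what that figure commits to: 99.99% uptime leaves an outage budget of 0.01% of the year, or roughly 0.0001 × 365 × 24 × 60 ≈ 53 minutes of total downtime per year, across planned and unplanned events combined.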

The complete Customer Security Program (CSP), as prescribed by SWIFT, was an integral part of the solution. As a result, the client didn’t have to implement all the compliance controls themselves. Moreover, patches, new releases and SWIFT standards deployments were fully managed by the IBM SWIFT team. All this was crowned with 24/7 fully managed operation and support. The CSP provided real value to the FI by reducing or eliminating many of the costly challenges it faced.

Knowing its core application is being handled with care, the client has regained peace of mind and can now focus more on its core business.

At the end of the day, FIs must choose their battles, and a technology-focused battle, with ever-increasing costs, demands and skills, isn’t easy. The IBM Payments Center’s deep experience in technologies and payments helps FIs across the world win at every scale.

Source: ibm.com

Sunday, 1 May 2022

Using AI to reinvent the enterprise


From how businesses communicate with their customers through virtual assistants, to automating key workflows and even managing network security, there is no doubt that AI is a catalyst for accelerating top-line impact, causing disruption and unlocking new market opportunities.

At IBM’s recent Chief Data and Technology Officer Summit, I had an exciting conversation with Mark Foster, Chairman, IBM Consulting, where we talked about how enterprises are using AI to reinvent themselves, their main challenges and how they see investing in AI over the next 24 months.

With the accelerated pace of many organizations’ digital transformation, we have seen the emergence of new platform business models. These models enable enterprises to make better use of data, and achieve their strategic business objectives, through improved service to their clients, more efficient operations, and better experiences for their employees.

Organizations have been digitally transforming in two ways simultaneously: from the inside-out and the outside-in. The ability to apply AI, automation, blockchain, the Internet of Things (IoT), 5G, cloud and quantum computing at scale drives the inside-out cognitive transformation of organizations. And organizations also experience outside-in reinvention, a new way to reach, engage and enable customers to interact with the enterprise, with responsible use of the exploding volumes of data companies now hold.

Now we are seeing a third dimension of digital transformation: the openness of business platforms across their ecosystems, resulting in a Virtual Enterprise. By stretching intelligent workflows and virtualized processes across broader systems, a Virtual Enterprise compounds its return on investment through the resulting ecosystems, digital workflows and networked organizations. The Virtual Enterprise is supported by a “golden thread” of value that animates the enterprise and binds ecosystem participants. A key characteristic of the Virtual Enterprise is data-led innovation: its openness accelerates access to new sources of product and service innovation, with technologies like AI making that access possible.

The challenges to successful AI adoption

1. Strategic perception – With the advent of the Virtual Enterprise, the complexity of organizations has increased. While some enterprises have a clear vision of what they want to be, many are struggling with that big picture.

2. Execution – Delivering transformation at scale remains the main challenge for many enterprises to continue their digital reinvention. How fast and how much can the business model be transformed?

3. Skills – Lack of skills inside the organization is one of the top challenges. IBM Garage Methodology has been helping many of our clients navigate skills gaps and solve significant problems using their data, new technologies, and existing ecosystems.

Companies that can overcome adoption and deployment barriers and tap AI and automation tools to tackle these challenges will be able to deliver value from AI.

Investing in AI

Businesses plan to invest in all areas of AI, from skills and workforce development to buying AI tools and embedding those into their business processes, creating agile learning systems that will build applications more efficiently and effectively.

Over the next 24 months, most AI investments will continue to focus on key capabilities that define AI for business — automating IT and processes, building trust in AI outcomes, and understanding the language of business.

In our previous CDO/CTO Summit, “Leadership During Challenging Times,” I shared how enterprises are becoming more intelligently automated, data-driven and predictive, as well as risk-aware and secure. Leaders are designing organizations for agility and speed by infusing AI across the foundational business functions: customer care, business operations, the employee experience, financial operations and, of course, IT operations. I believe these investments will continue to accelerate rapidly as customers look for new, innovative ways to drive their digital transformations by using hybrid cloud and AI.

Source: ibm.com

Saturday, 30 April 2022

How a data fabric overcomes data sprawl to reduce time to insights


Data agility, the ability to store and access your data from wherever makes the most sense, has become a priority for enterprises in an increasingly distributed and complex environment. The time required to discover critical data assets, request access to them and finally use them to drive decision making can have a major impact on an organization’s bottom line. To reduce delays, human errors and overall costs, data and IT leaders need to look beyond traditional data best practices and shift toward modern data management agility solutions that are powered by AI. That’s where the data fabric comes in.


A data fabric can simplify data access in an organization to facilitate self-service data consumption, while remaining agnostic to data environments, processes, utility and geography. By using metadata-enriched AI and a semantic knowledge graph for automated data enrichment, a data fabric continuously identifies and connects data from disparate data stores to discover relevant relationships between the available data points. Consequently, a data fabric self-manages and automates data discovery, governance and consumption, which enables enterprises to minimize their time to value. You can enhance this by appending master data management (MDM) and MLOps capabilities to the data fabric, which creates a true end-to-end data solution accessible by every division within your enterprise.
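
As a toy sketch of the knowledge-graph idea (the dataset names and business terms below are invented; a real data fabric assembles this automatically from harvested metadata), connecting assets through shared business terms is already enough to surface non-obvious relationships:

    # Toy semantic graph: datasets are related if they share business terms.
    # Requires: pip install networkx
    import networkx as nx

    catalog = {
        "sales_orders":     {"SKU", "Customer ID", "Order Date"},
        "inventory_levels": {"SKU", "Warehouse", "On-Hand Qty"},
        "supplier_leads":   {"SKU", "Supplier", "Lead Time"},
        "web_sessions":     {"Customer ID", "Page Views"},
    }

    graph = nx.Graph()
    for dataset, terms in catalog.items():
        for term in terms:
            graph.add_edge(dataset, term)  # bipartite: dataset <-> business term

    # Relationship discovery: datasets two hops away share at least one term.
    for other in nx.single_source_shortest_path_length(graph, "sales_orders", cutoff=2):
        if other in catalog and other != "sales_orders":
            shared = catalog["sales_orders"] & catalog[other]
            print(f"sales_orders <-> {other} via {sorted(shared)}")

Scale the same idea up to millions of assets with AI-extracted metadata and you get the automated relationship discovery described above.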

Data fabric in action: Retail supply chain example

To truly understand the data fabric’s value, let’s look at a retail supply chain use case where a data scientist wants to predict product back orders so that they can maintain optimal inventory levels and prevent customer churn.

Problem: Traditionally, developing a solid backorder forecast model that takes every factor into consideration would take anywhere from weeks to months as sales data, inventory or lead-time data and supplier data would all reside in disparate data warehouses. Obtaining access to each data warehouse and subsequently drawing relationships between the data would be a cumbersome process. Additionally, as each SKU is not represented uniformly across the data stores, it is imperative that the data scientist is able to create a golden record for each item to avoid data duplication and misrepresentation.

Solution: A data fabric introduces significant efficiencies into the backorder forecast model development process by seamlessly connecting all data stores within the organization, whether they are on-premises or in the cloud. Its self-service data catalog auto-classifies data, associates metadata with business terms and serves as the only governed data resource the data scientist needs to create the model. Not only can the data scientist use the catalog to quickly discover necessary data assets, but the semantic knowledge graph within the data fabric makes relationship discovery between assets easier and more efficient.

The data fabric allows for a unified and centralized way to create and enforce data policies and rules, which ensures that the data scientist only accesses assets that are relevant to their job. This removes the need for the data scientist to request access from a data owner. Additionally, the data privacy capability of a data fabric ensures that appropriate privacy and masking controls are applied to data used by the data scientist. You can use the data fabric’s MDM capabilities to generate golden records that ensure product data consistency across the various data sources and enable a smoother experience when integrating data assets for analysis. By exporting an enriched integrated dataset to a notebook or AutoML tool, data scientists can spend less time wrangling data and more time optimizing their machine learning model. This prediction model could then easily be added back to the catalog (along with the model’s training and test data, to be tracked through the ML lifecycle) and monitored.
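
To make the golden-record step concrete, here is a minimal sketch of the matching logic; the records and normalization rule are invented, and production MDM uses far richer match-and-survivorship rules:

    # Toy master-data matching: collapse SKU variants into one golden record.
    import re
    from collections import defaultdict

    records = [
        {"source": "sales",     "sku": "AB-1001", "desc": "Widget, blue"},
        {"source": "inventory", "sku": "ab1001",  "desc": "BLUE WIDGET"},
        {"source": "supplier",  "sku": "AB 1001", "desc": "Widget (blue)"},
        {"source": "sales",     "sku": "XZ-2002", "desc": "Gadget, red"},
    ]

    def match_key(sku):
        # Normalize away case, spaces and punctuation so variants collide.
        return re.sub(r"[^A-Z0-9]", "", sku.upper())

    clusters = defaultdict(list)
    for record in records:
        clusters[match_key(record["sku"])].append(record)

    # Survivorship: here we simply prefer the sales system's description.
    for key, members in clusters.items():
        best = next((m for m in members if m["source"] == "sales"), members[0])
        print(f"golden record {key}: desc={best['desc']!r}, "
              f"merged from {len(members)} source record(s)")

The point of the exercise is that every downstream consumer, including the forecast model, sees one consistent SKU instead of three conflicting variants.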

How does a data fabric impact the bottom line?

With the newly implemented backorder forecast model built upon a data fabric architecture, the data scientist has a more accurate view of inventory-level trends over time and predictions for the future. Supply chain analysts can use this information to prevent out-of-stocks, which increases overall revenue and improves customer loyalty. Ultimately, the data fabric architecture can significantly reduce time to insights by unifying fragmented data on a single platform in a governed manner, in any industry, not just retail or the supply chain space.

Source: ibm.com

Thursday, 28 April 2022

Don’t underestimate the cloud’s full value


As the adoption of cloud computing grows, enterprises see the cloud as the path to modernization. But there is quite a bit of confusion about the actual value of the cloud, leading to suboptimal modernization journeys. Most people see the cloud as a location for their computing needs. While cloud service providers (CSPs) are an integral part of the modernization journey, the cloud journey doesn’t stop with picking a CSP. A more holistic understanding of the cloud will help reduce risks and help your organization maximize ROI.

Before joining IBM Consulting, I was an industry analyst focused on cloud computing. As an analyst, I regularly spoke to various stakeholders about their respective organizations’ modernization journeys. While I learned about many success stories, I also heard about challenges during suboptimal modernization efforts. Some organizations treated the cloud as a destination rather than exploiting its fullest potential. Others couldn’t move to the cloud due to data issues, a lack of local cloud regions, or their need to adopt IoT/edge computing. A successful cloud journey requires understanding the cloud’s full value and using a proper framework for adoption.


While cloud service providers transform the economic model for infrastructure, the most significant advantage offered by the cloud is the programmability of infrastructure. While many enterprises associate programmability with self-service and on-demand resource provisioning and scalability, the real value goes much further. Unless you understand this advantage, any cloud adoption as a part of the modernization journey will be suboptimal.

A programmable infrastructure provides:

◉ Self-service provisioning and scaling of resources (the usual suspects)

◉ Programmatic hooks for application dependencies, helping modernize the applications with ease

◉ A programmatic infrastructure that transforms infrastructure operations into code, leading to large-scale automation that further optimizes speed and ROI.

Let’s unpack that third point a bit further. Modern enterprises are using programmatic infrastructure with code-driven operations, DevOps, modern architectures like microservices and APIs to create an assembly line for value creation. Such an approach decouples organizations from undifferentiated heavy lifting and helps them focus on the core customer value. With modern technologies like the cloud and a reliable partner to streamline the value creation process in an assembly line approach, enterprises can maximize the benefits of their modernization journey.

By treating Infrastructure as Code (IaC), you can achieve higher levels of automation. This, combined with the elastic infrastructure underneath, allows organizations to achieve the global scale they need to do business today. The code-driven operational process on top of a programmable infrastructure offers more value than the underlying infrastructure itself. IaC-driven automation reduces cost, increases speed, and dramatically removes operational problems by reducing human-induced errors.
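
As a small taste of what treating operations as code means in practice, the sketch below uses the AWS SDK for Python; the region, AMI ID and tag values are placeholders, and equivalent SDKs exist for every major CSP:

    # Infrastructure as Code, minimally: provision and tag compute in code,
    # so the same operation is repeatable, reviewable and automatable.
    # Requires: pip install boto3 (and configured AWS credentials)
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "owner", "Value": "payments-team"},
                     {"Key": "env",   "Value": "dev"}],
        }],
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("provisioned", instance_id)

    # Because the step is code, it can live in version control, run in CI,
    # and be torn down just as deterministically:
    ec2.terminate_instances(InstanceIds=[instance_id])

Declarative tools such as Terraform or AWS CloudFormation push the same idea further by describing the desired end state rather than the individual steps.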

Enterprises today have their applications deployed on cloud service providers. Some applications are nearing the end of life or have strict regulatory requirements to stay on-premises. Edge computing and IoT are fast gaining adoption among many enterprises. These trends shift our thinking about computing from a localized paradigm like CSPs and data centers to a more fluid paradigm of computing everywhere.

Everything from infrastructure to DevOps to applications is driven by automation, which allows organizations to scale efficiently. When this is married to the concept of a cloud fabric that spans public cloud providers, data centers and edge, organizations can gain value at scale without worrying about how to manage different resources in different locations. This hybrid approach to the cloud can deliver value at the speed of a modern business.

Source: ibm.com