Friday 29 January 2021

Busting the biggest myths about modernization


There are a number of good reasons for your organization to modernize its applications and architecture. You could be seeking to lower costs across development, maintenance and operations. Perhaps you want to improve user experiences or react more quickly to new business requirements and opportunities. Whatever the reason, you’re likely feeling overwhelmed by the sheer number of factors involved—and uncertain about the best way forward.

You’re not the first to tread this path and can refer back to what’s worked—and hasn’t worked—for other businesses. All that said, you’ve probably heard a few common misconceptions about modernization. Here are four big ones:

1. Moving everything off of the IBM Z® platform will reduce costs and save millions of dollars every year.

2. Moving read-only type applications (those that read data but do not update data) off of the IBM Z platform will always reduce costs.

3. Modernization means re-writing the entire application.

4. Application agility is only possible in the cloud.

As the buzz around cloud computing amplified a few years ago, many customers began moving some on-premises workloads, and creating new ones, in the cloud. But upon migrating the initial, “easy” 20% of their workloads, they stumbled at the highly complex 80% that remained—with more stringent requirements for performance, security and data consistency. These operations are best performed on the IBM Z platform. So, what does that mean for a modernization strategy?

Analysts and industry leaders have agreed on the answer: hybrid cloud. This framework has become the new normal across enterprise computing and can be achieved with a well-architected combination of cloud and the IBM Z platform.

So, let’s return to that first misconception. What would modernization look like if you decided to move everything to the cloud?

Breaking down operating costs: Best-case scenario

We’ve seen new executives try to make a splash when joining a company. For example, an executive promises to cut IT costs in half by creating a state-of-the-art reservation system and getting their company off of the IBM Z platform.

According to their estimation, all they need is five years and $100 million. But their accountants do the math and realize that it would take at least 15 years to break even and begin realizing a return on their investment.

Let’s break that down. For the sake of this example, let’s say their annual operating cost for the IBM Z architecture is $20 million, including hardware, software, database, networking, and operational expenses. On the other hand, the company expects that a new, state-of-the-art cloud-based system would cost $10 million a year.

As they spend $20 million annually to develop their new system, they will also spend $20 million a year to run their existing system—so, after five years of development, the business has spent $200 million on their reservation systems. Then, after a decade of using the new cloud-based system, costing $10 million annually ($100 million over 10 years), they can expect to have incurred a total of $300 million in expenses.

If instead of pouring resources, money, and time into this new system, they’d remained on the IBM Z platform for those 15 years, their expense would equal $300 million as well.
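To make the break-even arithmetic concrete, here is a minimal sketch (using only the hypothetical figures above) that tallies cumulative spend for both paths:

```python
# Cumulative cost comparison for the hypothetical example above:
# $20M/year to keep running IBM Z, plus a 5-year, $100M rebuild,
# after which the new cloud system costs $10M/year to operate.

def migrate_cost(years, dev_years=5, dev_total=100, z_run=20, cloud_run=10):
    """Cumulative spend ($M) if the company rebuilds and migrates."""
    total = 0
    for y in range(1, years + 1):
        if y <= dev_years:
            total += dev_total / dev_years + z_run   # build the new system + run the old one
        else:
            total += cloud_run                       # run the new cloud system
    return total

def stay_cost(years, z_run=20):
    """Cumulative spend ($M) if the company simply stays on IBM Z."""
    return z_run * years

for year in (5, 10, 15):
    print(year, migrate_cost(year), stay_cost(year))
# Year 15 is the first point where both paths reach $300M -- the break-even point.
```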

So, while this new executive certainly made a splash, it probably wasn’t the kind of splash they hoped for. Instead of optimizing what already worked, they doubled their expenses for five years, gave up the opportunity to develop new functionality, and had to re-write and re-platform their applications and data for the cloud-based system. Beyond those initial pains, they now have a larger, more complicated environment to manage and debug. Instead of a single cluster environment on IBM Z, they’re now working with dozens of application and database clusters.

And this is the best-case scenario: if development drags out past the expected five years, or the new platform doesn’t run as cheaply as expected, the break-even point slips decades beyond the original estimate.

Breaking down operating costs: Worst-case scenario

Now, let’s talk about a worst-case scenario, where a newly developed solution actually costs more to run wholly on the cloud than it did on the IBM Z platform.

First, Company A wrote a brand-new cloud-native reservation platform costing upwards of $100 million to develop. After merging with Company B, which used an IBM Z-based reservation system, they had to determine which system to keep.

They did an apples-to-apples comparison of the cost per bookable item (such as an airline flight, hotel room or train seat) on both platforms. The result? With its new cloud-based reservation system, Company A managed only one-third as many bookable items as Company B, yet its operating expenses were 3x higher.

In other words, their newly fashioned architecture was 9x more costly than the IBM Z platform. It won’t require any mental gymnastics to figure out which system they decided to keep.
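The 9x figure follows directly from the two ratios in the comparison; a quick sketch of the arithmetic:

```python
# Relative cost per bookable item, using the rough figures in the example above:
# Company A's cloud system handled 1/3 of the items at 3x the operating expense.
items_ratio = 1 / 3       # Company A bookable items relative to Company B
expense_ratio = 3         # Company A operating expense relative to Company B
cost_per_item_ratio = expense_ratio / items_ratio
print(cost_per_item_ratio)  # 9.0 -> roughly 9x the cost per bookable item
```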

A second customer example illustrates that cost is not the only factor to consider.

A financial industry customer developed a new cloud-native system for real-time transaction processing, but once deployed into production, the system struggled to process larger workload volumes, maintain reliability and meet other SLAs. In the end, the cloud-based system was replaced by a z/TPF system to process that workload.

The key factors in both customer examples are the fundamental differences in architecture. The winning solution was z/TPF, with applications and data co-located on an OS specifically designed for high-volume, write-intensive workloads, all leveraging IBM Z hardware. The cloud-based systems, with their workloads split over dozens of server clusters running on general-purpose operating systems on commodity hardware, created several issues. In one case, the result was nearly an order of magnitude more costly, and in the other, the system struggled with latency, performance and database contention—all factors that impede workload scaling—and made it nearly impossible to maintain SLAs.

Developing a hybrid model

The point is not to run all your workloads on the IBM Z platform, just as you shouldn’t run all workloads on cloud. A hybrid model, making efficient use of the strengths of all available systems, is the key to achieving the lowest possible cost per transaction and maintaining SLAs. Your aim should be to progressively modernize key assets, while connecting your components through open standards, leveraging micro and macro services and building out an event-driven hybrid cloud architecture.

In a future post, I’ll discuss the distribution of workloads across the hybrid cloud, and dig into what, where and why for designing an architecture best positioned for many more years of success. I also hope to tackle the remaining misconceptions, explain how to modernize existing z/TPF assets and how to leverage modern DevOps principles in your z/TPF development process.

Source: ibm.com

Tuesday 26 January 2021

Quantum-safe cryptography: What it means for your data in the cloud


Quantum computing holds the promise of delivering new insights that could lead to medical breakthroughs and scientific discoveries across a number of disciplines. It could also become a double-edged sword: quantum computing may create new exposures, such as the ability to quickly solve the difficult math problems that are the basis of some forms of encryption. But while large-scale, fault-tolerant quantum computers are likely years if not decades away, organizations that rely on cloud technology will want cloud providers to take steps now to help ensure they can stay ahead of these future threats. IBM Research scientists and IBM Cloud developers are working at the forefront to develop new methods to stay ahead of malicious actors.

Hillery Hunter, an IBM Fellow, Vice President and CTO of IBM Cloud, explains how IBM is bringing together its expertise in cloud and quantum computing with decades of cryptographic research to ensure that the IBM Cloud is providing advanced security for organizations as powerful quantum computers become a reality.

It’s probably best to start this conversation with a quick overview of IBM’s history in cloud and quantum computing.

IBM offers one of the few clouds that provide access to real quantum hardware and simulators. Our quantum devices are accessed through the IBM Q Experience platform, which offers a virtual interface for coding a real quantum computer through the cloud, and Qiskit, our open source quantum software development kit. We first made these quantum computers available in 2016. As of today, users have executed more than 30 million experiments across our hardware and simulators on the quantum cloud platform and published over 200 third-party research papers.
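For readers who have never tried the platform, here is a minimal sketch of what a first experiment looks like in Qiskit, using the simulator-oriented API that was current in early 2021 (newer Qiskit releases restructure these imports):

```python
# A minimal Qiskit example: build a two-qubit Bell state and sample it on the
# local Aer simulator (API as of early 2021; newer Qiskit versions replace
# execute()/Aer with qiskit_aer plus transpile/run).
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)            # put qubit 0 into superposition
qc.cx(0, 1)        # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)      # expect roughly half '00' and half '11'
```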

As a pioneer in quantum computing, we are taking seriously both the exciting possibilities and the potential consequences of the technology. This includes taking steps now to help businesses keep their data secure in the cloud and on premises.

How does security play into this? Why is it important to have a cloud that has security for quantum-based threats?

An organization’s data is one of its most valuable assets, and studies show that a data breach can cost $3.92 million on average. We recognized early that quantum computing could pose new cybersecurity challenges for data in the future. Specifically, the encryption methods used today to protect data in motion and at rest could be compromised by large quantum computers with millions of fault-tolerant quantum bits, or qubits. For perspective, the largest IBM quantum system today has 53 qubits.

To prepare for this eventuality, IBM researchers are developing a lattice cryptography suite called CRYSTALS. The algorithms in that suite are based on mathematical problems that have been studied since the 1980s and have not yet succumbed to any algorithmic attacks (that have been made public), either through classical or quantum computing. We’re working on this with academic and commercial partners.

These advancements build on the leading position of IBM in quantum computing, as well as decades of research in cryptography to protect data at rest and in motion.

How is IBM preparing its cloud for the post-quantum world?

We can advise clients today on quantum security and we’ll start unveiling quantum-safe cryptography services on our public cloud next year. This is designed to better help organizations keep their data secured while it is in transit within IBM Cloud. To accomplish this, we are enhancing TLS and SSL implementations in IBM Cloud services by using algorithms designed to be quantum-safe, and leveraging open standards and open-source technology. IBM is also evaluating how we can provide services that include quantum-safe digital signatures, which are increasingly expected in e-commerce.

While that work is underway, IBM Security is also offering a quantum risk assessment to help businesses discern how their technology may fare against quantum-era threats and identify steps they can take today to prepare.

IBM also contributed CRYSTALS to the open source community. How will this advance cryptography?

Open-source technology is core to the IBM Cloud strategy. That’s why IBM developers and researchers have long been working with the open-source community to develop the technology that’s needed to keep data secured in the cloud.


It will take a community effort to advance quantum-safe cryptography and we believe that, as an industry, quantum-safe algorithms must be tested, interoperable and easily consumable in common security standards. IBM Research has joined OpenQuantumSafe.org and is contributing CRYSTALS to further develop open standards implementations of our cryptographic algorithms. We have also submitted these algorithms to the National Institute of Standards and Technology for standardization.
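As a rough illustration of what consuming these open-source implementations can look like, here is a minimal sketch assuming the liboqs-python bindings (the oqs package) from OpenQuantumSafe.org are installed; algorithm identifiers and method names can vary between releases:

```python
# Sketch of a quantum-safe key exchange using the Open Quantum Safe
# liboqs-python bindings (assumes `pip install liboqs-python`; the algorithm
# name and method signatures may differ between releases).
import oqs

alg = "Kyber512"  # CRYSTALS-Kyber, one of the lattice-based KEMs

with oqs.KeyEncapsulation(alg) as server, oqs.KeyEncapsulation(alg) as client:
    public_key = server.generate_keypair()            # server publishes its public key
    ciphertext, secret_client = client.encap_secret(public_key)
    secret_server = server.decap_secret(ciphertext)   # both sides derive the same secret
    assert secret_client == secret_server             # shared secret for TLS-style use
```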

Some organizations might not worry about these security risks until quantum computing is widespread. Why should they be acting now?

Although large-scale quantum computers are not yet commercially available, tackling quantum cybersecurity issues now has significant advantages. Theoretically, data can be harvested and stored today and potentially decrypted in the future with a fault-tolerant quantum computer. While the industry is still finalizing quantum-safe cryptography standards, businesses and other organizations need to get a head start.

Source: ibm.com

Saturday 23 January 2021

Why applications should drive your IT investments


As business and IT leaders strive to optimize resources and reduce costs, it can be tempting to choose a single standard server to deploy for their entire organization. Commodity servers are perceived as low-cost and effective infrastructure, so it seems like a smart investment.

However, giving in to this temptation can yield the opposite of the intended results.

Not only can it be expensive and time-consuming to migrate applications to a single standard server, but doing so can inadvertently introduce new issues, as different applications can have different infrastructure requirements. This is why it’s important to analyze an application’s total cost of ownership (TCO) before making this type of investment.

When all application requirements and data center costs are considered, commodity scale-out servers can actually drive up total IT spend and fail to deliver ROI. In many such analyses, the IBM Z® platform emerges as the superior infrastructure option for a wide range of applications.

Let’s look at offload assessments performed by IBM IT Economics that show how IBM Z can lower your IT infrastructure costs.

How the evolution of application lifecycle management impacts TCO

IT applications – and the business processes implemented within them – represent the real assets of modern organizations. This is why organizations still run applications that were originally deployed more than half a century ago.

But application lifecycle management (ALM) has changed over the years, and organizations have adopted new trends while maintaining already-deployed applications. This has created different layers of technologies, and application integration has become a crucial challenge for every organization.

Application development and maintenance require significant resources. After a few years, the total cost of an application becomes much larger than the cost of the infrastructure it runs on.

How IBM Z can lower TCO and improve ROI

Many companies find that continuing to run their applications on IBM Z yields a lower TCO and higher ROI than new ALM techniques that promise a lower TCO for applications running on other systems.

Here’s an example: At the request of clients, IBM IT Economics performed TCO offload assessments involving rewrites of up to 10 million lines of application code. The assessment results included:

◉ Clients had—on average—a 2x lower annual TCO keeping their applications on IBM Z versus moving to an x86-based infrastructure. The difference was mainly due to clients underestimating application migration costs, the cost of maintaining parallel environments during migration and the size of the equivalent x86 infrastructure required once fully deployed.

◉ Offload projects ran beyond their planned completion dates and budgets and fell short of their planned scope due to the complexity of migrating applications. Even in cases considered technically successful, analysis found that the projects either faced an ROI break-even point of 20 or more years or never broke even at all.

The risks of long project duration and high cost tend to be why many companies avoid application migration and ultimately keep their applications running on IBM Z. But many also find cost savings through optimization of their IBM Z environment or by exploring new service provisioning models such as shared data centers or IBM Z cloud offerings.

Several characteristics of IBM z15™ enable solutions to meet the most challenging business requirements while minimizing IT costs.

◉ Seven-nines availability for efficiently delivering business-critical solutions

◉ Data security and privacy through pervasive encryption and Data Privacy Passports

◉ Scalability with up to 190 cores per system and often with many systems clustered in Parallel Sysplex

These advancements, combined with investments made in IBM Z applications running in thousands of enterprises worldwide, have convinced many IBM Z clients to further enhance and expand their mainframe environments. Instead of replatforming their applications to another hardware architecture, these clients build new applications and modules on IBM Z that extend the capabilities of their original applications.


Below are some additional examples where IBM Z provides a cost-effective TCO case.

Java® applications that exploit IBM Z specialty engines

◉ Workloads can leverage IBM Z Integrated Information Processors (zIIPs) and the integration of JVM with IBM z/OS® to minimize general processor compute charges and software license charges by offloading the work to zIIPs.

◉ Workloads can also leverage Integrated Facility for Linux® (IFLs) on IBM Z. Because Java workloads can be densely consolidated onto IBM Z, requiring fewer processor cores than on x86 servers, middleware software costs for both the z/OS and Linux on IBM Z environments can be significantly reduced.

◉ Workload consolidation analysis from 17 IBM IT Economics assessments found that the same Java workloads on IBM LinuxONE™ or IBM Z provided—on average—a 54% lower TCO over five years than on compared x86 servers.

Competitive database solutions

◉ The number of software licenses can be reduced by as much as 78%.

◉ Sizing analysis from IBM IT Economics assessments of clients with business-critical workloads shows that most x86 Linux workloads consolidate at ratios ranging from 10 to 32.5 distributed cores per IFL, an average of 17x fewer cores.

Data warehouse and transactional business intelligence solutions

◉ If the master copy of the data already resides on the IBM Z system in IBM z/OS and in Linux on Z partitions for both structured data and big data repositories, these applications are already co-located with the data, eliminating the need to support off-platform environments.

◉ While TCO differences vary for these solutions depending on each of the client’s requirements, workload footprints are centralized on a single platform, bringing infrastructure savings and efficiencies with reduced latency.

The Benefits of IBM Z

IBM Z enables legacy and open environments to coexist on the same hardware platform so that businesses can streamline operations and optimize costs. IBM Z can provide a lower TCO compared to alternative scale-out solutions while enabling applications to exploit the latest development and delivery approaches on an enterprise-proven infrastructure.

Source: ibm.com

Tuesday 19 January 2021

Open source: A catalyst for modernization & innovation


Any true progress requires real collaboration. That adage rings true in most disciplines of science and business, and it also defines the use of open source for IT development and innovation.

In our budding digital era, open-source platforms allow for the evolutionary, optimal and secure integration of new technologies and applications into existing infrastructure. Open-source collaboration has many proven advantages, including greater engagement by developers, which leads to more productivity, flexibility and cutting-edge innovation at lower costs. It is all about community. A diverse developer ecosystem, combined with the tenets of open source (open licenses, open governance and open standards), is helping guide the quality, performance and modernization of IT.

Historically, open-source collaboration has led to scalable and more secure applications for computing, especially via Linux (still the choice of most code development), Java, Node.js and other enterprise platforms. IBM has been a trailblazer when it comes to the use and encouragement of open source. Over the years, IBM has helped create and lead the Linux Foundation, Apache Foundation, Eclipse Foundation, Cloud Foundry, Docker (with Google), OpenStack (infrastructure as a service) and OpenWhisk (a serverless platform), all of which have served as vibrant catalysts for the developer community.

Open-source platforms are also impacting emerging tech practice areas of artificial intelligence, blockchain, the Internet of Things, deep learning and quantum computing. Looking forward to “Industry 4.0”, IBM has become a force in directing the Cloud Native Computing Foundation (Kubernetes), Hyperledger (blockchain), CODAIT (the Center for Open Source, Data and AI Technology), MAX (Model Asset Exchange for deep learning), MQTT (leading protocol for connecting IoT devices), and Qiskit, an open-source quantum computing framework. The recent acquisition of Red Hat will continue to elevate IBM as a leader in existing and future open-source communities, especially in the areas of microservices and automation. 

Open source as a catalyst for modernization & innovation 


For business, open source is a catalyst for orchestration in an environment where data and applications are often spread across multiple locations, including cloud, multicloud, hybrid cloud and mainframes. Agile open-source platform co-creation enables legacy and new systems to support digital business interfaces that span the entire IT landscape, wherever data and applications may reside. In a nutshell, open source brings the best of on-premises systems to your digital enterprise as a service in the cloud. 

Open-source platforms are made to be inclusive for communities that need shared tools. An open toolbox can act as an adaptive enabler, allowing IT teams to flourish. An open-source toolbox is fundamental for programmers and IT managers and includes a variety of utilities: technology refreshes, compression algorithms, cryptography libraries, development tools and languages.

Each year, these toolboxes grow in applications and capabilities, stimulated by an active and transparent developer community. This transparency also helps cybersecurity, as co-creation in code development can help catch and fix bugs throughout the collaborative process. Developers can analyze every bit of source code that may pose a risk and remove it. A whole community of eyes watches for security threats and can offer effective cybersecurity tools.

Over the years, open source has led to seismic improvements in IT planning and decision making. Open source will continue to play an integral role in the future as IT infrastructures are modernized by tomorrow’s investments. Open source is a contributing catalyst for innovation as we address the challenges of the digital era.  

IBM recently hosted an open-source webinar featuring industry experts Bola Rotibi, Research Director at CCS Insight, and Elizabeth Stahl, Distinguished Engineer at IBM Garage. The webinar explores the technologies, processes, and C-Suite budget decisions that are required to build and maintain digital applications, and how they are accelerated through open-source platforms. The webinar also reveals how open-source platforms serve as a value differentiator because they allow for user feedback and rapid integration of improvements. 

Working together, IT teams can deliver less risky products that can often be a competitive advantage to a business or organization. Open-source collaboration removes barriers to innovation, incorporates skill sets and instills trust.

Source: ibm.com

Saturday 16 January 2021

IBM’s Squawk Bot AI helps make sense of financial data flood


Analysts’ reports, corporate earnings, stock prices, interest rates. Financial data isn’t an easy read. And there’s a lot of it.

Typically, teams of human experts go through and make sense of financial data. But as the volume of sources keeps surging, it’s becoming increasingly difficult for any human to read, absorb, understand, correlate, and act on all the available information.

We want to help.

In our recent work, “The Squawk Bot”: Joint Learning of Time Series and Text Data Modalities for Automated Financial Information Filtering, we detail an AI and machine learning mechanism that helps to correlate a large body of text with numerical data series describing financial performance as it evolves over time. Presented at the 2021 International Joint Conferences on Artificial Intelligence Organization (IJCAI), our deep learning-based system pulls, from vast amounts of textual data, the potentially relevant textual descriptions that explain the performance of a financial metric of interest — without the need for human experts or labelled data.

Dubbed The Squawk Bot, the technology falls within the sub-field of AI and machine learning known as multimodal learning. This type of learning attempts to combine and model the data obtained from multiple data sources, potentially represented in different data types and forms.

While multimodal learning has been extensively used for video captioning, audio transcription, and other applications that combine video, images, and audio with text, there have been far fewer studies on linking text and numerical time series data.

It’s this gap that has sparked our interest, along with real-world discussions on popular financial commentary shows, like CNBC’s “The Squawk Box” and other financial news programs. There, financial and business experts attempt to explain the performance of a financial asset — say, a stock price — or some activity in an economic sector through commentary and information from a variety of sources based on their domain expertise.

The Squawk Bot automatically filters large amounts of textual information and extracts specific bits that might be related to the performance of an entity of interest as it evolves over time. The AI does so without the strict requirement of pre-aligning the text and time series data, or even explicitly labeling the data. The model automatically finds these cross-modality correlations and ranks their importance, providing user guidance for understanding the results.
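The paper describes the full architecture; purely as a conceptual sketch (not the actual Squawk Bot model), the snippet below shows the general idea of scoring candidate text snippets against a time series with a dot-product relevance score, using hypothetical toy data:

```python
# Conceptual sketch (not the paper's architecture): score a set of candidate
# news snippets against a price series, so the highest-scoring snippets can be
# surfaced as potentially relevant to that series.
import torch
import torch.nn as nn

class CrossModalRanker(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.text_emb = nn.EmbeddingBag(vocab_size, dim)   # bag-of-words text encoder
        self.ts_enc = nn.GRU(1, dim, batch_first=True)     # time series encoder

    def forward(self, token_ids, offsets, series):
        # series: (1, T, 1) price history; token_ids/offsets: a batch of documents
        docs = self.text_emb(token_ids, offsets)           # (num_docs, dim)
        _, h = self.ts_enc(series)                         # (1, 1, dim)
        scores = docs @ h.squeeze(0).squeeze(0)            # dot-product relevance scores
        return torch.softmax(scores, dim=0)                # ranking over documents

# Toy usage with made-up data: 3 documents, 30 days of prices.
model = CrossModalRanker(vocab_size=100)
token_ids = torch.randint(0, 100, (12,))
offsets = torch.tensor([0, 4, 8])          # documents start at positions 0, 4, 8
series = torch.randn(1, 30, 1)
print(model(token_ids, offsets, series))   # relative relevance of each document
```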

Initial evaluation of our mechanism has shown promising results on large-scale financial news and stock prices data spanning several years. Our model automatically retrieved more than 80 percent of the relevant textual information (such as news articles) related to stock prices of interest that we had selected for our experiments. The model did so without prior knowledge of any human expert, without any application domain expertise, or the usage of any specific keywords or phrases.

The research is still ongoing, and we are now exploring various approaches to reduce the amount of data required by the model through data augmentation and few-shot learning. We are also looking into enriching the model with domain knowledge to make the retrieval of relevant content more targeted, to further improve the explainability of the learning process.

The next step is to broaden the applications of our model in the wider world of investment management, particularly for the analysis of financial text data and insights on investment decisions. We are also investigating how the model could be used as a “noise reduction” system — meaning getting the AI to retrieve only the most relevant text for a given financial asset so that the text could be used for extraction of trading signals.

Another interesting application of the Squawk Bot would be in marketing campaigns, in particular for the discovery of the content that would best resonate with a given marketing performance metric. Soon, financial data won’t be that difficult to make sense of — for anyone.

Source: ibm.com

Tuesday 12 January 2021

Light and in-memory computing help AI achieve ultra-low latency


Ever noticed that annoying lag that sometimes happens when you stream, say, your favorite football game over the internet?

Called latency, this brief delay between a camera capturing an event and the event being shown to viewers is surely annoying during the decisive goal at a World Cup final. But it could be deadly for a passenger of a self-driving car that detects an object on the road ahead and sends images to the cloud for processing. Or a medical application evaluating brain scans after a hemorrhage.

Our team, together with scientists from the universities of Oxford, Muenster and Exeter as well as from IBM Research, has developed a way to dramatically reduce latency in AI systems. We’ve done it using photonic integrated circuits, which use light instead of electricity for computing. In a recent Nature paper, we detail our combination of photonic processing with what’s known as the non-von Neumann, in-memory computing paradigm – demonstrating a photonic tensor core that can perform computations with unprecedented, ultra-low latency and compute density.

Our tensor core runs computations at a processing speed higher than ever before. It performs key computational primitives associated with AI models such as deep neural networks for computer vision in less than a microsecond, with remarkable areal and energy efficiency.

When light plays with memory

While scientists first started tinkering with photonic processors back in the 1950s, with the first laser built in 1960, in-memory computing (IMC) is a more recent kid on the block. IMC is an emerging non-von Neumann compute paradigm where memory devices, organized in a computational memory unit, are used for both processing and memory. This way, the physical attributes of the memory devices are used to compute in place.

By removing the need to shuttle data back and forth between memory and processing units, IMC, even with conventional electronic memory devices, could bring significant latency gains. The IBM AI Hardware Center, a collaborative research hub in Albany, NY, is doing a lot of research in this field.

However, the combination of photonics with IMC could further address the latency issue – so efficiently that photonic in-memory computing could soon play a key role in latency-critical AI applications. Together with in-memory computing, photonic processing overcomes the seemingly insurmountable barrier to the bandwidth of conventional AI computing systems based on electronic processors.

This barrier is due to the physical limits of electronic processors, as the number of GPUs one can pack into a computer or a self-driving car isn’t endless. This challenge has recently prompted researchers to turn to photonics for latency-critical applications. An integrated photonic processor has much higher data modulation speeds than an electronic one. It can also run parallel operations in a single physical core using what’s called ‘wavelength division multiplexing’ (WDM) – technology that multiplexes a number of optical carrier signals onto a single optical fiber by using different wavelengths of laser light. This way, it provides an additional scaling dimension through the use of the frequency space. Essentially, we can compute using different wavelengths, or colors, simultaneously.

In 2015, researchers from Oxford University, the University of Muenster and the University of Exeter developed a photonic phase change memory device that could be written to and read from optically. Then in 2018, Harish Bhaskaran at Oxford (who is also a former IBM researcher) and myself found a way to perform in-memory computing using photonic phase-change memory devices.

Together with Bhaskaran, Prof. Wolfram Pernice of Muenster University and Prof. David Wright from Exeter, we initiated a research program that culminated in the current work. I fondly remember some of the initial discussions with Prof. Pernice on building a photonic tensor core for convolution operations while walking the streets of Florence in early 2019. Bhaskaran’s and Pernice’s teams made significant experimental progress over the following months.

But there was a challenge – the availability of light sources for WDM. This is required for feeding in the input vectors to a tensor core. Luckily, the chip-scale frequency combs based on nonlinear optics developed by Prof. Tobias Kippenberg from the Swiss Federal Institute of Technology (EPFL) in Lausanne provided the critical breakthrough that overcame this issue.

Leaping into the future

Armed with these tools, we demonstrated a photonic tensor core that can perform a so-called convolution operation in a single time step. Convolution is a mathematical operation on two functions that outputs a third function expressing how the shape of one is changed by the other. An operation for a neural network usually involves simple addition or multiplication. One neural network can require billions of such operations to process one piece of data, for example an image. We use a measure called TOPS to assess the number of Operations Per Second, in Trillions, that a chip is able to process.

In our proof of concept, we obtained experimental data with matrices up to 9×4 in size, with a maximum of four input vectors per time step. We used non-volatile photonic memory devices based on phase change memory to store the convolution kernels on a chip and used photonic chip-based frequency combs to feed in the input encoded in multiple frequencies. Even with the tiny 9×4 matrix, by employing four multiplexed input vectors and a modulation speed of 14GHz, we obtained a whopping processing speed of two trillion MAC (multiply-accumulate) operations per second, or 2 TOPS. The result is impressive because the matrix is so tiny: while we are not performing many operations per step, we are doing them so fast that the TOPS figure is still very large.
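That 2 TOPS figure is straightforward to reproduce from the numbers quoted above:

```python
# Reproducing the throughput figure quoted above: a 9x4 matrix multiplied
# against 4 wavelength-multiplexed input vectors per time step at 14 GHz.
rows, cols = 9, 4
vectors_per_step = 4
modulation_hz = 14e9

macs_per_step = rows * cols * vectors_per_step     # 144 MAC operations per time step
macs_per_second = macs_per_step * modulation_hz    # ~2.0e12
print(macs_per_second / 1e12, "TMAC/s")            # ~2 trillion MACs/s, i.e. ~2 TOPS
```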

And this is just the beginning. We expect that with reasonable scaling assumptions, we can achieve an unprecedented PetaMAC (thousand trillion MAC operations) per second per mm2.  In comparison, the compute density associated with state-of-the-art AI processors is less than 1 TOPS/mm2, meaning less than a trillion operations per second per mm2.

Our work shows the enormous potential for photonic processing to accelerate certain types of computations such as convolutions. The challenge going forward is to string together these computational primitives and still achieve substantial end-to-end system-level performance. This is what we’re focused on now.

Source: ibm.com

Saturday 9 January 2021

Blockchain and sustainability through responsible sourcing


Mining for cobalt, an essential raw material for lithium-ion batteries, carries a high cost in human suffering. More than 60 percent of the world’s supply comes from the Democratic Republic of Congo (DRC), with about 45 percent coming from large-scale mining operations. The remaining 15 percent comes from small-scale mines in the DRC, where children and adults labor under harsh and dangerous conditions to extract ore by hand.

It’s those working conditions, along with environmental damage to areas mined, that have focused the world’s eyes on the high-profile automotive and electronics industries. Their products — electric vehicles (EVs), smartphones and laptops — depend on lithium-ion batteries, and thus on cobalt.

What’s driving sustainable sourcing initiatives

Companies that take sustainability and social justice seriously work to keep cobalt mined by hand out of their supply chains. Similar concerns hold true for other minerals that pose responsible sourcing risks, including lithium, nickel, copper and the “3TG” group (tin, tantalum, tungsten and gold), and for materials that create a hazard for the environment when disposed of incorrectly.


The drivers for responsible sourcing of raw materials are strong, ranging from corporate governance to consumer, shareholder and other stakeholder demands to scrutiny from governments, regulatory bodies, financial markets and NGOs. However, until recently, proving responsible sourcing to all these interested parties was an elusive goal, posing significant reputational, legal and commercial risks.

The benefits of blockchain for responsible sourcing networks

Today, the Responsible Sourcing Blockchain Network (RSBN), built on IBM Blockchain Platform and assured by RCS Global Group, is providing the transparency, trust and security that are needed to demonstrate responsible sourcing for cobalt. What’s more, the underlying infrastructure that we’ve built for RSBN can be used to jump-start any sustainable sourcing initiative.

The proven benefits of using blockchain technology for sustainable sourcing networks include:

◉ An immutable audit trail that documents proof of initial ethical production of a raw material and its maintenance at every transfer step from mine to end manufacturer

◉ Secure, tamper-evident storage of provenance information and certificates of responsible production

◉ The ability to share a proof of fact while protecting confidential or competitive information

◉ Decentralized control so no single entity can corrupt the process, promoting trust

◉ Lower costs through digitization of a paper process, potential reduction in audits and lower transaction costs

◉ Scalability to accommodate new participants and new industries

Blockchain and RSBN: How it works


Traditionally, miners, smelters, distributors and manufacturers rely on third-party audits to establish compliance with generally accepted industry standards. For RSBN, RCS Global Group provides this assurance through its mineral supply chain mapping and auditing practice, assessing network participants against standards and best practices set by the Organization for Economic Cooperation and Development (OECD) and the Responsible Minerals Initiative (RMI).

RSBN uses blockchain’s shared, distributed ledger to track production from mine to battery to end product, capturing information on the degree of responsible sourcing at each tier of the supply chain. Downstream companies can access the verified proof that they support and contribute to responsible sourcing practices, which they can then share with auditors, corporate governance bodies and even consumers.

At every step, participants control who is allowed to see what information. Once added to the ledger, certifications and other documents cannot be manipulated, changed or deleted. These properties make blockchain a trusted platform for sharing data across different companies while helping prevent fraud.
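As a toy illustration of why such a ledger is tamper-evident (this is plain Python, not the IBM Blockchain Platform API), consider a hash chain in which every entry commits to the one before it:

```python
# Toy illustration (not the IBM Blockchain Platform API) of a tamper-evident,
# hash-chained ledger: each entry commits to the previous one, so changing any
# earlier certificate invalidates every hash that follows it.
import hashlib, json

def add_entry(ledger, record):
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    ledger.append({"record": record, "prev": prev_hash,
                   "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(ledger):
    prev_hash = "0" * 64
    for entry in ledger:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_entry(ledger, {"step": "mine", "batch": "A-001", "audit": "OECD-aligned"})
add_entry(ledger, {"step": "smelter", "batch": "A-001", "cert": "RMI"})
print(verify(ledger))                    # True
ledger[0]["record"]["audit"] = "none"    # attempt to rewrite history
print(verify(ledger))                    # False -- tampering is detectable
```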

Delivering business value across the supply chain

Networks such as RSBN encourage and enable collaboration between suppliers and customers across complex mineral and other conflict-sourced raw material supply chains, with business value accruing to all that participate. For example, here are benefits realized by RSBN participants.

Automotive manufacturers are gearing up to introduce many more electric vehicles (EVs) into the marketplace over 2021 and 2022, creating new demand for li-ion batteries and the cobalt they require. By engaging in responsible sourcing initiatives like RSBN, they can market their products as sustainably produced as well as sustainable on the road. Responsible cobalt sourcing also contributes to corporate citizenship efforts related to fighting poverty, supporting human rights and preventing environmental degradation.

Mines and smelters sit at the top of the supply chain, where responsible sourcing efforts begin and where the real challenges lie. Keeping hand-mined cobalt out of product batches requires process changes, financial investments and a commitment to fight corruption. By demonstrating their results through audits and certifications, these companies are positioned for favored status as suppliers to battery manufacturers.

Consumers have spoken. In a recent IBM study, 77 percent of consumers surveyed said that buying from sustainable or environmentally responsible brands is important. EVs, smartphones and laptops are high-visibility products whose value is closely tied to their rechargeable, long-lasting li-ion batteries. Being able to demonstrate responsible sourcing can help win customers, establish reputational value and prevent backlash such as legal action.

Get started quickly on your responsible sourcing solution

RSBN is a ready-to-join network built on IBM Blockchain Platform and assured by RCS Global. RSBN founding members include Ford Motor Company, Volkswagen Group, global battery manufacturer LG Chem and cobalt supplier Huayou Cobalt. Volvo Cars, Fiat Chrysler Automobiles and other companies that operate in “conflict sourced” minerals supply chains are also members.

Companies that want audited, documented confirmation that the raw materials they use meet responsible sourcing standards can join RSBN, either as an individual company or with their whole supply chain, and begin realizing value quickly. Alternatively, IBM Blockchain Services can leverage the assets and infrastructure that underpin RSBN to co-create a responsible sourcing network for other industries and raw materials. Talk to us today about how you can get started demonstrating your organization’s responsible sourcing compliance.

Source: ibm.com

Thursday 7 January 2021

2020 Innovation in Order Management Awards


2020 has been an extraordinary time to be a retailer. The rate of acceleration for innovation to match fast-changing consumer behaviors has been remarkable. While it can be daunting to try new things, retailers who adjusted, implementing new customer offerings and new order management and fulfillment solutions in the face of unique challenges month after month, have come out ahead.

To recognize the incredible work achieved in 2020, the IBM Sterling team is excited to present our virtual awards for “2020 Innovation in Order Management”:

Community Leader Award – JOANN


COVID-19 forced JOANN to close half of their stores in the U.S., but the items inside were still in high demand as eager shoppers worked to make masks or help provide personal protective equipment. JOANN leaned into the opportunity to help protect healthcare workers, first responders and communities, launching its “Make to Give” campaign to encourage mask-making at home.

JOANN also took the genuinely noble step of not furloughing any of its store associates, instead retraining them to pack and ship orders from closed stores. As a result, JOANN was able to expand ship-from-store (SFS) to its entire network almost overnight.

Faster Time to Delivery Award – Kroger


Kroger plans for Thanksgiving months in advance to ensure their shelves are stocked and turkeys are available. This year, the team at Kroger encouraged shoppers to plan ahead and think about other ways to shop to avoid stepping inside their stores. “We always tell our customers use our website, use our app. Plan in advance, use our pickup or delivery service,” said Rodney McMullen, CEO of the Kroger Company. In partnering with delivery apps such as Instacart and Ocado – who both offer same-day or next-day delivery options – Kroger is able to meet consumer expectations while still driving e-commerce business through its own websites.

Inventory Management Award – IKEA


Despite a bumpy year due to temporary store closures, IKEA’s profits rose 13% to $2.4 billion for the twelve months ending in August – with sales climbing in September and October. This is due in large part to IKEA’s inventory management strategy. The company avoids the waste and inefficiencies caused by bulk ordering items that don’t sell as expected by employing “minimum settings” – the lowest number of products that must be available before a new order is placed – and “maximum settings” – the highest number of products that can be ordered at once. The result? IKEA is able to maintain a larger inventory, which reduces the company’s shipping costs.
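A minimal sketch of that min/max replenishment rule, with hypothetical numbers rather than IKEA's actual parameters, might look like this:

```python
# Minimal sketch of the min/max replenishment rule described above
# (hypothetical numbers, not IKEA's actual parameters).
def reorder_quantity(on_hand, minimum, maximum):
    """Order only when stock falls to the minimum, and never above the maximum."""
    if on_hand > minimum:
        return 0                      # still enough on hand; no order placed
    return maximum - on_hand          # top back up to the maximum setting

print(reorder_quantity(on_hand=120, minimum=100, maximum=400))  # 0
print(reorder_quantity(on_hand=80,  minimum=100, maximum=400))  # 320
```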

Curbside Pickup Award – Best Buy


Many brands have started offering curbside pickup – in whatever way works best for the customer experience. One leading retailer has truly excelled at this order fulfillment option. As the busiest holiday shopping days quickly approach, Best Buy has rolled out a curbside pickup experience that is seamless and safe. You park in a designated area, use the button in your emailed or texted order confirmation to let them know you’ve arrived, then flash an ID and give them your order number when they bring your purchases to your car, and you’re done!

Winning with AI Award – ADIDAS


Adidas optimized its overall cost-to-serve with the help of AI. The team focused on omnichannel capabilities like ‘digital store availability’, where retail inventory is used to augment online availability, in combination with ‘ship-from-store’, which allows adidas to serve consumers faster than before by turning its stores into mini distribution centers. The results with the help of AI? Reduced shipping cost per order, increased net sales and full-price sell-through, and higher fulfillment reliability from managing capacity and load-balancing demand across the network.

Real-Time Inventory Insights Award – Party City


Gatherings are limited – or obsolete – this year due to the pandemic. However, Party City has managed to increase digital sales in Q3 by 36.0% as a result of implementing new order fulfillment capabilities such as buy online, pickup in-store (BOPIS), curbside pickup and delivery.


To successfully drive these fulfillment initiatives, Party City relies on real-time insights into its inventory. It is also one of the first retailers to introduce self-checkout through the Party City mobile app. This provides customers with an enhanced digital shopping experience, allowing them to complete their order transaction and pick up their items without ever contacting a store associate. The newly launched channels became critical to the company’s integrated omnichannel fulfillment strategy and, as the result of several major user experience improvements, helped convert customers at a rate 75% higher than last year.

Omnichannel Fulfillment Award – Ulta


Ulta was one of the first retailers in the U.S. to offer curbside pickup in a major way – at 350 locations in 38 states. And just recently, the beauty retailer announced a strategic partnership with Target – adding another fulfillment channel to their model. The ability to scale their curbside and BOPIS business this year to all locations across the U.S. was a major undertaking. But the pay-off was big. Ulta saw a 200% rise in online sales during Q2.

Supply Chain Sustainability Award – Eileen Fisher


In a recently published Vogue interview, Eileen Fisher responded to the question of how the sustainable fashion brand is navigating through the pandemic by saying, “Rather than just pollute less or do less harm, we can actually kind of revive the earth through the process of making clothes.” And they did just that! With the increase in loungewear sales generated in early 2020, Eileen Fisher finally hit the market with a long-time request from its shoppers: a sustainable sleepwear collection.

There are countless other innovative stories spun up in the supply chain space this year, and IBM Sterling would love to hear yours. How did you innovate in unexpected ways in 2020? As we head into 2021, we can all use as many positive stories as we can get.

IBM Sterling works with many clients like the ones above as part of our year-round Event Readiness program. The program is designed to help businesses strengthen their technology infrastructure ahead of peak events to help manage disruptions and deliver a seamless customer experience.

Source: ibm.com

Wednesday 6 January 2021


Designing AI Applications to Treat People with Disabilities Fairly


Alan was surprised not to get an interview for a banking management position. He had all the experience and excellent references. The bank’s artificial intelligence (AI) recruitment screening algorithm had not selected him as a potential candidate, but why? Alan is blind and uses specialized software to do his job. Could this have influenced the decision? 

AI solutions must account for everyone. As artificial intelligence becomes pervasive, high-profile cases of racial or gender bias have emerged. Discrimination against people with disabilities is a longstanding problem in society. It could be reduced by technology or exacerbated by it. IBM believes we have a responsibility, as technology creators, to ensure our technologies reflect our values and shape lives and society for the better. We participated in the European Commission High Level Expert Group on AI and its Assessment List for Trustworthy AI (ALTAI). The ALTAI provides a checklist of questions for organizations to consider when developing and deploying AI systems, and it emphasizes the importance of access and fair treatment regardless of a person’s abilities or ways of interacting.

Often, challenges in fairness for people with disabilities stem from human failure to fully consider diversity when designing, testing and deploying systems. Standardized processes like the recruitment pre-screening system Alan faced may be based on typical employees, but Alan may be unlike most candidates for this position. If this is not taken into account, there is a risk of systematically excluding people like Alan. To address this risk, we offer ways to develop AI-based applications that treat people with disabilities fairly, by embedding ethics into AI development from the very beginning in our ‘Six Steps to Fairness’. Finally, we present considerations for policymakers about balancing innovation, inclusion and fairness in the presence of rapidly advancing AI-based technologies.

Six steps to fairness


1. Identify risks. Who might be impacted by the proposed application?

◉ Is this an area where people with disabilities have historically experienced discrimination? If so, can the project improve on the past? Identify what specific outcomes there should be, so these can be checked as the project progresses.

◉ Which groups of people might be unable to provide the expected input data (e.g. clear speech), or have data that looks different (e.g. use sign language, use a wheelchair)? How would they be accommodated?

◉ Consider whether some input data might be proxies for disability.

2. Involve stakeholders. Having identified potentially impacted groups, involve them in the design process. Approaches to developing ethical AI applications for persons with disabilities include actively seeking the ongoing involvement of a diverse set of stakeholders (Cutler et al., 2019 – Everyday Ethics for AI) and a diversity of data to work with. It may be useful to define a set of ’outlier’ individuals and include them in the team, following an inclusive design method. These ‘outliers’ are people whose data may look very different from the average person’s data. What defines an outlier depends on the application. For example, in speech recognition, it could be a person with a stutter or a person with slow, slurred speech. Outliers also are people who belong in a group, but whose data look different. For example, Alan may use different software from his peers because it works better with his screen reader technology. By defining outlier individuals up front, the design process can consider, at each stage, what their needs are, whether there are potential harms that need to be avoided, and how to achieve this.

3. Define what it means for this application to be ‘fair’. In many jurisdictions, fair treatment means that the process using the application allows individuals with disabilities to compete on their merits, with reasonable accommodations. Decide how fairness will be measured for the application itself, and also for any AI models used in the application. If different ability groups are identified in the data, group fairness tests can be applied to the model. These tests measure fairness by comparing outcomes between groups. If the difference between the groups is below a threshold, the application is considered to be fair. If group membership is not known, individual fairness metrics can be used to test whether ‘similar’ individuals receive similar outcomes. With the key stakeholders, define the metric for the project as a whole, including accommodations, and use diverse individuals for testing (a minimal sketch of such a group fairness check appears after this list).

4. Plan for outliers. People are wonderfully diverse, and there will always be individuals who are outliers, not represented in the AI model’s training or test data. Design solutions that also can address fairness for small groups and individuals, and support reasonable accommodations. One important step is providing explanations and ways to report errors or appeal decisions. IBM’s AI Explainability 360 toolkit includes ‘local explanation’ algorithms that describe factors influencing an individual decision. With an explanation, users like Alan can gain trust that the system is fair, or steps can be taken to address problems.

5. Test for model bias and mitigate.

◉ Develop a plan for tackling bias in source data in order to avoid perpetuating previous discriminatory treatment. This could include boosting representation of people with disabilities, adjusting for bias against specific disability groups, or flagging gaps in data coverage so the limits of the resulting model are explicit.

◉ Bias can come in at any stage of the machine learning pipeline. Where possible, use tools for detecting and mitigating bias during development. IBM’s AI Fairness 360 Toolkit offers many different statistical methods for assessing fairness. These methods require protected attributes, such as disability status, to be well defined in the data. This could be applied within large organizations when scrutinizing promotion practices for fairness, for example.

◉ Test with outliers, using input from key stakeholders. In recruitment and other contexts where candidate disability information is not available, statistical approaches to fairness are less applicable. Testing is essential to understand how robust the solution is for outlier individuals. Measure against the fairness metrics defined previously to ensure the overall solution is acceptable.

6. Build accessible solutions. Design, build and test the solution to be usable by people with diverse abilities, and to support accommodations for individuals. IBM’s Equal Access Toolkit provides detailed guidance and open-source tools to help teams understand and meet accessibility standards, such as the W3C Web Content Accessibility Guidelines (WCAG) 2.1.
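As promised in step 3, here is a minimal sketch of a group fairness check of the kind described there: plain numpy rather than the AI Fairness 360 API, with made-up decisions and a project-specific threshold:

```python
# Illustrative group fairness check (plain numpy, not the AI Fairness 360 API):
# compare selection rates between two ability groups and flag the model if the
# statistical parity difference exceeds a chosen threshold.
import numpy as np

selected = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])    # hypothetical model decisions
disability = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 1 = disclosed disability

rate_group = selected[disability == 1].mean()
rate_reference = selected[disability == 0].mean()
parity_difference = rate_group - rate_reference

THRESHOLD = 0.1                              # project-specific fairness threshold
print(parity_difference)                     # 0.0 for this toy data
print(abs(parity_difference) <= THRESHOLD)   # True -> passes this group test
```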

By sharing these ‘six steps to fairness’, IBM aims to improve the fairness, accountability and trustworthiness of AI-based applications. Given the diversity of people’s abilities, these must be an integral part of every AI solution lifecycle.

Considerations for policymakers


IBM believes that the purpose of AI is to augment – not replace – human intelligence and human decision-making. We also believe that AI systems must be transparent, robust and explainable. Although the development and deployment of AI are still in their early stages, it is a critical tool whose utility will continue to flourish over time.


This is why we call for a risk-based, use-case focused approach to AI regulation. Applying the same rules to all AI applications would not make sense, given its many uses and the outcomes that derive from its use. Thus, we believe that governments and industry must work together to strike an appropriate balance between effective rules that protect the public interest and the need to promote ongoing innovation and experimentation. With such a ‘precision regulation’ approach, we can answer expectations of fairness, accountability and transparency according to the role of the organization and the risk associated with each use of AI. 

We also strongly support the use of processes, when employing AI, that allow for informed and empowered human oversight and intervention. Thus, to the extent that high-risk AI is regulated, we suggest that auditing and enforcement mechanisms focus on evidence that informed human oversight is appropriately established and maintained.

For more than 100 years, diversity, inclusion and equality have been critical to IBM’s culture and values. That legacy, and our continued commitment to advance equity in a global society, have made us leaders in diversity and inclusion. Guided by our values and beliefs, we are proud to foster an environment where every IBMer is able to thrive because of their differences and diverse abilities, not in spite of them. This does not – and should not – change with the introduction and use of AI-based tools and processes. Getting the balance right between fairness, precision regulation, innovation, diversity and inclusion will be an ongoing challenge for policymakers worldwide.

Source: ibm.com

Monday 4 January 2021

Is IBM Certification For You?


The IBM Professional Certification Program is a way to obtain credentials and show your knowledge of IBM technology. IBM certifications range from entry-level to advanced and cover many different job roles for IT professionals. Each certification consists of reading or study material and a test. Once you pass the test, you will receive a shiny new professional certification badge.

There are numerous benefits to obtaining IBM certification. Certification shows your capability and expertise in a particular subject, which in turn enables colleagues to recognize potential opportunities to use your skill set.

Consider one example: IBM hesitated to grant a client certain access rights and permissions in their environment, not wanting the client to accidentally break something and then call IBM in to fix it. IBM undoubtedly would have been less hesitant had the client had an IBM-certified employee to act as admin.

What Is the Meaning of the IBM Certification?

IBM certification enables you to demonstrate your proficiency in the latest IBM technology and solutions. It helps establish that you can perform role-related tasks and activities at a specified level of competence. Certification is beneficial if you wish to validate your skills, and for companies that want to ensure their employees' performance levels.

Who Is the Target Audience for IBM Certification?

IBM Professional Certifications are tied to an individual, not a company or organization. The target audience for certification includes Business Partner Firms, Customers, IBM internal employees, and Independent Consultants who sell, support or service IBM products.

The program is for any IT professional who recommends, sells, supports or uses IBM products and solutions. You might work with IBM Analytics, IBM Cloud, IBM Security or IBM Watson, to name a few.

The exams cost US$200 each, but IBM runs regular promotions. When you visit the IBM Information Management training and certification web page, you will see any current promotions prominently displayed. You can also take exams for free while attending IBM’s Information on Demand conference or the International DB2 Users Group conference.

Each IBM Certification exam includes about 60 to 70 multiple-choice questions. You get credit for every question you answer correctly and must answer approximately 60 percent correctly to pass. You have 90 minutes to finish the exam. These numbers differ slightly by exam.

When you complete all of the program courses, you will earn a certificate to share with your professional network and unlock access to career support resources to help you kickstart your new career. Many Professional Certificates have hiring partners that recognize the credential, and others can better prepare you for a certification exam.

Key Takeaways from IBM Certification

The IBM Certification was an action-packed few weeks during which a lot of topics were covered, but here are the key takeaways:

  • Data science is a highly flexible, diverse field that attracts people from a wide range of disciplines. Learning data science means understanding how to use a set of tools and, more importantly, becoming comfortable using those tools to answer various questions by thinking like a data scientist. Since IBM created the program, it relies heavily on IBM products like Watson and Cloud Services. This program built confidence in my programming abilities as well.
  • When working on real-world projects, you are not going to have your hand held. Often, if you are beginning a project, you are starting with an empty document, and you are responsible for coming up with every line of code required to get to the results you are looking for. The final project in this program was a great primer on what that feels like.
  • With IBM's comprehensive program, you only need basic computer skills and a passion for self-learning to succeed. Once you obtain the certifications, you will be armed with skills, experience, and a portfolio of data science projects that will help you begin an exciting career in data science and machine learning. In addition to earning an IBM Professional Certificate, you will also have special access to join IBM's Talent Network, and as a member, you will receive all the tools you need to land a job at IBM.

What Is the IBM Certification Retake Policy?

The IBM exam retake policy states that the same certification test can be taken no more than twice within any 30-day period. If a certification exam is not passed on the first attempt, there is no waiting period before retaking the test, but candidates should not take the same test more than twice within any 30 days. Once candidates pass an exam, retakes are not permitted.