Thursday 25 February 2021

Exploring quantum spin liquids as a reservoir for atomic-scale electronics


In physics, there is a line of thinking that, if a particular phenomenon is strange enough, surely there’s a way we can put it to good use. It happened with lasers, which seemed like just another exotic quantum phenomenon in the 1960s, until we started using them in the 1980s to play music in high fidelity on compact discs.

Not every quirky physics concept can be as successful as that, but then again you never know until you investigate. Our small team at IBM Research's Almaden lab in Silicon Valley, for example, has spent the past several years studying the basic underlying physics of atoms and their interactions with each other, and how these phenomena could lead to exciting new technologies.

Our work is exploratory science at the atomic scale, where the magnetism of neighboring atoms can interact in a subtle dance of patterns described by the rules of quantum mechanics. In our new publication “Probing resonating valence bond states in artificial quantum magnets,” recently published in the journal Nature Communications, our team demonstrated that an important class of quantum spin liquids can be built and probed with atomic precision. The team included lead author Kai Yang, who did his postdoctoral work at IBM Research but recently took an associate professor position at the Chinese Academy of Sciences in Beijing.

Diving into spin liquids

To understand the significance of this discovery, you first have to know what a “quantum spin liquid” is. For the uninitiated, it represents a new state of matter whose electron spins remain in perpetual, fluid-like fluctuation, unlike the conventional ferromagnets currently used in magnetic data storage. Ferromagnets are materials in which all the magnetic spins point in the same direction.

Magnetism arises from spin, the property that gives an atom its magnetic field, with a north and a south pole. Groups of magnetic atoms can join together to form traditional magnets, like a simple refrigerator magnet or the magnets that turn electric motors. In a similar way but on a much smaller scale, microscopic magnets store information in magnetic disk drives and magnetic RAM by holding their north pole stably in one direction or another to record a bit of information. Although these magnets are microscopic, they still contain about 100,000 atoms.

Four titanium atoms positioned into a square just 1 nm wide.

A spin liquid is “liquid” in the sense that it is a configuration of atoms whose spins never settle into a fixed state and are very responsive to what happens around them. The spin on each atom is either up or down, and two neighboring atoms will have opposite spins. If you were to pin one atom’s spin in the up direction, for example, the surrounding atoms would flip their own spins in response. Each pair of neighbors competes to maintain this opposite pattern. A delicate balance results, forming a small quantum spin liquid.

The ability to create and probe quantum spin liquids creates some intriguing possibilities. A possible application to technology is that a spin liquid can have spin current without having any charge current—no electrons move, they just flip in place and carry spin. We plan to demonstrate this in future studies, which could help to reveal how information propagates in many-body systems and find applications in quantum spintronics (short for spin electronics).

I’ll use Moore’s Law to add some context: At a high level, Moore’s Law describes how researchers have been able to shrink transistors and memory bits over time using ever cleverer techniques. That’s a top-down approach to improving electronics. We’re more interested in a bottom-up approach — looking at atoms and considering what you could build when you connect them together.

When, or whether, this line of research could lead to a new breed of electronics is still unclear, but it could contribute to a better understanding of the qubits used in quantum computing, including what, exactly, causes them to lose their phase coherence and, thus, their ability to perform calculations.

Nanoscale manipulation

In our new paper, we describe the type of spin liquid in our experiment using a pattern called a “resonating valence bond” (RVB) state, which is relevant to many interesting phenomena in the physics of interacting particles, such as high-temperature superconductivity. To study the RVB quantum liquid, we used a custom scanning tunneling microscope (STM). An STM, which IBM researchers in Switzerland invented in the 1980s and later won the Nobel Prize for, can see and manipulate atoms at the nanoscale.

We combined this capability with the technique of spin resonance, which is a high-sensitivity tool to measure magnetic properties. Together these methods allowed us to probe this quantum liquid on an insulating surface in precisely planned and constructed arrangements of atoms. We closely examined the quantum behavior of four interacting titanium atoms (their interaction is the liquid), a step towards answering basic questions in the field of quantum magnetism.

Precise arrangements of three or four titanium atoms arranged to show different magnetic properties. The grid of dots shows the lattice of atoms in the underlying surface, a thin film of magnesium oxide.
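To make the underlying model concrete, here is a small numerical sketch (our own illustration, not code from the paper): an antiferromagnetic Heisenberg model for four spin-1/2 moments arranged on a square, whose ground state is a singlet superposition of valence-bond pairings of the kind an RVB state describes.

```python
import numpy as np

# Spin-1/2 operators (Pauli matrices divided by 2) and the 2x2 identity.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, site, n=4):
    """Embed a single-spin operator acting on `site` into the n-spin Hilbert space."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Antiferromagnetic Heisenberg coupling (J > 0) on a tiny square:
# nearest-neighbour bonds 0-1, 1-2, 2-3, 3-0.
J = 1.0
bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]
H = np.zeros((16, 16), dtype=complex)
for i, j in bonds:
    for op in (sx, sy, sz):
        H += J * site_op(op, i) @ site_op(op, j)

# The lowest eigenstate is a total-spin singlet: a superposition of the two
# ways of pairing the four spins into valence bonds.
eigvals, eigvecs = np.linalg.eigh(H)
print("ground-state energy per bond:", eigvals[0] / len(bonds))
```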

Usually atoms are studied by placing them on the ultra-smooth surface of a metal that allows the atoms to be moved to new locations by gently tugging on them with the STM’s tip. Moving atoms on the surface of an insulator, however, has been fiendishly difficult.

We spent several weeks figuring out how to place our magnetic titanium atoms with atomic precision by fine tuning the position of the STM tip and the sequence of voltages that we applied. Once we had the recipe to position the atoms, it took us about a week to assemble the square of four atoms that forms the quantum spin liquid. Our work offers a fresh path to understanding and manipulating entanglement in quantum materials having many interacting spins.

The next (not so) big thing


Physicists are excited about spin liquids because they could enable spintronics in an insulator, a material that carries no electric current. That could be an advantage when designing electronics, because there would be no charge conduction to scatter electrons and disrupt the spin current.

One of our next projects will be to study magnetic molecules. That sort of work gets chemists involved, especially those looking to make electronics at the smallest possible scale. Often those researchers can make molecular magnets but struggle to add spin exactly where they want it. Our hope is that by sharing our fundamental research we can help advance their efforts, and the efforts of others, toward building atomic electronics and other innovations.

Source: ibm.com

Tuesday 23 February 2021

Forrester study: hybrid cloud strategy and the importance of on-premises infrastructure


In continuing times of uncertainty, market change and evolving business needs, ensuring the stability, agility and resilience of your technology environment is an imperative. Organizations that are best able to leverage a future-ready hybrid cloud infrastructure strategy are better positioned to do exactly that – which makes understanding exactly how IT infrastructure strategies are prioritized, developed and implemented key.

In a newly commissioned study conducted by Forrester Consulting on behalf of IBM, this topic is explored in depth. Below you’ll find a snapshot of key insights, drawn directly from the 384 surveyed IT decision-makers responsible for the strategy and execution of IT infrastructure environments at enterprises worldwide. One of the key findings is the continued importance of on-premises infrastructure, with 85% of IT decision-makers agreeing it is a critical part of their hybrid cloud strategy.

Future-readying your infrastructure 

Organizational needs can vary greatly, but the research helps identify some common considerations and implications. How are enterprises making strategic decisions about what types of IT infrastructure to use, and for what purposes? A hybrid cloud environment is found to be clearly delivering on openness and flexibility: 89% of IT decision-makers agree open source allows for a more open and flexible hybrid cloud strategy, and 83% agree a hybrid cloud environment leverages open source for greater efficiency and scalability. Nine in 10 (89%) also believe a hybrid cloud environment is able to both easily and securely store and move data and workloads, providing a secure and flexible strategy for today and tomorrow.

The push to public cloud has not stopped investments in on-premises infrastructure. Organizations are leveraging both hybrid cloud strategy and on-premises infrastructure to meet their unique needs during uncertain times. The benefits include greater performance, productivity and resiliency across data-intensive and mission critical workload types, alongside solutions for data residency and addressing continuing security challenges.

◉ Data residency (56%) is the top ranked reason why organizations maintain infrastructure outside of the public cloud, far outpacing 2019 (39%).

Your infrastructure refresh matters 

As may be anticipated in line with the widespread acceleration in digital transformation, over half of the organizations surveyed (54%) plan to expand or upgrade existing infrastructure in the next 12 months, and three in four (75%) plan to increase infrastructure investment in the next two years. But while infrastructure investment is increasing, infrastructure refreshes have been significantly delayed, with a 15% increase in delays compared to the 2019 findings. This lack of reinvestment leaves organizations vulnerable.

◉ Nearly 9 in 10 (89%) are accelerating digital investments – primarily to support faster delivery of IT projects (60%).

◉ 70% have delayed infrastructure refreshes at least a few times in the last five years or more (61% in 2019).

While postponements are understandable given resource and budget constraints, they create areas of vulnerability across security (50%), cost (44%), compatibility (39%) and performance (39%), as well as risks beyond the scope of the study to reputation, trust, loyalty and customer experience. Neglecting infrastructure refreshes risks becoming a significant pain point for organizations’ modernization efforts, reducing the impact of the increased investment in infrastructure. The extent of the infrastructure refresh delays is concerning, and it sits at odds with the strategic focus IT decision-makers place on security.


◉ 85% anticipate a greater focus on security and compliance and 84% will expect to see an increase in data-sensitive workloads and applications (for example: AI and machine learning).

Key recommendations from the Forrester study

1 – Make yours a hybrid cloud infrastructure strategy.

2 – Keep on premises as part of your hybrid strategy for the foreseeable future.

3 – Manage the mix of public cloud, private cloud and on premises as a holistic whole.

4 – Keep up with on-premises infrastructure refreshes.

Source: ibm.com

Monday 22 February 2021

Leveraging AI on IBM Z and IBM LinuxONE for accelerated insights


Artificial intelligence (AI) is a profoundly transformative technology because of its broad applicability to many use cases. It already impacts our personal lives, and it is changing the way we work and do business. In this blog we’ll examine AI and its role for clients running IBM Z and IBM LinuxONE workloads. We will cover principles of the IBM Z AI strategy and developments underway around IBM Z’s role as a world-class inference platform. We are developing a blog series to describe key elements of AI on IBM Z and how clients can tap into these capabilities for their next-generation AI applications.

Our mission is to provide a comprehensive and consumable AI experience for operationalizing AI on Z, and this includes the goal of building inference capabilities directly into the platform. In terms of the IBM AI ladder, inference falls under the Analyze and Infuse rungs. Inference refers to the point when a model is deployed to production and is used by the application to make business predictions.

IBM’s design goal is to enable low latency inference for time-sensitive work such as in-transaction inference and other real-time or near-real-time workloads. One example is fraud detection; for banks and financial markets, accurate detection of fraud can result in significant savings. IBM is architecting optimizations in software and hardware to meet these low latency goals and to enable clients to integrate AI tightly with IBM Z data and core business applications that reside on Z. These technologies are designed to enable clients to embed AI in their applications with minimal application changes.

Target use cases include time-sensitive cases with high transaction volumes and complex models, typically requiring deep learning. In these transactional use cases, a main objective is to reduce latency for faster response times, delivering inference results back to the caller at high volume and speed.

Train anywhere and deploy on Z

IBM recognizes that the AI training landscape is quite different from the inference one. Training is the playground of data scientists who are focused on improving model accuracy. Data scientists use platforms that may be ideal for training but are not necessarily efficient for deploying models. Our approach enables clients to build and train models on the platform of their choice (including on premises or Z in a hybrid cloud), leveraging any investments they have made. They can then deploy those models to an environment that has transactional and data affinity to the use case – such as transactional processing on Z. That is the heart of our “train anywhere, deploy on Z” strategy.

To enable this strategy, IBM is architecting solutions that provide model portability to Z without requiring additional development effort for deployment. We are investing in ONNX (Open Neural Network Exchange) technology, a standard format for representing AI models that allows a data scientist to build and train a model in the framework of choice without worrying about the downstream inference implications. To enable deployment of ONNX models, we provide an ONNX model compiler that is optimized for IBM Z. In addition, we are optimizing key open-source frameworks such as TensorFlow (and TensorFlow Serving) for use on IBM Z.
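As a simple illustration of the train-anywhere idea, a model trained in an open framework can be exported to the ONNX format and handed to whatever compiler or serving stack runs on the target platform. This is a hedged sketch using PyTorch; the tiny model, file name and tensor shapes are all hypothetical.

```python
import torch
import torch.nn as nn

# Train (anywhere) a small scoring model, then export it to ONNX so it can be
# compiled and served on the target inference platform.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
# ... training loop omitted ...
model.eval()

dummy_input = torch.randn(1, 16)           # example input used to trace the model
torch.onnx.export(
    model,
    dummy_input,
    "scoring_model.onnx",                  # hypothetical output file
    input_names=["features"],
    output_names=["scores"],
    dynamic_axes={"features": {0: "batch"}},  # allow variable batch sizes at inference time
)
```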

To summarize, our mission is to enable clients to easily deploy AI workloads on IBM Z and LinuxONE in order to deliver faster business insights while driving more value to the business. We are enhancing IBM Z as a world-class inference platform. We aim to help clients accelerate deployment of AI on Z by investing in seamless model portability, in integration of AI into Z workloads, and in operationalizing AI with industry leading solutions such as IBM Cloud Pak for Data for more flexibility and choice in hybrid cloud deployments. We will explore several AI technologies in future blog posts around open source, ONNX, and TensorFlow, Cloud Pak for Data and more.

Source: ibm.com

Saturday 20 February 2021

How to measure and reset a qubit in the middle of a circuit execution

IBM Quantum is working to bring the full power of quantum computing into developers’ hands in the next two years via the introduction of dynamic circuits, as highlighted in our recently released Quantum Developer Roadmap. Dynamic circuits are those circuits that allow for a rich interplay between classical and quantum compute capabilities, all within the coherence time of the computation, and will be crucial for the development of error correction and thus fault tolerant quantum computation. However, there are many technical milestones along the way that track progress before we achieve this ultimate goal. Chief among these is the ability to measure and reset a qubit in the middle of a circuit execution, which we have now enabled across the fleet of IBM Quantum systems available via the IBM Cloud.


Measurement is at the very heart of quantum computing. Although often overlooked, high-fidelity measurements allow for classical systems (including us humans) to faithfully extract information from the realm in which quantum computers operate. Measurements typically take place at the end of a quantum circuit, allowing, with repeated executions, one to gather information about the final state of a quantum system in the form of a discrete probability distribution in the computational basis. However, there are distinct computational advantages to being able to measure a qubit in the middle of a computation.

Mid-circuit measurements play two primary roles in computations. First, they can be thought of as Boolean tests for a property of a quantum state before the final measurement takes place. For example, one can ask, mid-circuit, whether a register of qubits is in the plus or minus eigenstate of an operator formed by a tensor product of Pauli operators. Such “stabilizer” measurements form a core component of quantum error correction, signaling the presence of an error to be corrected. Likewise, mid-circuit measurements can be used to validate the state of a quantum computer in the presence of noise, allowing for post-selection of the final measurement outcomes based on the success of one or more sanity checks.

Measurements performed while a computation is in flight can have some other surprising functions, too — like directly influencing the dynamics of the quantum system. If the system is initially prepared in a highly entangled state, then a judicious choice of local measurements can “steer” a computation in a desired direction. For example, we can produce a three-qubit GHZ state and transform it into a Bell-state via an x-basis measurement on one of the three qubits; this would otherwise yield a mixed state if measured in the computational basis. More complex examples include cluster state computation, where the entire computation is imprinted onto the qubit’s state via a sequence of measurements.
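As a hedged Qiskit sketch of that steering example (our own illustration, not code from IBM), an x-basis measurement can be written as a Hadamard followed by a computational-basis measurement; applied to one qubit of a GHZ state, it leaves the other two qubits in one of the Bell states, with the mid-circuit outcome telling us which one.

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(3, 1)

# Prepare the three-qubit GHZ state (|000> + |111>)/sqrt(2).
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)

# Measure qubit 2 in the x basis: the Hadamard turns the Z measurement into an X measurement.
# Outcome 0 leaves qubits 0 and 1 in |00>+|11>, outcome 1 leaves them in |00>-|11>.
qc.h(2)
qc.measure(2, 0)
```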

Resetting a qubit

Closely related to mid-circuit measurements is the ability to reset a qubit to its ground state at any point in a computation. Many critical applications, such as solving linear systems of equations, make use of auxiliary qubits as working space during a computation. A calculation requires significantly fewer qubits if, once used, we can return a qubit to the ground state with high-fidelity. With system sizes in the range of 100 qubits, space is at a premium in today’s nascent quantum systems, and on-demand reset is necessary for enabling complex applications on near-term hardware. In Figure 1, below, we highlight an example of the quality of the reset operations on IBM Quantum’s current generation of Falcon processors, on the Montreal system, by looking at the error associated with one or more reset operations applied to a random single-qubit initial state.


Figure 1: we highlight an example of the quality of the reset operations on IBM Quantum’s current generation of Falcon_r4 processors by looking at the error associated with one or more reset operations applied to a random single-qubit initial state.

Internally, these reset instructions are composed of a mid-circuit measurement followed by an x-gate conditioned on the outcome of the measurement.  These conditional reset operations therefore represent one of IBM Quantum’s first forays into dynamic quantum circuits, alongside our recent results demonstrating an implementation of an iterative phase estimation algorithm. However, while the control techniques necessary for iterative phase estimation are still a research prototype, you can use mid-circuit measurement and conditional resets, today.
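In Qiskit, the same composition can be written explicitly, or invoked through the built-in reset instruction; the register names below are our own.

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

q = QuantumRegister(1, "q")
c = ClassicalRegister(1, "c")
qc = QuantumCircuit(q, c)

qc.h(q[0])                 # some state we no longer need
qc.measure(q[0], c[0])     # mid-circuit measurement
qc.x(q[0]).c_if(c, 1)      # apply X only if the outcome was 1, returning the qubit to |0>

# Equivalently, Qiskit exposes the whole sequence as a single instruction:
# qc.reset(q[0])
```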

We can incorporate both concepts illustrated here into simple examples. First, Figure 2 shows a circuit utilizing both mid-circuit measurements and conditional reset instructions for post-selection and qubit reuse.


Figure 2: a circuit utilizing both mid-circuit measurements and conditional reset instructions for post-selection and qubit reuse. 

This circuit first initializes all of the qubits into the ground state, and then prepares qubit 0 (q0) into an unknown state via the application of a random SU(2) unitary.  Next, it projects q0 into the x-basis with eigenvalues 0 or 1 imprinted on q1 indicating if the qubit is left in the |+> (0) or |-> (1) x-basis states. We measure q1, and store the result for later use as a flag qubit for identifying which output states correspond to each eigenvalue. Step 3 of the circuit resets the already-measured q1 to the ground state, and then generates an entangled Bell pair between the two qubits. The Bell pair is either |00>+|11> or |00>-|11> depending on if q0 is in the |+> or |-> state prior to the CNOT gate, respectively. Finally, in order to distinguish these states, we use Hadamard gates to transform the state |00>-|11> to |01>+|10> before measuring.
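The following Qiskit sketch is our own gate-level reading of that description, useful for seeing where the mid-circuit measurement and the reset sit; the circuit actually run on hardware may differ in its details.

```python
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

q = QuantumRegister(2, "q")
flag = ClassicalRegister(1, "flag")   # mid-circuit x-basis outcome
out = ClassicalRegister(2, "out")     # final Bell-state measurement
qc = QuantumCircuit(q, flag, out)

# 1. Prepare q0 in a (pseudo-)random single-qubit state.
theta, phi, lam = np.random.uniform(0, 2 * np.pi, 3)
qc.u(theta, phi, lam, q[0])

# 2. Non-destructively measure q0 in the x basis: outcome 0 flags |+>, outcome 1 flags |->.
qc.h(q[0])
qc.cx(q[0], q[1])
qc.h(q[0])
qc.measure(q[1], flag[0])

# 3. Reset the ancilla, then entangle: a CNOT maps |+>|0> to |00>+|11>
#    and |->|0> to |00>-|11>.
qc.reset(q[1])
qc.cx(q[0], q[1])

# 4. Hadamards map |00>-|11> to |01>+|10>, so the two Bell states are
#    distinguishable in the final computational-basis measurement.
qc.h(q[0])
qc.h(q[1])
qc.measure(q, out)
```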

Figure 3 shows the outcome of executing such a circuit on the seven-qubit IBM Quantum Casablanca system, where we see that the flag qubit value measured earlier (in bold) correctly tracks the expected Bell states generated at the output. Collecting marginal counts over the flag qubit value indicates the proportion of the initial random q0 state that was in the |+> or |-> state after the projection. For the example considered here, these values are ~16 percent and ~84 percent, respectively. The dominant source of error in the result is dephasing due to the relatively long (~4㎲) duration of measurements on current-generation systems. Future processor revisions will bring faster measurements, reducing the effect of this error.


Next, we consider the computational advantages of using reset to reduce the number of qubits needed in a 12-qubit Bernstein-Vazirani problem (Fig. 3). As written, this circuit cannot be implemented directly on an IBM Quantum system, but rather requires the introduction of SWAP gates in order to satisfy the limited connectivity in systems such as our heavy-hex based Falcon and Hummingbird processors. Indeed, compiling this circuit with Qiskit yields a circuit that requires 42 CNOT gates on a heavy-hex lattice.  The fidelity of executing this compiled circuit on the IBM Quantum Kolkata system yields a disappointing 0.007; the output is essentially noise.


However, with the ability to measure and reset qubits mid-flight, we can transform any Bernstein-Vazirani circuit into a circuit over just two qubits requiring no additional SWAP gates. For the previous example, the corresponding two-qubit circuit interleaves the oracle queries with measure-and-reset operations on the data qubit.


And execution on the same system gives a vastly improved fidelity of 0.31, roughly a 44x improvement over the standard implementation. This highlights how, with mid-circuit measurement and reset, it is possible to write compact algorithms with markedly higher fidelity than would otherwise be possible without these dynamic circuit building blocks.
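For reference, a two-qubit Bernstein-Vazirani circuit of this style can be written in Qiskit roughly as follows. This is a hedged sketch with a hypothetical secret string, not the exact circuit behind the result above; it queries the oracle one bit at a time and recycles the data qubit with a measure-and-reset between queries.

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

secret = "11010100101"                 # hypothetical hidden bit string
n = len(secret)

q = QuantumRegister(2, "q")            # q[0]: data qubit, q[1]: phase ancilla
c = ClassicalRegister(n, "c")
qc = QuantumCircuit(q, c)

# Put the ancilla in |-> so that each oracle query kicks a phase back onto the data qubit.
qc.x(q[1])
qc.h(q[1])

# Query the oracle one secret bit at a time, reusing the same data qubit.
for i, bit in enumerate(reversed(secret)):   # classical bits are little-endian in Qiskit
    qc.h(q[0])
    if bit == "1":
        qc.cx(q[0], q[1])                    # oracle action for this bit of the secret
    qc.h(q[0])
    qc.measure(q[0], c[i])                   # read out secret bit i mid-circuit...
    qc.reset(q[0])                           # ...then reset the qubit for the next query
```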

Mid-circuit measurement and conditional reset represent an important first step toward dynamic circuits — and one that you can begin implementing into your quantum circuits as we speak. We’re excited to see what our users can do with this new functionality, while we continue to expand the variety of circuits that our devices can run. We hope you’ll follow along as we implement our development roadmap; we’re working to make the power of dynamic circuits a regular part of quantum computation in just a few years.

Source: ibm.com

Friday 19 February 2021

Introducing the AI chip leading the world in precision scaling


Self-driving cars, text to speech, artificial intelligence (AI) services and delivery drones — just a few obvious applications of AI. To keep fueling the AI gold rush, we’ve been improving the very heart of AI hardware technology: digital AI cores that power deep learning, the key enabler of artificial intelligence.

At IBM Research, we’ve been making strides in adapting to the workload complexities of AI systems while streamlining and accelerating performance, innovating across materials, devices, chip architectures and the entire software stack to bring closer the next generation of AI computational systems with cutting-edge performance and unparalleled energy efficiency.

In a new paper presented at the 2021 International Solid-State Circuits Virtual Conference (ISSCC), our team details the world’s first energy-efficient AI chip at the vanguard of low-precision training and inference, built with 7nm technology. Through its novel design, the AI hardware accelerator chip supports a variety of model types while achieving leading-edge power efficiency on all of them.

This chip technology can be scaled and used for many commercial applications — from large-scale model training in the cloud to security and privacy efforts by bringing training closer to the edge and data closer to the source. Such energy efficient AI hardware accelerators could significantly increase compute horsepower, including in hybrid cloud environments, without requiring huge amounts of energy.

AI model sophistication and adoption are expanding quickly, with AI now being used for drug discovery, modernizing legacy IT applications and writing code for new applications. But the rapid evolution of AI model complexity also increases the technology’s energy consumption, and a big challenge has been creating sophisticated AI models without growing their carbon footprint. Historically, the field has simply accepted that if the computational need is big, so too will be the power needed to fuel it.

But we want to change this approach and develop an entire new class of energy-efficient AI hardware accelerators that will significantly increase compute power without requiring exorbitant energy.

Tackling the problem

Since 2015, we’ve been consistently improving the power performance of AI chips, boosting it by 2.5 times every year. To do so, we’ve been creating algorithmic techniques that enable training and inference without loss of prediction accuracy. We’ve also been developing architectural innovations and chip designs that allow us to build highly efficient compute engines able to execute more complex workloads with high sustained use and power efficiency. And we’ve been creating a software stack that renders the hardware transparent to the application developer and compatible across hybrid cloud infrastructure, from cloud to edge.

We remain the leaders in driving reduced precision for AI models [Figure 1], with industry-wide adoption. We’ve extended reduced-precision formats to 8-bit for training and 4-bit for inference and developed data communication protocols that enable the AI cores on a multi-core chip to exchange data effectively with each other.

Figure 1: IBM Research leadership in reduced precision scaling for AI training and inference. Our AI chip is optimized to perform 8-bit training and 4-bit inference on a broad range of AI models without model accuracy degradation.
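To make the idea of reduced precision concrete, here is a generic sketch of uniform integer quantization to a chosen bit width. This is a simplified stand-in for illustration only; the hybrid FP8 training format and the 4-bit inference schemes described in the paper are more sophisticated than this.

```python
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization of a float tensor to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 7 for 4-bit values
    scale = np.max(np.abs(x)) / qmax            # one scale factor for the whole tensor
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q4, scale = quantize(weights, bits=4)
error = np.abs(weights - dequantize(q4, scale)).max()
print(f"max 4-bit quantization error: {error:.4f}")
```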

Our new ISSCC paper reflects the latest stage in these advancements, focused on the creation of a chip that is highly optimized for low-precision training and inference for all of the different AI model types — without any loss of quality at the application level.

Figure 3: Photo of 4-core AI chip

We showcase several novel characteristics of the chip. To start with, it’s the first silicon chip ever to incorporate ultra-low-precision hybrid FP8 (HFP8) formats for training deep-learning models in a state-of-the-art silicon technology node (a 7 nm EUV-based chip). Also, the raw power-efficiency numbers are state of the art across all the different precisions. Figure 2 shows that our chip’s performance and power efficiency exceed those of other dedicated inference and training chips.

Figure 2: Comparison of measurements from this work [Agrawal et al, ISSCC 2021] with other publications

But this is not all. It’s one of the first chips to incorporate power management in AI hardware accelerators. In this research, we show that we can maximize the performance of the chip within its total power budget, by slowing it down during computation phases with high power consumption.

Finally, we demonstrate that our chip, in addition to great peak performance, has high sustained utilization that translates to real application performance and is a key part of engineering our chip for energy efficiency. Our chips routinely achieve more than 80 percent utilization for training and more than 60 percent utilization for inference, compared with typical GPU utilization that is often well below 30 percent.

Our new AI core and chip can be used for many new cloud to edge applications across multiple industries. For instance, they can be used for cloud training of large-scale deep learning models in vision, speech and natural language processing using 8-bit formats (vs. the 16- and 32-bit formats currently used in the industry). They can also be used for cloud inference applications, such as for speech to text AI services, text to speech AI services, NLP services, financial transaction fraud detection and broader deployment of AI models in financial services.

Autonomous vehicles, security cameras and mobile phones can benefit from it too, and it can be handy for federated learning at the edge for customization, privacy, security and compliance.

Source: ibm.com

Wednesday 17 February 2021

Modernize Your IBM Cognos Controller Developer Certification to Grow Your Business Value


The IBM Cognos Controller Developer (C2020-605) exam certifies that the successful candidate has the essential knowledge and skills necessary to set up a Controller application by building account and company structures and setting up consolidation processes such as currency conversion, intercompany transactions, and investments in subsidiaries. The successful candidate must also be able to design and generate financial reports used for financial analysis.

The IBM Cognos Controller Developer professional certificate is worth taking. Cognos Controller is a comprehensive platform for financial and management reporting. It offers both the power and flexibility to guarantee streamlined, best-practice financial consolidation and reporting. Its full suite of capabilities serves finance stakeholders, managers, line-of-business executives, and regulatory bodies. You can consider it the de facto starting point for planning, forecasting, budgeting, and other processes. The on-cloud deployment option will help your organization meet individual IT requirements without worrying about scalability.

The certification program also helps address the industry’s skills shortage. According to IBM, the Cognos Controller Developer certification process includes peer-reviewed project work, with certification ultimately awarded through experience-based work reviewed by industry experts.

Why Go through IBM Cognos Controller Developer?

IBM is one of the best tech companies around. In an industry where "old" means over two decades, IBM has been operating since 1911. Because of its vast contributions to computer science over the last century and the influence it has had, IBM carries considerable weight in the IT world. The company has earned strong brand recognition, brand loyalty, and a reputation for world-class, cutting-edge solutions to today’s tech problems. On top of a long legacy and a massive market share, IBM Cognos Controller Developer professionals are in high demand. Try browsing any job website, and you are bound to find a need for IBM-trained professionals.

Whether it is for higher wages, to be more competitive in the market, or for the satisfaction and confidence of being excellent at your job, IBM certifications are designed to push you further and faster in your career. IBM exams such as the C2020-605 show employers that you are certified and capable of the skills that deserve high wages and responsibilities. IBM organizes its certifications into 11 categories, but that does not mean it goes light on the offerings: there are over 200 certifications covering everything from Analytics to the legendary Watson supercomputer.

IBM offers these courses to familiarize IT professionals with specific IBM systems and solutions so they become proficient at all things IBM Cognos Controller Developer-related. For those outside the IBM ecosystem, all the technical terms can be a bit mystifying. These certifications are still geared toward those who support, sell, or implement IBM products and solutions.

Picking an IBM Cognos Controller Developer Certification You Love to Get a Job You Love

Picking a category can be challenging, particularly when the categories are as broad as Cognitive Solutions and Watson IoT. It is a good problem to have, though! There are hundreds of paths IBM allows you to take, but there is little reason to be scared; these paths often lead to big jobs and opportunities. Since there is so much variety on offer, finding a suitable IBM Cognos Controller Developer certification is as simple as looking at the work you are passionate about and pursuing a related path.

One of the neat things about IBM Cognos Controller Developer certification is that it is relatively straightforward and often involves a single certification exam. The technical requirements are no slouch, though; it may take a lot of work and more than one attempt to pass an exam. But because IBM requires fewer exams across a wide array of options, it allows you to tailor-make your career path. Be forewarned that technical knowledge is necessary, as picking a certification will require some proficiency with the relevant technologies and industry terms. Some categories are also significantly larger than others. For instance, IBM Analytics and IBM Cloud together currently offer over 170 certifications.

IBM offers a wide range of certifications for beginners and advanced professionals. They also provide separate mastery certs for those business professionals who have vast knowledge and years of experience in a particular field.

Tuesday 16 February 2021

How to win at application modernization on IBM Z


If you want to create new user experiences, improve development processes and unlock business opportunities, modernizing your existing enterprise applications is an important next step in your IT strategy. Application modernization can ease your overall transition to a hybrid cloud environment by introducing the flexibility to develop and run applications in the environment you choose. (If you haven’t already done so, check out our field guide to application modernization, where we’ve outlined some of the most common modernization approaches.)

Whether you’re focusing more on traditional enterprise applications or cloud-native applications, you’ll want to make sure that you are fully leveraging hybrid cloud management capabilities and DevOps automation pipelines for app deployment, configuration and updates.

With a cloud-native approach to modernization, your developers can take advantage of a microservice-based architecture. They can leverage containers and a corresponding container orchestration platform (such as Kubernetes and Red Hat® OpenShift®) to develop once and run applications anywhere — including on premises in your data center or off premises in one or more public clouds.

The benefits of modernizing on an enterprise hybrid cloud platform

At every stage of modernization, you can minimize risk and complexity by working from a platform that lets you develop, run and manage apps and workloads in a consistent way across a hybrid cloud environment. This will help ensure that everything on the app is done in a reliable, repeatable and secure manner, and it will help you to remove barriers to productivity and integration.

To that end, consider working from IBM Z® or IBM LinuxONE as your primary platform for application modernization. On either of these platforms you can continue running your existing apps while connecting them with new cloud-native applications at your own rate and pace, reducing risk and expense. You will also be able to leverage the inherent performance, reliability and security of the platform as you modernize your technology stack. They also provide a foundation for modern container-based apps (for example: web and middleware, cloud and DevOps, modern programming languages and runtimes, databases, analytics and monitoring).

Flexible, efficient utilization. IBM Z and IBM LinuxONE provide three approaches to virtualization to manage spikes and support more workloads per server: IBM Logical Partitions (LPARs), IBM z/VM®, and KVM. The advanced capabilities of these hypervisors contribute to the foundation of the typically high utilization achieved by IBM Z and IBM LinuxONE.

More performance from software with fewer servers. Enable 2.3x more containers per core on an IBM z15 LPAR compared to a bare-metal x86 platform running an identical web server load, and co-locate cloud-native apps with z/OS and Linux virtual machine-based apps and enterprise data to exploit low-latency API connections to business-critical data. This translates into using fewer IBM Z and IBM LinuxONE cores than on competing platforms to run an equivalent set of applications at comparable throughput levels.

Co-location of cloud-native applications and mission-critical data. IBM Z and IBM LinuxONE house your enterprise’s mission-critical data. Running Red Hat OpenShift in a logical partition adjacent to your z/OS® partitions provides low-latency, secure communication to your enterprise data via IBM z/OS Cloud Broker. This delivers superior performance because there are fewer network hops, and it allows for highly secure communication between your new cloud-native apps and your enterprise data stores, since network traffic never has to leave the physical server.

Proven security and resiliency. Utilize the most reliable mainstream server platform, whose hypervisor is the only one among its major competitors certified at the EAL5+ level.

Source: ibm.com

Friday 12 February 2021

How a hybrid workforce can save up to 20 hours a month


How productive would your company employees be if they could save two hours a day on regular tasks?

With the growth and evolution of today’s digital economy, companies face the challenge of managing increasingly complex business processes that involve massive amounts of data. This has also led to repetitive work, like requiring employees to manually perform data-intensive tasks when there are technologies available that could help free their time and automate tasks.


According to a WorkMarket report, 53 percent of employees believe they could save up to two hours a day by automating tasks; that equates to roughly 20 hours a month. Working on tasks that could easily be automated is probably not the best use of employees’ time, especially if your business is trying to improve productivity or customer service.

How automation and RPA bots can help improve social welfare

Let’s look at Ana, who is a social worker focused on child welfare and is entrusted with the safety and well-being of children. Like most employees, Ana does whatever it takes to get the job done. Her dilemma is that she spends up to 80 percent of her time executing repetitive, administrative tasks, such as typing handwritten notes and forms into agency systems or manually requesting verifications or background checks from external systems. This leaves only around 20 percent for client-facing activities, which is too low to improve long-term client outcomes.

Can automation make an immediate impact on the well-being of our children and improve the efficiency of the child welfare workers charged with their safety? Simply put, the answer is yes.

Social workers can shift their focus back to the important work they do with the help of proven automation technologies. Combining automation capabilities and services, such as automating tasks with robotic process automation (RPA) bots, extracting and classifying data from documents, and automating decisions, can make a significant and positive impact across the entire social services industry. Watch the video below to see how automation creates more time for child welfare workers to focus on helping vulnerable children by automating repetitive administrative work.


As you can see from the above video, Ana is able to offload a number of her repetitive, routine and administrative tasks to a bot, freeing her to spend more time and effort towards improving the lives of children. The intent of bots is to augment human worker roles for optimal work-effort outcomes, not replace them.

How hybrid workforce solutions help bring freedom


In the future of work, a hybrid workforce will emerge. In this hybrid workforce, bots will work seamlessly alongside human counterparts to get work done more efficiently and deliver exceptional experiences to both customers and employees. The hybrid workforce of the future will allow human employees to focus on inherent human strengths (for example, strategy, judgment, creativity and empathy).

We’ve been enabling IBM Cloud Pak for Automation, our automation software platform for digital business, to interoperate with more RPA solutions. This interoperability gives clients greater freedom of choice to execute according to their business objectives. Our newest collaboration is with Blue Prism, a market-leading RPA vendor.

While our customers are increasingly seeking RPA capabilities to complement digital transformation efforts, Blue Prism customers are building out capabilities to surround their RPA initiatives — including artificial intelligence (AI), machine learning, natural language processing, intelligent document processing and business process management.

To enable greater interoperability between automation platforms, IBM and Blue Prism jointly developed API connectors, available on Blue Prism’s Digital Exchange (DX). These API connectors will help customers seamlessly integrate Blue Prism RPA task automation technology with three key IBM Digital Business Automation platform capabilities: Workflow, Data Capture and Decision Management.

This technical collaboration offers clients an automation solution for every style of work, from immediately automating small-scale processes for efficiency and rapid return on investment (ROI) all the way to achieving a larger digital labor outcome through multiple types of automation.

Read the no-hype RPA Buyer’s Guide to learn how you can extend the value of your RPA investment by using an automation platform to establish new ways of working, maximize the expertise of your employees, lower operational costs and improve experiences for your employees.

Source: ibm.com

Thursday 11 February 2021

Hybrid cloud for accelerating discovery workflows

Hybrid cloud could ultimately enable a new era of discovery, using the best resources available at the right times, no matter the size or complexity of the workload, to maximize performance and speed while maintaining security.

Crises like the COVID-19 pandemic, the need for new materials to address sustainability challenges, and the burgeoning effects of climate change demand more from science and technology than ever before. Organizations of all kinds are seeking solutions. With Project Photoresist, we showed how innovative technology can dramatically accelerate materials discovery by reducing the time it takes to find and synthesize a new molecule from years down to months. We believe even greater acceleration is possible with hybrid cloud.


By further accelerating discovery with hybrid cloud, we may be able to answer other urgent questions. Can we design molecules to pull carbon dioxide out of smog? Could any of the drugs we’ve developed for other purposes help fight COVID-19? Where might the next pandemic-causing germ originate? Beyond the immediate impacts, how will these crises—and the ways we might respond—affect supply chains? Human resource management? Energy costs? Innovation?

Rather than guessing at solutions, we seek them through rigorous experimentation: asking questions, testing hypotheses, and analyzing the findings in context with all that we already know.

Increasingly, enterprises are seeing value in applying the same discovery-driven approach to build knowledge, inform decisions, and create opportunities within their businesses.

Fueling these pursuits across domains is a rich lode of computing power and resources. Data and artificial intelligence (AI) are being used in new ways. The limitless nature of the cloud is transforming high-performance computing, and advances in quantum and other infrastructure are enabling algorithmic innovations to achieve new levels of impact. By uniting these resources, hybrid cloud could offer a platform to accelerate and scale discovery to tackle the toughest challenges.

Take the problem of semiconductor chips and the pressing demands on their size, performance, and environmental safety. It can take years and millions of dollars to find better photoresist materials for manufacturing chips. With hybrid cloud, we are designing a new workflow for photoresist discovery that integrates deep search tools, data from thousands of papers and patents, AI-enriched simulation, high-performance computing systems and, in the future, quantum computers to build AI models that automatically suggest potential new photoresist materials. Human experts then evaluate the candidates, and their top choices are synthesized in AI-driven automated labs. In late 2020, we ran the process for the first time and produced a new candidate photoresist material 2–3x faster than before.

To scale these gains more broadly, we need to make it easier to build and execute workflows like this one. This means overcoming barriers on several fronts.

Today, jobs must be configured manually, which entails significant overhead. The data we need to analyze are often stored in multiple places and formats, requiring more time and effort to reconcile. Each component in a workflow has unique requirements and must be mapped to the right infrastructure. And the virtualized infrastructure we use may add latency.

We envision a new hybrid cloud platform where users can easily scale their jobs; access and govern data in multiple cloud environments; capitalize on more sophisticated scheduling approaches; and enjoy agility, performance, and security at scale. The technologies outlined in the following sections are illustrative of the kinds of full-stack innovation we believe will usher in these new workloads to the hybrid cloud era.

Reducing overhead with distributed, serverless computing

Scientists and developers alike should be able to define a job using their preferred language or framework, test their code locally, and run that job at whatever scale they require  with minimal configuration. They need a simplified programming interface that enables distributed, serverless computing at scale.

Many computationally intensive workloads involve large-scale data analytics, frequently using techniques like MapReduce. Data analytics engines like Spark and Hadoop have long been the tools of choice for these use cases, but they require significant user effort to configure and manage them. As shown in Figure 1, we are extending serverless computing to handle data analytics at scale with an open-source project called Lithops (Lightweight Optimized Serverless Processing) running on a serverless platform like IBM Code Engine. This approach lets users focus on defining the job they want to run in an intuitive manner while the underlying serverless platform executes: deploying it at massive scale, monitoring execution, returning results, and much more.

Figure 1. A serverless user experience for discovery in the hybrid cloud (with MapReduce example) 
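A minimal Lithops sketch of this MapReduce pattern might look like the following; it assumes a Lithops installation already configured with a serverless backend such as IBM Code Engine, and the data is purely illustrative.

```python
import lithops

# Map: count the words in one chunk of text.
def count_words(chunk):
    return len(chunk.split())

# Reduce: merge the partial counts into a single total.
def total(results):
    return sum(results)

chunks = [
    "to be or not to be",
    "that is the question",
    "whether tis nobler in the mind",
]

fexec = lithops.FunctionExecutor()        # backend and runtime come from the Lithops config
fexec.map_reduce(count_words, chunks, total)
print(fexec.get_result())                 # -> 16
```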

A team at the European Molecular Biology Laboratory recently used this combination to transform a key discovery workflow into the serverless paradigm. The team was looking to discover the role of small molecules or metabolites in health and disease and developed an approach to glean insights from tumor images. A key challenge they faced using Spark was the need to define in advance the resources they would need at every step. With a decentralized, serverless approach based on Lithops and what is now IBM Code Engine, the platform solved this problem for them by dynamically adjusting compute resources during data processing, allowing them to achieve greater scale and faster time to value. This solution enables them to process datasets that were previously out of reach, with no additional overhead, increasing the throughput of their data processing pipeline by an estimated ~30x and speeding time to value by roughly 4–6x.

Meshing data without moving it


Organizations often leverage multiple data sources, scattered across various computing environments, to unearth the knowledge they need. Today, this usually entails copying all the relevant data to one location, which not only is resource-intensive but also introduces security and compliance concerns arising from the numerous copies of data being created and the lack of visibility and control over how it is used and stored. To speed discovery, we need a way to leverage fragmented data without copying it many times over.

To solve this problem, we took inspiration from the Istio open-source project, which devised mechanisms to connect, control, secure, and observe containers running in Kubernetes. This service mesh offers a simple way to achieve agility, security, and governance for containers at the same time. We applied similar concepts to unifying access and governance of data. The Mesh for Data open-source project allows applications to access information in various cloud environments—without formally copying it—and enables data security and governance policies to be enforced and updated in real time should the need arise (see Figure 2 for more detail). We open-sourced Mesh for Data in September 2020 and believe it is a promising approach to unify data access and governance across the siloed data estate in most organizations.

Figure 2. A unifying data fabric intermediates use and governance of enterprise data without copying it from its original location

Optimizing infrastructure and workflow management


Today’s most advanced discovery workflows frequently include several complex components with unique requirements for compute intensity, data volume, persistence, security, hardware, and even physical location. For top performance, the components need to be mapped to infrastructure that accommodates their resource requirements and executed in close coordination. Some components may need to be scheduled “close by” in the data center (if significant communication is required) or scheduled in a group (e.g., in all-or-nothing mode). More sophisticated management of infrastructure and workflow can deliver cost-performance advantages to accelerate discovery.

We have been innovating on scheduling algorithms and approaches for many years, including work on adaptive bin packing and topology-aware scheduling approaches. We also worked closely with the Lawrence Livermore National Laboratory, and more recently Red Hat, on the next-generation Flux scheduler, which has been used to run leading discovery workflows with more than 10^8 calculations per second.
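As a toy illustration of what bin-packing-style placement means in this context (a deliberately simplified sketch, far removed from the production schedulers named above), consider greedily packing jobs with known core demands onto identical nodes:

```python
def first_fit_decreasing(job_demands, node_capacity):
    """Place each job on the first node with enough free capacity, largest jobs first.
    Real schedulers add topology awareness, gang scheduling, preemption and much more."""
    free = []                                   # remaining capacity per node
    placement = {}
    for job, demand in sorted(job_demands.items(), key=lambda kv: -kv[1]):
        for i, capacity in enumerate(free):
            if demand <= capacity:
                free[i] -= demand
                placement[job] = i
                break
        else:                                   # no existing node fits, so open a new one
            free.append(node_capacity - demand)
            placement[job] = len(free) - 1
    return placement

jobs = {"simulate": 12, "train": 7, "ingest": 5, "score": 3}   # hypothetical core demands
print(first_fit_decreasing(jobs, node_capacity=16))
```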

Bringing capabilities like these to the Kubernetes world will truly enable a new era of discovery in the hybrid cloud. Although the Kubernetes scheduler was not designed or optimized for these use cases, it is flexible enough to be extended with new approaches. Scheduling innovations that address the complex infrastructure and coordination needs of discovery workflows are essential to delivering advances in performance and efficiency and, eventually, optimizing infrastructure utilization across cloud environments (see Figure 3).

Figure 3. Evolve Kubernetes scheduling for discovery workflows.

Lightweight but secure virtualization


Each task in a discovery workflow is ultimately run on virtualized infrastructure, such as virtual machines (VMs) or containers. Users want to spin up a virtualized environment, run the task, and then tear it down as fast as possible. But speed and performance are not the only criteria, especially for enterprise users: security matters, too. Enterprises often run their tasks inside of VMs for this reason. Whereas containers are far more lightweight, they are also considered less secure because they share certain functionality in the Linux kernel. VMs, on the other hand, do not share kernel functionality, so they are more secure but must carry with them everything that might be needed to run a task. This “baggage” makes them less agile than containers. We are on the hunt for more lightweight virtualization technology with the agility of containers and the security of VMs.

A first step in this direction is a technology called Kata Containers, which we are exploring and optimizing in partnership with Red Hat. Kata containers are containers that run inside a VM that has been trimmed down to remove code that is unlikely to be needed. We’ve demonstrated that this technology can reduce startup time by 4–17x, depending on the configuration. Most of the cuts come from QEMU, the component responsible for emulating all the hardware the VM might need to complete a task. The key is to strike a balance between agility and generality, trimming as much as possible without compromising QEMU’s ability to run a task.

Bringing quantum to the hybrid cloud


Quantum computing can transform AI, chemistry, computational finance, optimization, and other domains. In our recently released IBM Quantum Development Roadmap, we articulate a clear path to a future that can use cloud-native quantum services in applications such as discovery.

The basic unit of work for a quantum computer is a “circuit.” These are combined with classical cloud-based resources to implement new algorithms, reusable software libraries, and pre-built application services. IBM first put a quantum computer on the cloud in 2016, and since then, developers and clients have run over 700 billion circuits on our hardware.
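For readers new to the term, a circuit in this sense is a short program of quantum gates followed by measurements. A minimal Qiskit example (our own illustration) looks like this:

```python
from qiskit import QuantumCircuit

# A small "unit of work": prepare a Bell state and measure both qubits.
bell = QuantumCircuit(2, 2)
bell.h(0)              # superposition on qubit 0
bell.cx(0, 1)          # entangle qubit 0 with qubit 1
bell.measure([0, 1], [0, 1])
print(bell.draw())
```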

In the Development Roadmap, we focus on three system attributes:

◉ Quality – How well are circuits implemented in quantum systems?

◉ Capacity – How fast can circuits run on quantum systems?

◉ Variety – What kind of circuits can developers implement for quantum systems?

We will deliver these through scaling our hardware and increasing Quantum Volume, as well as:

◉ The Qiskit Runtime to run quantum applications 100x faster on the IBM Cloud,

◉ Dynamic circuits to bring real-time classical computing to quantum circuits, improving accuracy and reducing required resources, and

◉ Qiskit application modules to lay the foundation for quantum model services and frictionless quantum workflows.

This roadmap and our work are driven by close collaboration with our clients. Their needs determine the application modules that we and the community are building. Those modules will require dynamic circuits and the Qiskit Runtime to solve problems as quickly, accurately, and efficiently as possible. Together, these capabilities will provide significant computational power and flexibility for discovery.

We envision a future of quantum computing that doesn’t require learning a new programming language and running code separately on a new device. Instead, we see a future with quantum easily integrated into a typical computing workflow, just like a graphics processor or any other external computing component. Any developer will be able to work seamlessly within the same integrated IBM Cloud-based framework. (See Figure 4.)

We call this vision frictionless quantum computing, and it’s a key element of our Development Roadmap.

Figure 4. Integration of quantum systems into the hybrid cloud.

Source: ibm.com

Tuesday 9 February 2021

Lowering TCO with Linux on IBM Power Systems

IBM Power System, IBM Tutorial and Material, IBM Learning, IBM Preparation, IBM Guides, IBM Career

Who could have predicted the success that Linux® would achieve when Linus Torvalds introduced its first release in 1991? Indeed, it could be argued that Linux has become the most popular operating system on the planet given that it runs on virtually every compute platform in use today. Its ubiquitous and portable nature enables organizations everywhere and of every size to leverage open standards and open-source community collaboration while exploiting architecture-specific attributes. Increasingly, organizations are choosing the IBM Power Systems™ platform over x86 to run their enterprise Linux workloads to gain dramatic IT cost savings.

Doing more with fewer cores

Both IBM® Power® processors and x86-based servers have made performance improvements over time. However, data from multiple sources show that while x86 servers may have increased in overall size, capacity, and system performance, the per-core performance of x86 multicore CPU offerings has remained relatively flat. In contrast, the IBM Power processor has increased its per-core performance by 35% on average with each new generation or technology release. For many Linux-based software packages, subscription and support licensing is typically priced per core (or socket). Reducing the number of cores required to run those packages can significantly decrease software costs. For a large European telecommunications company comparing Linux web application and database workloads on Skylake x86 blades and Linux on an IBM Power System E980 server, our analysis found that for every IBM POWER9™ core, the x86 solution required 10 Intel® Xeon Skylake cores. This core differential means the Power E980 solution would save the company $7 million over five years, with 74% of the savings coming from reductions in core-based subscription and support licensing for the system software, web and database tiers (Figure 1).

Figure 1
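To see how such a core differential flows into licensing costs, here is a hedged back-of-the-envelope sketch. The core count and per-core subscription price are hypothetical placeholders, not figures from the IBM IT Economics analysis.

```python
# Back-of-the-envelope core-based licensing comparison.
# The core counts follow the 10:1 ratio described above; the per-core
# subscription price and term are hypothetical placeholders, not figures
# from the IBM IT Economics analysis.
power_cores = 40                      # hypothetical POWER9 core count
x86_cores = power_cores * 10          # 10 Skylake cores per POWER9 core
price_per_core_year = 1500            # hypothetical subscription + support, USD
years = 5

power_licensing = power_cores * price_per_core_year * years
x86_licensing = x86_cores * price_per_core_year * years
print(f"POWER9 licensing over {years} years: ${power_licensing:,.0f}")
print(f"x86 licensing over {years} years:    ${x86_licensing:,.0f}")
print(f"Licensing savings:                   ${x86_licensing - power_licensing:,.0f}")
```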

Floor space and electricity


Another advantage of Linux on Power’s ability to do more work with fewer resources shows up in space and electricity costs. A large genomics research facility was able to reduce its physical footprint from 89 Linux on x86 servers to four Linux on IBM Power System AC922 NVLink-enabled GPU systems. The space savings came to 1,586 square feet, or roughly 340 rack units, while annual electricity usage was reduced by 558,000 kilowatt hours, equating to an annual cost savings of $500,000.
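As a rough illustration of how such consolidation savings add up, the sketch below multiplies the reductions cited above by hypothetical electricity and floor-space rates; the facility’s actual $500,000 figure reflects its own cost structure.

```python
# Illustrative annual savings from consolidation, using the reductions cited
# above. The electricity and floor-space rates are hypothetical placeholders.
kwh_saved_per_year = 558_000
sq_ft_saved = 1_586

electricity_rate = 0.15       # hypothetical USD per kWh
floorspace_rate = 250         # hypothetical USD per sq ft per year of data-center space

electricity_savings = kwh_saved_per_year * electricity_rate
space_savings = sq_ft_saved * floorspace_rate
print(f"Electricity: ${electricity_savings:,.0f}/year")
print(f"Floor space: ${space_savings:,.0f}/year")
print(f"Total:       ${electricity_savings + space_savings:,.0f}/year")
```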

Longer lifecycles, fewer refreshes


The Power architecture supports superior performance and lifecycle longevity for Linux workloads. Assessments performed by the IBM IT Economics team found that most Power users refresh their servers once every four to five years, while the customary refresh cycle for x86 users is once every three to four years. Over 10 years, that translates to roughly 33% fewer refreshes for Linux on Power than for Linux on x86. The result is fewer business disruptions and lower technology change-out costs, such as planned outages, systems administration labor, temporary parallel operation and step increases in software and hardware maintenance (Figure 2).

Figure 2

In the 2019 ITIC Global Reliability Survey, 88% of survey respondents indicated that the cost of a single hour of downtime now exceeds $300,000, or $5,000 per minute. For an IT department refreshing just 10 servers, each incurring 60 minutes of downtime, the cost would be $3 million. With roughly 33% fewer refreshes, Linux on Power could save an IT shop $1 million or more in refresh costs.
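The arithmetic behind these figures is easy to reproduce; the short sketch below uses the survey’s $5,000-per-minute downtime cost.

```python
# Reproducing the downtime arithmetic from the ITIC figures cited above.
cost_per_minute = 5_000          # $300,000 per hour of downtime
servers = 10
minutes_per_refresh = 60

refresh_cost = servers * minutes_per_refresh * cost_per_minute
print(f"Cost of refreshing {servers} servers: ${refresh_cost:,.0f}")   # $3,000,000

fewer_refreshes = 0.33           # roughly one-third fewer refreshes on Power
print(f"Potential savings: ${refresh_cost * fewer_refreshes:,.0f}")    # ~$1,000,000
```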

Reliability and recoverability


Linux on IBM POWER9 leverages unique underlying hardware and virtualization capabilities to provide a more secure, reliable and recoverable environment than x86. Linux workloads on Power can take advantage of IBM Power Systems Enterprise Pools, Capacity on Demand (CoD) and Live Partition Mobility (LPM) to deliver 24×7 availability. These Power processor-specific features enable compute resources to be efficiently managed and rerouted based on changing business needs without incurring the cost or overhead of x86 disaster recovery implementations that require dedicated (yet often idle) compute resources.

All systems are at risk of security threats and implementation vulnerabilities. The IBM Power processor minimizes these threats with the highest level of security in the industry by using the same security design principles with Linux on Power as with IBM Z®. POWER9 systems also use accelerated encryption built into the chip so that data is protected both in motion and at rest. IBM PowerVM®, the underlying, firmware-based virtualization layer that’s standard with POWER9 systems, has zero reported security vulnerabilities in the U.S. government’s National Vulnerability Database (NVD). VMware®, a common hypervisor for Linux on x86, had 188 vulnerabilities reported in the NVD over the last three years alone. According to the ITIC survey mentioned above, in 2018, Linux on Power users experienced a maximum of two minutes of unplanned downtime per server per year, or essentially 99.9996% uptime. Linux on x86 users experienced anywhere from 2.1 to 47 minutes of unplanned downtime per server, per year over the same time frame. At the survey’s $5,000-per-minute downtime cost, that gap equates to an advantage of as much as $235,000 per server, per year in avoided downtime costs for Linux on Power.

Ready your workloads for hybrid and multicloud


For cloud computing users, Linux on Power is leading the way for mission-critical applications in hybrid and multicloud environments, bolstered by the IBM acquisition of Red Hat®. Red Hat supercharges IBM Linux on Power capabilities with the addition of the Red Hat OpenShift® family of container software development and management tools. Red Hat’s recognition within the cloud and open-source communities has enabled IBM to provide integrated Power processor-based cloud offerings and IBM Cloud Paks® with notable cost savings. An example is SAP HANA® on IBM Power Systems virtualized in the IBM Cloud®. This Linux on Power cloud solution provides the benefits of running mission-critical SAP HANA on Power while tapping into the flexibility, reliability, security and performance advantages of Power to reduce IT costs.

An IBM IT Economics cost analysis for a large managed IT service provider in Latin America found cumulative cost savings over five years from running SAP HANA on Linux on Power rather than on Linux on x86. For this provider, the largest savings come from significantly lower costs for networking, storage and compute hardware. In a separate analysis, Forrester® found that the average Power user running SAP HANA on Linux could save $3.5 million over a three-year period compared to Linux on an alternative hardware platform such as x86.

The top contender


When making a platform selection to host Linux workloads, examine the technical and financial benefits of IBM Power Systems. For many organizations, Linux on Power is the top contender for the job.

Source: ibm.com

Saturday 6 February 2021

Labeyrie Fine Foods picks IBM Power Systems Virtual Server

IBM Study Material, IBM Learning, IBM Tutorial and Material, IBM Preparation, IBM Career

When your promise is to deliver premium foods across the globe, your IT infrastructure must be as reliable as your products and as agile as your supply chain. That’s why in November 2019 Labeyrie Fine Foods Group, an international purveyor of food products, chose to add IBM Power Systems Virtual Server to its tech stack.

Founded in southwestern France in 1946, Labeyrie Fine Foods aims to be THE world benchmark for premium, trendy and responsibly sourced products (seafoods, regional products and aperitifs) made for sharing. It stays ahead of the curve in anticipating clients’ and consumers’ growing expectations of products from sustainable value chains, especially organically farmed products that contain no artificial ingredients.

After a first project based on the IBM Food Trust blockchain solution to enhance the traceability of its smoked salmon, successfully implemented at the end of 2019, Labeyrie has taken a second step with a project to migrate its JD Edwards (JDE) ERP to IBM Power Systems™ Virtual Server. Labeyrie was able to migrate its existing on-premises workloads easily because the offering runs on the same Power technology stack as its on-premises systems.

The JDE ERP is a mission-critical workload for Labeyrie that serves as the backbone of their business and ensures the smooth running of Labeyrie’s management operations such as its supply chain.

Labeyrie naturally chose to partner with IBM on its ERP modernization project because of IBM’s position as the leading global and European integrator of Oracle’s JDE solutions and its expertise in 20 industrial sectors, including the food industry. In addition, IBM Power Systems Virtual Server for AIX® environments enabled Labeyrie to host its workloads in a European data center, which helped it stay compliant.

Integrating IBM Power Systems Virtual Server into its tech stack will allow Labeyrie to:

◉ Take advantage of the operational and financial benefits of cloud computing, including the option of hourly billable cloud resources for specific needs

◉ Better respond to business lines’ expectations by refocusing IT on value-added tasks such as integrating new products into its traceability project

◉ Leverage IBM’s team of subject matter experts in private infrastructure as a service and cloud adoption technologies

◉ Secure the quality of services over time thanks to an IaaS model offering a constantly evolving catalog of services

Now, with the ability to manage its Power Systems Virtual Server for AIX in one of the IBM European data centers, Labeyrie can access the enterprise-class POWER architecture in the cloud in a flexible way. Labeyrie has the compute power and agility it needs to modernize, adapt to business requests and assure the best quality of service to its users and end customers.

Source: ibm.com