Friday 30 July 2021

Sophisticated inventory promising: Your superpower for next-generation omnichannel experiences


Customer-centricity is the heart of every successful retailer. But as customers’ expectations quickly evolve, retailers are challenged with increasing sales and getting product out the door as quickly as possible – sometimes forgoing profitability and efficiency. As a result, a strategy initially designed to put the customer first can often lead to over-promising and result in disappointed shoppers facing late deliveries or cancelled orders.


Fortunately, it is possible to make promises that are reliable and profitable. And there’s no time to waste.

Why now?

Retailers have prioritized new omnichannel operations to meet customers’ needs, with just over 60% of retailers reporting that they are currently implementing curbside pick-up or BOPIS solutions, or planning to in the next 12 months. But in the rush to introduce seamless customer experiences and manage omnichannel complexity, many retailers haven’t considered their approach to promising and how it directly impacts customer experiences. In fact, 35% of retailers don’t have a well-defined omnichannel fulfillment strategy in place. Currently, retailers rely on what the system says is available for inventory and then calculate the shipping time to present a promise date. But without real-time inventory visibility, they can’t see what is actually available before they promise it to customers. Aside from inventory, there are many other nuances to consider before retailers can make profitable and accurate promises — labor and throughput capacity to prepare orders, node and carrier schedules, as well as cost factors and constraints.
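To make that gap concrete, here is a minimal, hypothetical sketch (in Python) of how a promise date might be computed once real-time availability, node backlog and carrier transit times are all taken into account. The data, node names and capacity figures are illustrative assumptions, not IBM Sterling logic.

```python
# Illustrative sketch (not IBM Sterling logic): computing a reliable promise
# date from real-time availability, node backlog/capacity and carrier transit.
from datetime import date, timedelta

# Hypothetical snapshot for one SKU across two fulfillment nodes.
inventory = {"store_12": 3, "dc_east": 40}            # units actually available now
daily_capacity = {"store_12": 25, "dc_east": 500}     # orders each node can pick/pack per day
backlog = {"store_12": 20, "dc_east": 120}            # orders already queued at the node
carrier_transit_days = {"store_12": 1, "dc_east": 3}  # transit time to this customer

def promise_date(qty: int, today: date):
    """Earliest reliable delivery date across nodes, or None if it cannot be promised."""
    best = None
    for node, on_hand in inventory.items():
        if on_hand < qty:
            continue  # never promise stock that is not actually available
        prep_days = backlog[node] // daily_capacity[node] + 1  # days until the order is picked
        eta = today + timedelta(days=prep_days + carrier_transit_days[node])
        if best is None or eta < best:
            best = eta
    return best

print(promise_date(qty=2, today=date(2021, 7, 30)))  # -> 2021-08-01 via store_12 here
```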

Every retailer wants to increase sales and get products into shoppers’ hands faster. However, if you can’t do this in a way that is efficient and reliable for customers, revenue and profitability suffer in the long term. Visibility and trust throughout the process are critical to every shopper experience. Customers often face disappointment when inventory they believed to be in stock is really unavailable or backordered at checkout. In fact, roughly 47% of shoppers will shop elsewhere if they cannot see inventory availability before they buy. Providing customers with early insight into estimated delivery dates is also critical; nearly 50% of shoppers would abandon their cart due to a mismatch between their expectations for the delivery date and the actual delivery date.

What’s changed?

Retailers need a modern solution that brings together real-time inventory visibility and advanced fulfillment logic to balance profitability and customer experience. That’s why we are proud to announce IBM Sterling Intelligent Promising, a solution designed to empower retailers with advanced inventory promising for next-generation omnichannel experiences, from discovery to delivery. Retailers can preserve brand trust by providing shoppers with greater certainty, choice, and transparency across their buying journey. And, this solution empowers retailers to increase digital conversions and in-store sales while achieving near-perfect delivery time SLA compliance, a reduction in split packages and shipping costs, and an improvement in inventory turns.

What’s the impact?

IBM Sterling Intelligent Promising is your superpower to enhance the shopping experience by improving certainty, choice and transparency across the customer journey.

➥ Certainty. With real-time inventory visibility and advanced, cost-based optimization, you can deliver accurate promise dates throughout the order process – on the product list page, product details page and during checkout – so that customers know exactly when an order is arriving, with no surprises. When customers can see what is truly available with reliable delivery dates, retailers can reduce order cancellations, improve cart conversion, and feel confident about their bottom line.

➥ Choice. Empower shoppers to find and filter by store and products based on availability, delivery date and fulfillment method. Make enterprise-wide inventory available to promise based on easy-to-configure business rules, enabling a faster delivery experience with more options for customers.

➥ Transparency. Provide clarity when items are low on inventory or backordered. With inventory intelligence and data from carriers and other processing and fulfillment factors, retailers can manage pre-purchase expectations around any uncertainty in on-time, in-full delivery due to potential supply chain disruptions.

Improve digital and in-store conversions. If shoppers can’t see available inventory online, they’ll shop elsewhere. Improve conversion by providing an accurate inventory view up-front. Go the extra mile by proactively suggesting related products as an upsell opportunity without changing shipping, labor costs or delivery dates. Optimization logic influences promising across the order journey and supports even the most complex scenarios such as orders that include third-party services (like furniture assembly) and multi-brand orders.

Increase omnichannel revenue and profitability. As shopper expectations rise and omnichannel becomes increasingly complex, retailers need to be able to optimize fulfillment at scale – across thousands of fulfillment permutations in milliseconds. Advanced analytics and Artificial Intelligence (AI) are essential to drive multi-objective optimization against configured business goals. Detailed “decision explainers” ensure that business users trust the recommendations, driving continuous learning and improving adoption. With this foundation in place, start making promises you can keep, while improving sales and profitability.

Early in the customer journey, prioritize business objectives like profitability and customer satisfaction while considering different fulfillment factors (distance, costs, capacity, schedules, special handling requirements) so you can determine the lowest cost for each customer in seconds. For increased accuracy, utilize aggregated data from carriers like shipping rates, transit times, and even cross border fees. Post-purchase, optimize cost-to-serve by combining shipping costs directly from carrier management providers and capacity across locations and resource pools, with predefined rules and business drivers. Use AI capabilities to predict sell-through patterns and forecast demand based on seasonality, reducing stockouts and markdowns to maximize revenue and improve margins.
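As a simplified illustration of this kind of multi-objective trade-off, the hypothetical Python sketch below scores candidate fulfillment nodes on weighted cost and speed objectives and filters out nodes without capacity. The weights, node names and figures are assumptions for illustration only, not the behavior of IBM Sterling Intelligent Promising.

```python
# Illustrative sketch of multi-objective node selection (not the product's
# algorithm): filter out nodes without capacity, then score the rest on a
# weighted blend of cost and delivery speed.
candidates = [
    # node, total cost to serve (shipping + labor), days to deliver, remaining capacity
    {"node": "store_07", "cost": 9.40, "days": 1, "capacity": 5},
    {"node": "dc_central", "cost": 6.10, "days": 3, "capacity": 900},
    {"node": "store_31", "cost": 11.80, "days": 1, "capacity": 0},
]

# Weights express the configured business goal, e.g. profitability vs. speed.
WEIGHT_COST, WEIGHT_SPEED = 0.7, 0.3

def best_node(options):
    feasible = [c for c in options if c["capacity"] > 0]
    max_cost = max(c["cost"] for c in feasible)
    max_days = max(c["days"] for c in feasible)

    def score(c):
        # Normalize so that lower cost and faster delivery both lower the score.
        return WEIGHT_COST * c["cost"] / max_cost + WEIGHT_SPEED * c["days"] / max_days

    return min(feasible, key=score)

print(best_node(candidates)["node"])  # -> "dc_central" with these weights
```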

As omnichannel continues to mature, make promises that are reliable and profitable. Start creating next-generation omnichannel experiences with sophisticated inventory promising. Get more details on how IBM Sterling Intelligent Promising can help you enhance the shopping experience and improve digital and in-store sales while increasing omnichannel conversions and profitability.

Source: ibm.com

Thursday 29 July 2021

Top 3 Data Job Roles Explained: A Career Guide

Data is the world's most valuable resource!

Data is not new, but it is growing at an incredible rate. The increasing interactions between data, algorithms, big data analytics, connected data and individuals are opening enormous new prospects. Enterprises and even economies have started building products and services on data-driven insights. With data powering so many innovative approaches, whether artificial intelligence, machine learning or deep learning, the ability to provide an agile environment to serve data workloads is critical. Data undoubtedly offers organizations the chance to enhance or redesign almost every part of their business model.

Engineers, researchers, and marketers of today could be the data scientists of tomorrow

According to data gathered by LinkedIn, Coursera and the World Economic Forum in the Future of Jobs Report 2020, it’s estimated that, by 2025, 85 million jobs may be displaced by a shift in the division of labor between humans and machines. Roles growing in demand include data analysts and scientists, AI and machine learning specialists, robotics engineers, software and application developers, and digital transformation specialists.

Top cross-cutting, specialized skills of the future


Data and AI skills are fast becoming essential digital skills required across all business disciplines.

Jobs of tomorrow


Roles growing in demand include data analysts and scientists, AI and machine learning specialists, robotics engineers, software and application developers, and digital transformation specialists.



Key roles in the Data Ecosystem


The lure of leveraging data for competitive edge is pushing organizations to become more data driven in their operations and decisions, and this shift has created enormous job opportunities across a range of data-related professions. The key data roles are Data Engineering, Data Analytics and Data Science.

Data Engineering entails managing data throughout its lifecycle and includes the tasks of designing, building, and maintaining data infrastructures. These data infrastructures can include databases – relational and NoSQL, Big Data repositories and processing engines – such as Hadoop and Spark, as well as data pipelines – for transforming and moving data between these data platforms.

Data Analytics involves finding the right data in these data systems, cleaning it for the purposes of the required analysis, and mining data to create reports and visualizations.

Data Scientists take Data Analytics even further by performing deeper analysis on the data and developing predictive models to solve more complex data problems.

A Deep Dive: Skills Needed for Data Professions


In terms of skills required to perform each of these roles, while there are some unique skills for each job role, there are also common skills that all data professionals need, although the level of proficiency required may vary.

IBM and Coursera hosted a very engaging webinar with 4 IBM Subject Matter Experts who discussed each of these roles in depth. The replay of the session can be found here:


Your burning Data career questions answered


We received over 300 questions before and during the webinar. Here are some of the most common questions received.

Q1. For someone who is a fresh graduate with no work experience, which is better: Data Science or Data Analyst? Also, what advice would you give me to make my resume stand out since I have no experience?

Both courses are good, but it would be great if you could start with one and then move on to the other. In terms of making a start with no work experience, that's a tough one. As a hiring manager, I tend to look for a portfolio in the form of:

1. School or side projects done during school
2. Work done on GitHub, with an emphasis on the project summary, how clean the code in the notebooks is, and what data modeling and visualization techniques have been applied

Basically, pick a passion project and create a portfolio based on it. Also, establish some presence by participating on relevant online forums, attend meetups, compete in hackathons, contribute to open source projects and submit proposals to trade journals and conferences. Finally, round out your resume with a diverse set of verifiable technical and non-technical skills.

Q2. I am mid-career with 10-15 years of experience looking to transition to a role in data. I have taken several online courses and earned badges. However, I am not being given option to pivot into DS roles due to lack of real-life experience. I also don’t want to start from scratch as a rookie. What advice do you have for me?

For these data roles in addition to technical skills and foundational mathematics you also need domain expertise. Since you’ve invested so many years in your career and have deep domain knowledge in your area, you should try to find jobs within related industries.  That way you will not be starting from scratch and be able to leverage your existing skills.

Q3. To get a holistic view of Data Science, is knowledge of Data Engineering essential? Are Data Engineers more technical compared to Data Scientists or Data Analysts?

Data engineering is not essential to get a holistic view of Data Science. The key skill that you need as a Data Engineer is a good base knowledge of SQL and the ability to work with databases. You also need to have some basic foundational concepts and knowledge about how data systems work but otherwise the fields are independent. As a data scientist you’ll be working with data engineers and other stakeholders in the corporation, but you don’t necessarily need to have data engineering skills. The truth of the matter is that there’s always the opportunity to start in one role and evolve into another by expanding your knowledge and gaining experiences as you go along.

Q4. I’ve been learning Python and have a decent grasp of the basics, but I haven’t tried using any libraries/packages. I also only have basic Excel skills. What should I learn next to be able to start applying for data roles?

You should start learning the basic data science libraries like NumPy and Pandas and try to complete a whole data science pipeline. Load a data set, process the data, do some summary statistics, visualize it and then create a machine learning model. You should also learn to use a Jupyter notebook.
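For example, a minimal version of that end-to-end pipeline might look like the sketch below, which assumes pandas, Matplotlib and scikit-learn are installed and uses scikit-learn's built-in Iris data set as a stand-in for your own data.

```python
# A minimal end-to-end pipeline of the kind described above: load data,
# inspect it, summarize, visualize, then fit a simple model.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# 1. Load a data set into a DataFrame.
iris = load_iris(as_frame=True)
df = iris.frame

# 2. Process / inspect the data and compute summary statistics.
print(df.head())
print(df.describe())

# 3. Visualize a relationship between two features.
df.plot.scatter(x="sepal length (cm)", y="petal length (cm)", c="target", colormap="viridis")
plt.savefig("iris_scatter.png")

# 4. Train and evaluate a simple machine learning model.
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="target"), df["target"], test_size=0.2, random_state=42
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```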

Q5. When it comes to data, there’s so many languages and disciplines to learn such as Python, SQL, R, RPA. Do you suggest learning a little bit of everything or specializing in one or two languages?

SQL is mandatory. Once you’ve mastered SQL, pick either Python or R, see which one you are more comfortable with, and stick with it. Then, even if you need to use a different programming language, making the transition will be much simpler: the constructs are basically the same, even if the nuances and syntax differ. Master one language and learn how to apply it.

Q6. How are the courses on Coursera going to help me get ready for an entry level job as a Data Engineer?

We’ve designed the program to prepare you for an entry-level role in data engineering not only by teaching you theory but also by applying the concepts learned in hands-on labs and projects. Every course in the Data Engineering program (and in Data Analytics and Data Science) has several hands-on labs and projects and provides exposure to many data sets. So, by the end of these programs you will have worked with a wide variety of data sets and several tools that data engineers use and apply. The projects leverage real databases and have you practicing with RDBMSes, data warehouses, NoSQL, big data, Hadoop, Spark, etc.

Q7. Will Coursera help me get a job after I have completed a professional certificate? How can I get a job at IBM?

All Professional Certificate completers get access to several career support resources to help them reach their career objectives. They also get access to the Professional Certificate community for peer support and the ability to network with alumni who have successfully made a career change.



Q8. Which courses will help me prepare for the 3 data roles discussed – Data Engineer, Data Analyst and Data Scientist?

There are numerous data courses available in English, Spanish, Arabic, Russian and Brazilian Portuguese. The programs aligned with the 3 job roles discussed in this article are:


Source: ibm.com

Saturday 24 July 2021

What is BOPIS in retail?


By the end of summer 2020, the share of retailers offering omnichannel options such as BOPIS jumped to 44%. Leading brands like Sally Beauty launched BOPIS capabilities nationwide last year as part of their ongoing digital transformation. “BOPIS is one of the ways Sally Beauty is working to connect customers’ in-store and online experiences, by offering ways to shop that are comparable to big-box retailers, with the added benefit of accessibility,” says Sally Beauty President John Goss.

What is BOPIS?

BOPIS is a retail sales strategy that has developed with the growth of eCommerce. It is an omnichannel retail tactic that offers shoppers the convenience of ordering and buying online or on their mobile device, and the ability to quickly pick up their purchase at a local store.

How does BOPIS strategy work?

BOPIS speeds order fulfillment, letting shoppers pick up their purchases much more quickly than if they were to opt for home delivery. Shoppers can have their purchases in a matter of hours, compared to several days or more. BOPIS can also speed up a shopper’s in-store experience if that is what they are seeking. BOPIS strategy boils down to improving the customer experience. Additionally, BOPIS can often drive in-store traffic and additional purchases with one study reporting that approximately 50% of all shoppers make additional purchases when picking up in store.

BOPIS strategy connects the online and in-person retail experiences – supporting a stronger omnichannel customer journey. It allows retailers to differentiate themselves and become more competitive. Ultimately, BOPIS empowers retailers to meet customer demands, lower shipping costs, and even up-sell/cross-sell products and services as shoppers enter the store.

What are the benefits and advantages of a BOPIS strategy in retail?

BOPIS has become an essential component of a modern omnichannel retail strategy. Let’s explore some of the benefits in detail:

Customer satisfaction and convenience: The simple fact is, shoppers want the BOPIS option. A recent study shows that 62% of respondents expect retailers to offer curbside pickup from now on, and 71% expect buy-online-pickup-in-store (BOPIS) to be permanently available. Whether on a break from work or sitting on their couch at home with their tablet, shoppers spend a great deal of time browsing and researching a product online before they go into a store to purchase. And for those looking to have their purchase in their hands today, BOPIS is a great option. It not only offers shoppers more flexibility, but it also helps them maintain relationships with their favorite retailers.

Faster fulfillment: BOPIS offers quicker and often less complex order fulfillment, presuming the retailer has the necessary underlying technology in place. When a customer selects BOPIS, they gain greater control over when and where the product is picked up/delivered, in most cases allowing for immediate pick up. That beats the average 2-3 day delivery time for most online orders.

Eliminate shipping costs: BOPIS allows customers and retailers to eliminate shipping costs for an order. For some customers, eliminating the shipping costs is a driver to purchase via BOPIS. For retailers, BOPIS reduces time, cost and manual labor in picking, packing and shipping an order. When consumers pick up in store, companies get savings on last mile delivery, packaging and overall logistics costs. That last mile from the distribution center or warehouse often can be one of a retailer’s bigger costs.

Reduced, agile inventory: With the proper underlying order management or inventory visibility solution, BOPIS orders can be fulfilled from multiple channels, commonly from either a distribution center or retail location(s). With real-time inventory visibility, retailers can determine how best to fill the order and draw upon different inventory locations for fulfillment and replenishment, allowing them to lower overall inventory carrying costs.

Reduce costs, increase profits: BOPIS reduces shipping and fulfillment costs, as well as inventory carrying costs. There is strong evidence it also reduces returns, which saves processing costs and increases purchases – all contributing to overall increased profitability. Another revenue consideration: 60% of online carts are abandoned because of unexpected fees, primarily shipping fees. BOPIS can convert frustrated online browsers into buyers.

What are BOPIS best practices?

Customer satisfaction: Perhaps the most important best practice when employing a BOPIS capability and strategy is to ensure a positive customer experience. This requires a smooth online experience that provides the customer with clear instructions about how to pick up in store. BOPIS is a feature that consumers desire and appreciate – don’t inhibit the experience. Make it easy and rewarding for the shopper.

In-store experience: Prioritize exceptional in-store experiences when executing your BOPIS strategy. In-store pickup should be as simple and quick as possible. Studies suggest up to 60% of shoppers make additional purchases when picking up in store. But don’t give in to the temptation of forcing BOPIS into an in-store selling opportunity. Instead, create a dedicated, visible space for in-store pickup – and don’t place the BOPIS pickup location at the back of the store. Best practice is to put the pickup location at the front of the store, and potentially to offer pickup lockers as well. Finally, be sure to train store associates in BOPIS protocols.

Customer confidence: Consumers want choices and know the options available in the market – whether it’s standard shipping, same-day delivery, in-store shopping or curbside. Make your available fulfillment options clear to them throughout the buying journey. For a successful omnichannel strategy, you must meet customers when and where they want to buy.

Real-time inventory visibility: A successful BOPIS strategy requires real-time visibility into inventory across your entire network. Without such visibility, BOPIS can go wrong quickly. For instance, a customer might order online to pick up in store, only to find when they arrive that the item was sold to another customer and the store is now out of stock. Real-time inventory visibility not only eliminates this problem, but it also provides you flexibility to move and replenish inventory from different channels to support BOPIS and sales in general.

Modern technology: Real-time inventory visibility can be achieved with a standalone solution to enable BOPIS and drive business value. However, to be best-in-class in omnichannel retail, businesses should have modern order management technology combined with an inventory visibility solution to streamline the entire order management process and information flow. A modern, single platform for order management can accelerate transformation by simplifying technology and implementation complexity to deliver omnichannel order fulfillment capabilities from BOPIS to curbside pickup and ship from store (SFS).

What are the challenges associated with BOPIS?

As with any undertaking, there are challenges with BOPIS, as well. The most notable challenges relate to ensuring inventory accuracy and availability, which requires real-time visibility. For many retailers, this is a hurdle to overcome if their supporting technology is dated or siloed. An order management and inventory visibility solution goes a long way in eliminating hurdles and establishing a platform to accelerate BOPIS and other retail strategies.

There are in-store challenges to overcome, too. Dedicating prime, visible and easily accessible space for BOPIS pickup shoppers, and ensuring associates are trained in BOPIS protocols, are both essential for success.

Auburn University has developed a scorecard to evaluate retailers on their execution of omnichannel and BOPIS strategies and outcomes. This can be a helpful tool to fully understand the capabilities required and benchmarks against which to measure BOPIS performance.

It’s important that retailers understand and overcome these challenges because BOPIS is here to stay. BOPIS is not only good for business, but for customer satisfaction. More than 43% of the top 500 retailers are now employing a BOPIS strategy – and 56% of consumers report that they use BOPIS, a number that surged during the COVID-19 pandemic.

Technologies that enable retail and BOPIS strategies

Having the right underlying technology is key to successful execution of BOPIS, particularly in ensuring the right product is in the right place at the right time to satisfy a customer’s fulfillment preference. One of the most essential solutions is real-time inventory visibility, complemented by a modern order management solution. A comprehensive order management platform, further enhanced with Artificial Intelligence (AI) capabilities, will provide added advantages that optimize the overall omnichannel experience. Working with many of the world’s leading retailers, IBM offers a portfolio of solutions to help modernize and extend retail strategy and capabilities:


IBM Sterling Inventory Visibility provides a real-time, accurate view of your inventory and demand across warehouses and distribution centers, including what’s in-store and in-transit, allowing you to protect margins, increase customer satisfaction and grow sales. The cloud-based solution leverages your existing systems and data sources to provide a single source of inventory visibility and analysis, processing and updating inventory information at extremely high speed and volume.

IBM Sterling Order Management accelerates transformation by simplifying technology and implementation complexity to deliver omnichannel order fulfillment capabilities such as curbside pickup, BOPIS and SFS. Empower your business to maximize results by managing business rules that are right for your customers and your business. With real-time inventory management, you can match stock to demand and manage inventory turns. The solution provides an intuitive interface with easy-to-use functionality, so you don’t have to rely on help from IT support staff. Use configurable features for order capture, including real-time inventory through to fulfillment, to power customer experiences that grow sales while improving profitability.

IBM Sterling Fulfillment Optimizer with Watson is an AI-enabled fulfillment analytics solution that elevates existing order management and inventory visibility systems to provide deeper fulfillment intelligence and capabilities to assist you in understanding and evaluating the factors impacting fulfillment performance. The solution, with its personalized dashboard, breaks down data silos to help you monitor developments and trends in demand, inventory and fulfillment. With Sterling Fulfillment Optimizer, retailers are better able to understand and act on changes in their channels and the market as they occur – and take actions to protect margins, utilize capacity, meet delivery commitments and exceed customer expectations. The solution can dramatically improve productivity and increase profits, especially during peak periods.

Source: ibm.com

Thursday 22 July 2021

Implement a zero trust strategy for your file transfers


The recent Kaseya ransomware attack is yet another reminder of the ferocity of the war cybercriminals are waging on the business world. In 2020, scan-and-exploit became the top initial attack vector for surveyed organizations, surpassing phishing, according to the 2021 IBM X-Force Threat Intelligence Index. The report goes on to note that manufacturing was the second-most attacked industry in 2020 for respondents, up from eighth place the year prior, and second only to financial services.

What’s behind these attacks?

Companies have invested a great deal in building castle-and-moat protections against external threats, focusing on protecting the DMZ or perimeter zone. In a world of known threats and less sophisticated techniques, this protection model worked reasonably well. But times have changed. 

Cybercriminals can be well resourced and tenacious and even backed by nation-states. They can leverage ever more sophisticated tools, such as Ransomware-as-a-Service, and can be incentivized by cryptocurrencies with their strong liquidity and poor traceability. As a result, they are well positioned in the arms race against traditional perimeter defenses. Clearly, it is time to consider a zero trust approach to help protect your most valuable resource—your data.

The rise of zero trust 

The problem with the castle-and-moat model is the primary focus on external defenses. Once inside, cybercriminals can generally move freely around without much impediment and wreak havoc. This has led to a broadening of the security perspective to encompass internal security, with what is termed the zero trust model. 

The Biden administration in the United States recently issued an Executive Order calling for advancement towards a zero trust model within the federal government and among federal contractors. Subsequently, in response to multiple high-profile ransomware attacks, the White House also issued a memo to business executives urging them to protect against the threat of ransomware. Such a model is an “evolving set” of concepts that move beyond “defenses from static, network-based perimeters,” according to the National Institute of Standards and Technology (NIST).

When a cybercriminal or organization has breached a perimeter and has access to your secure environment, typically they will start a stealth scan to build a map of your network. They will enumerate the server they are on for all its credentials and then will try those credentials on your other servers to travel laterally. Most breaches move from computer to computer over standard protocols such as SSH, FTP, SFTP, HTTP, and HTTPS. This means you need to have a strategy for restricting the spread or movement within your organization.  

Zero trust to protect your file transfers 

At IBM, our Sterling Secure File Transfer (SFT) solution is designed to align with a zero trust approach and harden servers to help reduce the possibility for ransomware or malware to travel laterally. The aim is to protect the inside of the castle – or inside the DMZ – to help safeguard internal intellectual property and assets. A zero trust approach requires securing and regulating movement between internal computers and servers and we begin by removing untrusted protocols.  

Our SFT solution is designed to include IBM Sterling Connect:Direct, which uses a security-hardened protocol. When malware reaches out internally, it will not know how to ‘talk’ to the protocol. The solution can also check the IP address of the server that has requested access, and if that IP address is not on the internal list of trusted servers, which can be continually updated, the receiving server automatically drops the session.

In addition to these two internal security checkpoints, Connect:Direct can apply additional checkpoints to further help prevent the spread of malware to another server. The malware also needs the correct credentials, which can be strengthened for additional protection of high-value servers, and only files with a specified name may be transferred.

Each server that uses Connect:Direct becomes a checkpoint – and choke point – for malware. This zero trust approach in Connect:Direct hardens infrastructure and includes capabilities for zero trust practices for communications that can help mitigate the risks of traditional protocols such as FTP, SFTP and SSH. SFT can also encrypt data at rest and in transit, and provides multifactor authentication, helping you implement a zero trust strategy for your file transfers.
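To make the checkpoint idea concrete, here is a deliberately simplified sketch of that kind of session gating: a trusted-server allowlist, a credential check and a file-name restriction. It illustrates the zero trust pattern described above; it is not IBM Sterling Connect:Direct code, and all names and values in it are hypothetical.

```python
# Illustrative sketch only -- not IBM Sterling Connect:Direct code. An inbound
# transfer session is dropped unless the peer is on the trusted-server list,
# presents valid credentials, and requests an approved file name.
import fnmatch

TRUSTED_SERVERS = {"10.0.4.21", "10.0.4.22"}    # internal allowlist, kept up to date
APPROVED_FILE_PATTERNS = ["payroll_*.csv"]       # only files with a specified name may move
CREDENTIALS = {"batch_svc": "s3cr3t-example"}    # hypothetical service account

def accept_session(peer_ip: str, user: str, password: str, filename: str) -> bool:
    if peer_ip not in TRUSTED_SERVERS:
        return False                             # unknown server: drop the session
    if CREDENTIALS.get(user) != password:
        return False                             # wrong credentials: drop
    if not any(fnmatch.fnmatch(filename, p) for p in APPROVED_FILE_PATTERNS):
        return False                             # unexpected file name: drop
    return True

print(accept_session("10.0.9.99", "batch_svc", "s3cr3t-example", "payroll_2021_07.csv"))  # False
print(accept_session("10.0.4.21", "batch_svc", "s3cr3t-example", "payroll_2021_07.csv"))  # True
```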

So, if you have a traditional castle-and-moat security model, I urge you to consider implementing or expanding your zero trust strategy to help protect what is most valuable inside of your organization. You can start small and add more protections over time. The key is to begin now because the war will continue to escalate.

Source: ibm.com

Tuesday 20 July 2021

How to choose the right IT consumption model for your storage


Evolving business models and IT strategies are quickly changing the way businesses consume and pay for their data storage. The adoption of hybrid cloud has spurred growing demand for consumption-based IT as an alternative to traditional cash purchases and leases.

In response, many vendors are offering flexible consumption or “pay-per-use” pricing models and subscriptions that bring cloud-like economics to the data center and hybrid cloud. By 2026, the global Storage as a Service market is projected to reach USD 100.21 billion – up from USD 11.63 billion in 2018 – according to Verified Market Research.

With so many deployment types and IT consumption models suddenly available, it can be difficult to know which one is right for your storage strategy. In this blog post, we’ll outline the main types so that you can make an informed investment decision.

What is consumption-based pricing?

Consumption-based pricing refers to products and services with usage-based billing, meaning you pay only for the capacity you need as you consume it. These models can help you save money by replacing high upfront capital expenses with predictable quarterly charges aligned directly with your changing business needs. The idea is that you can quickly scale by consuming storage capacity instantly, provisioning the resources up or down as needed. Many variations exist, and most programs have a minimum term and/or capacity requirement. 

Types of deployment and IT consumption models

Many vendors today offer choices in storage consumption models and financing. Having this flexibility of choice will help you to modernize and scale your workloads for future business needs. Common deployment and IT consumption models for storage include:

◉ Traditional purchase model. Most organizations continue to keep select workloads on premises to meet security, compliance and performance requirements. On-premises infrastructure, such as storage, has traditionally been an upfront or leased capital expense in which you purchase infrastructure that is deployed in your data center and will meet your maximum capacity requirements. But budgeting for on-premises infrastructure can be tricky — needs can be difficult to predict, and because procurement cycles for new storage systems can be lengthy, most organizations overprovision their on-premises storage.

◉ Consumption-based (“pay-per-use”) model for on-premises storage. In these models the vendor provides you with storage systems as defined by your requirements, with 25% to 200% (the level varies greatly by vendor) more “growth” capacity than your immediate needs. You buy, lease or rent a committed level of “base capacity” that equates to your immediate needs, and you then pay for what you use, when you use it, above that level (see the worked example after this list). These models allow you to scale capacity use up or down as your business needs dictate. They usually have terms of 3 to 5 years.

◉ Subscription-based, or Storage as a Service. Like the consumption-based model above, these models have base commitments and pay-for-use above the base commitment level. The big difference is that Storage as a Service (STaaS) is a service offering much like cloud-based services. STaaS provides fast, on-demand capacity in your data center. You pay only for what you use, and the vendor takes care of the lifecycle management (deployment, maintenance, growth, refresh and disposal). The offering will be based on a set of service level descriptions indicating things such as levels of performance and availability with no direct tie to specific technology models and configuration details. The infrastructure still physically resides in your data center, or in a co-location center, but you don’t own it, install it, or maintain it. In addition, you don’t have to worry about procurement cycles for adding capacity or technology refreshes. You gain cloud economics with an OPEX pricing model, combined with the security of on-premises storage and lower management overhead.

◉ Cloud-only approach. Cloud services are readily scalable and can be easily adjusted to meet changing workload demands. Deploying in the public cloud can reduce spending on hardware and on-premises infrastructure because it is also a pay-per-use model. In the perfect utility-based model, you would pay only for what you use, with guaranteed service levels, set pricing and predictable quarterly charges aligned directly with your business needs. Of course, many of today’s clouds do not meet that standard. In addition to charging for the amount of capacity consumed, some cloud storage providers also include charges for the number of accesses and for amount of data transferred out of the cloud, referred to as “egress.”

◉ Hybrid approach. A hybrid approach to storage would integrate a mix of services from public cloud, private cloud and on-premises infrastructure, with orchestration, management and application portability delivered across all three using software-defined management. It can also include consumption-based pricing and subscription-based services for on-premises storage.
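As the worked example promised above, the short sketch below computes a quarterly storage charge from a committed base capacity plus pay-per-use overage, the billing pattern used by the consumption-based and Storage-as-a-Service models. All rates and capacities are illustrative assumptions, not vendor pricing.

```python
# Worked example (illustrative numbers only) of consumption-based billing:
# a committed base capacity billed at a fixed rate, plus pay-per-use charges
# for anything consumed above the base in a given quarter.
BASE_CAPACITY_TB = 100          # committed base capacity
BASE_RATE_PER_TB = 18.0         # USD per TB per quarter for the base commitment
OVERAGE_RATE_PER_TB = 25.0      # USD per TB per quarter for usage above the base

def quarterly_charge(used_tb: float) -> float:
    overage = max(0.0, used_tb - BASE_CAPACITY_TB)
    return BASE_CAPACITY_TB * BASE_RATE_PER_TB + overage * OVERAGE_RATE_PER_TB

for used in (80, 100, 130):
    print(f"{used} TB used -> ${quarterly_charge(used):,.2f}")
# 80 TB  -> $1,800.00  (the base commitment is billed even below the base)
# 100 TB -> $1,800.00
# 130 TB -> $2,550.00  (base plus 30 TB of overage)
```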

Benefits of a flexible consumption model for storage

Now that you know the common types of deployment and IT consumption models, let’s explore a few reasons why consumption-based models are increasingly popular. Benefits of consumption-based pricing for storage include:

◉ Cloud economics – move from CAPEX to OPEX. In a time of shrinking IT budgets, consumption-based pricing allows you to reduce capital spending with predictable monthly OPEX charges, and you pay only for what you use.

◉ Align IT resources and usage. With monitoring included, you’ll be able to understand and more accurately predict your capacity usage for more cost-efficient operations. This means no more overprovisioning or running out of capacity, and instead, you can align spending more closely to the needs of the business.

◉ Gain agility. With consumption-based IT you have extra capacity to provision almost instantly to meet changing business needs. No more delays due to long procurement and vendor price negotiation cycles. You get a cloud-like experience.

◉ Reduce IT complexity. The vendor assumes storage life-cycle management responsibilities, which means your admin staff can focus on higher value tasks. And with consistent data services on-premises and in the cloud, you can improve availability and avoid costly downtime, all while reducing overhead.

◉ Access to the latest innovations in technology. As-a-service models give organizations access to leading storage technology with enterprise-class features for superior performance, high availability and scalability.

Source: ibm.com

Thursday 15 July 2021

NanoDx licenses IBM CMOS-compatible nanosensors for rapid COVID-19 testing tech

Rapid but accurate low-cost testing could play an essential role in containing pandemics.


If the COVID-19 pandemic has taught us anything, it’s that we need to be prepared for the next global health crisis.

With that in mind, our teams at IBM Research and medical device company NanoDxTM are announcing that NanoDx is licensing IBM’s nanoscale sensor technology for use in NanoDx’s diagnostic platform. IBM’s technology was developed with the goal of advancing sensor CMOS technology.


NanoDx plans to use this sensor technology for diagnostic platforms designed to provide rapid, accurate and inexpensive detection of different diseases. NanoDx also plans to use this technology, in the field of in vitro diagnostics as well as biosensors, to advance efforts to rapidly and accurately diagnose a variety of medical conditions, including COVID-19, different forms of influenza, traumatic brain injury (TBI), sepsis and stroke.

The IBM-designed nano biosensors are complementary metal-oxide-semiconductor (CMOS)-compatible, which means they may be manufactured more cost-effectively and rapidly in high volume. When integrated with automation circuitry, these tiny sensors may enable NanoDx’s real-time, point-of-care diagnostics technology to detect and quantify biomarkers from small fluid specimens in less than two minutes. This collaboration is significant because it offers a healthcare use case for IBM’s CMOS hardware technology.

Figure 1: NanoDx handheld with COVID cartridge.

Pandemic pivot

Prior to the pandemic, NanoDx had been working on a proprietary diagnostic platform that the company believed could rapidly detect early signs of TBI. Their sensing platform was designed to detect the presence of biomarkers in a small fluid specimen by monitoring changes in the sensing current signal.

With the advent of COVID-19, NanoDx needed to find a way to mass produce the tens of millions of units needed to satisfy projected demand, as well as manage more complex diseases that require more complicated multiplexing capabilities. Fortunately, NanoDx had held preliminary discussions with IBM Research several years ago with the objective of incorporating our technology into its existing process and recognized the potential of our technology.

Concurrently, our IBM Research team independently recognized what we believed to be drawbacks in existing diagnostic technology and had been working on developing enhanced biosensors that address these issues. Leveraging our expertise in device physics, materials and CMOS technology, our team was able to create biosensors with significantly enhanced sensing characteristics that could potentially be produced at extremely high production volumes with expanded multiplexing at a lower cost.

Furthermore, these sensors could be cost effectively mass produced in a CMOS foundry with existing tooling. The details of this research are published in “Silicon Nanowire Field Effect Transistor Sensors with Minimal Sensor-to-Sensor Variations and Enhanced Sensing Characteristics.”

NanoDx saw the publication and was interested in working with us in the field of in vitro diagnostics, as well as biosensors on their TBI detector. Soon after, when the COVID-19 pandemic hit, NanoDx approached IBM again. They wanted to modify their device for rapid COVID-19 testing and thought IBM Research’s nanoscale sensor technology could help them address this new challenge with the ability to cost-effectively scale to significant production volumes.

NanoDx’s goal is to create accurate, rapid and low-cost handheld diagnostic devices that would be available to consumers for at-home testing. 

Despite the strides made in containing the COVID-19 pandemic, there is still room for more accurate testing, as well as easily manufacturable devices that could be used to get ahead of any future pandemics. Through NanoDx, IBM Research’s innovative nanoscale technology may now play a crucial part in rapid tests for COVID-19 and other health conditions.

Source: ibm.com

Tuesday 13 July 2021

Data resilience and storage — a primer for your business


Data resilience has become increasingly vital to modern businesses. Your ability to protect against and recover from malicious attacks and other outages greatly contributes to your business success. Resilient primary storage is a core component of data resilience, but what is it exactly?

Read on to get answers to important questions about data resilience and to see how resilient primary storage for your data can help your business thrive.

What is data resilience?

Data resilience is the ability to protect against and recover quickly from a data-destructive event, such as a cyberattack, data theft, disaster, failure or human error. It’s an important component of your organization’s overall cyber resilience strategy and business continuity plan.

Keeping your data — and your entire IT infrastructure — safe in the event of cyberattack is crucial. A 2020 report by Enterprise Strategy Group found that 60% of enterprise organizations experienced ransomware attacks in the past year and 13% of those organizations experienced daily attacks. Each data breach, according to the Ponemon Institute, can cost an average of USD 3.86 million. By 2025, cybercrime costs are estimated to reach USD 10.5 trillion annually, according to Cybersecurity Ventures.

In addition to combating malicious attacks, data resilience is vital to preventing data loss and helping you recover from natural disasters and unplanned failures. Extreme weather events such as floods, storms and wildfires are increasing in number and severity, and affect millions of people and businesses all over the world each year. In 2018, the global economic stress and damage from natural disasters totaled USD 165 billion, according to the World Economic Forum in their 2020 Global Risks Report.

While the first order of business is to prevent data-destructive events from occurring, it’s equally important to be able to recover when the inevitable happens and an event, malicious or otherwise, takes place.

Your preparedness and ability to quickly respond hinges on where you are storing your primary data. Is the solution resilient? Ensuring your data stays available to your applications is the primary function of storage. So, what are the characteristics of resilient primary storage that can help?

5 characteristics of a resilient storage solution

A resilient storage solution provides flexibility and helps you leverage your infrastructure vendors and locations to create operational resiliency – achieving data resilience in the data center and across virtualized, containerized and hybrid cloud environments.


Characteristics of resilient primary storage include:

1. 2-site and 3-site replication: capable of traditional 2-site and 3-site replication configurations – on premises, on cloud, or hybrid – using your choice of synchronous or asynchronous data communication. This gives you confidence that your data can survive a localized disaster with very little or no data loss, also known as recovery point objective (RPO).

2. High availability: the ability to regain access to your data quickly, in some cases immediately, which is also known as recovery time objective (RTO); a worked example of RPO and RTO follows this list. Resilient storage has options for immediate failover access to data at remote locations. Not only does your data survive a localized disaster, but your applications have immediate access to alternate copies as if nothing ever happened.

3. Enhanced high availability: multi-platform support. This means RPO/RTO options available regardless of your choice in primary storage hardware vendors or public cloud providers.

4. Immutable copy: making copies that are logically air-gapped from the primary data, and further making that copy unchangeable, or immutable, in the event your primary data copy becomes infected.

5. Encryption: protecting your data from bad actors and guarding against prying eyes or outright data theft.
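Here is the worked RPO/RTO example promised in the list above. The timestamps are hypothetical; the point is simply how the two metrics are measured.

```python
# A worked example with hypothetical timestamps. RPO measures how much recent
# data you can afford to lose (the gap back to the last good replicated copy);
# RTO measures how long until applications regain access after an event.
from datetime import datetime

last_replicated_copy = datetime(2021, 7, 13, 11, 45)  # last successful sync to the remote site
outage_started       = datetime(2021, 7, 13, 12, 0)   # primary site goes down
service_restored     = datetime(2021, 7, 13, 12, 20)  # failover completes, apps back online

rpo_achieved = outage_started - last_replicated_copy  # data written in this window is lost
rto_achieved = service_restored - outage_started      # applications were unavailable this long

print("Achieved RPO:", rpo_achieved)  # 0:15:00 -> at most 15 minutes of data lost
print("Achieved RTO:", rto_achieved)  # 0:20:00 -> 20 minutes to restore access
```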

How can I ensure my organization has data resilience?


Many organizations have a mix of different on-premises storage vendors or have acquired storage capacity over time, meaning they have different generations of storage systems. Throw in some cloud storage for a hybrid environment and you may find it quite difficult to deliver a consistent approach to data resilience.

A first step is modernizing the storage infrastructure you already have. Fortunately, this is not something that requires you to wait for a lease to expire or for data growth to drive a new hardware purchase. You can get started right away with software-defined storage from IBM on your existing storage from almost any vendor.

IBM FlashSystem® and IBM SAN Volume Controller, both built with IBM Spectrum Virtualize software, will include a Safeguarded Copy function that creates immutable (read-only) copies of your data to protect against ransomware and other threats. This functionality is also available on IBM Storage for mainframe systems.

Additionally, you can combine the data resilience capabilities of IBM FlashSystem and IBM Spectrum® Protect Plus to create a highly resilient IT infrastructure for on-premises, cloud and containerized environments. IBM Spectrum Protect Plus is available at a special rate when purchasing a FlashSystem 5000 or 5200.

Source: ibm.com

Saturday 10 July 2021

AI discovery of a new way to enter a cell could boost molecular design

ACS Nano cover story describes how AI-designed antimicrobial peptides interact with computational models of a cellular membrane.



Serendipity.

Happy accidents have long helped scientists discover new materials. And it’s serendipitous insights from physical modelling and AI-boosted computer simulation that have led us to an unexpected finding that could create more opportunities for molecular design.

All forms of life rely on the cellular membrane as a biological defense mechanism against bacterial and viral pathogens, and membrane disruption typically happens through circular pores. Having studied how molecules disrupt or permeate a cell’s membrane, our team of researchers from IBM, Oxford, Cambridge and the National Physical Laboratory has discovered a fundamentally new, extremely sensitive mechanism of biological membrane disruption.

In our paper, “Switching Cytolytic Nanopores into Antimicrobial Fractal Ruptures by a Single Side Chain Mutation,” published in ACS Nano and selected for the cover (pictured), we describe how AI-designed antimicrobial peptides interact with computational models of a cellular membrane. We discovered that a carefully chosen but minimal chemical modification (a single mutation in the amino acid sequence) alters the nature of the interaction of the peptide with the membrane—both penetration depth and orientation. Amino acids are the body’s basic building blocks that form chains—peptides and proteins.

Figure 1: June 2021 cover of ACS Nano featuring the research paper “Switching Cytolytic Nanopores into Antimicrobial Fractal Ruptures by a Single Side Chain Mutation,” which describes how AI-designed antimicrobial peptides interact with computational models of a cellular membrane.

In lab-based experiments using cutting-edge imaging at the nanoscale, we found that this mutation has unexpectedly large effects. It switches the molecular mode of action from conventional circular pore formation to a previously unknown fractal rupture mode that still shows strong antimicrobial potency, or effectiveness. Fractals are self-similar patterns frequently occurring in nature—for example, in snowflakes and Romanesco broccoli—but until now haven’t been observed in cellular membranes, simulated or real.

Our results show that minimal chemical changes can have very large functional consequences. Understanding how to control these processes can help us tune material properties from the molecular scale upwards.

Figure 2: Growth dynamics in fractal ruptures. In-liquid AFM imaging of SLBs (DLPC/DLPG, 3:1 molar ratio) treated with bienK peptides (0.3 μM peptide). Length and height scale bars are 1 μm and 10 nm, respectively.

Disrupting the membrane

Biological membranes are highly complex systems with many chemical components and processes occurring at various lengths and timescales. Researchers have long studied molecular mechanisms of different membrane disruption phenomena in peptides, proteins, antibiotics and nanoparticles, but so far haven’t found a general or unifying mechanism.

Intriguingly, we weren’t looking for such a mechanism at all.

"Instead, our main goal was to use a data-driven approach to computationally design small molecules able to permeate bacterial membranes with so-called selective functionality—meaning toxic for bacteria, but innocuous for humans."

Our assumption was that the successful candidates would behave as usual, disrupting the membrane by forming a circular pore on its surface.

Having designed a series of very small peptides, we ran the initial simulations. We found that the molecules seemed to function as potent antimicrobials, able to enter our simulated cell membranes. They assembled in the membrane and disrupted its structure at the molecular level. But how?

The answer was in the amino acids themselves

We found that when we swapped one amino acid (alanine) for another (lysine), the mutation led to the membrane’s rupture—and a never-before-seen, stunning fractal pattern.

This work builds on IBM Research’s ongoing efforts to accelerate the pace at which new drugs and therapies can be discovered, tested and brought to market. Recently, IBM researchers also published work in Nature Biomedical Engineering demonstrating a generative AI system that can help to speed up the design of molecules for novel antibiotics, outperforming other design methods by nearly 10 percent. This model has already created two new, non-toxic antimicrobial peptides (AMP) with strong broad-spectrum potency.

But simulation isn’t life. Once synthesized in the lab, some of these molecules did indeed disrupt a real cell’s membrane and produce a fractal rupture pattern. Almost immediately, the anomalous fractal rupture pathway was converted into conventional circular pore formation.

The chemically cued instability mode we’ve discovered appears distinct from any known manifestations of membrane organization, phase separation, pore formation phenomena or other forms of segregation. While the mechanism has entirely new properties and new behavior, the biological relevance and exploitation opportunities of some of these findings are still being explored. Still, the discovery of a fundamentally new instability mode in a biological membrane has potentially wide implications for drug discovery and drug delivery.

Source: ibm.com

Thursday 8 July 2021

The IBM Quantum heavy hex lattice


Overview


As of Aug 8, 2021, the topology of all active IBM Quantum devices will be based around the heavy-hex lattice. The heavy-hex lattice represents the fourth iteration of the topology for IBM Quantum systems and is the basis for the Falcon and Hummingbird quantum processor architectures. Each unit cell of the lattice consists of a hexagonal arrangement of qubits, with an additional qubit on each edge.
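To make the unit cell concrete, the short Python sketch below enumerates the qubits and couplings of a single, isolated heavy-hex cell as described above. The qubit numbering is illustrative and does not correspond to any specific IBM Quantum device.

```python
# A sketch of one isolated heavy-hex unit cell as described above: six qubits
# on the vertices of a hexagon, plus one additional qubit on each of the six
# edges. Indices are illustrative, not a map of any specific device.
corners = list(range(6))          # qubits 0..5 on the hexagon's vertices
edge_qubits = list(range(6, 12))  # qubits 6..11, one per hexagon edge

couplings = []
for i in range(6):
    a, b = corners[i], corners[(i + 1) % 6]  # adjacent vertices
    mid = edge_qubits[i]                     # the qubit sitting on the edge between them
    couplings += [(a, mid), (mid, b)]        # vertices couple only through edge qubits

degree = {q: 0 for q in corners + edge_qubits}
for a, b in couplings:
    degree[a] += 1
    degree[b] += 1

print(len(degree), "qubits,", len(couplings), "couplings in one isolated cell")
print("average connectivity:", sum(degree.values()) / len(degree))  # 2.0 for an isolated cell
```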



The heavy-hex topology is a product of co-design between experiment, theory, and applications. It is scalable and offers reduced error rates while affording the opportunity to explore error-correcting codes. Based on lessons learned from earlier systems, the heavy-hex topology represents a slight reduction in qubit connectivity from previous-generation systems, but, crucially, it minimizes both qubit frequency collisions and spectator qubit errors that are detrimental to real-world quantum application performance.

In this tech report, we discuss the considerations needed when choosing the architecture for a quantum computer. Based on proven fidelity improvements and manufacturing scalability, we believe that the heavy-hex lattice is superior to a square lattice in offering a clear path to quantum advantage, from enabling more accurate near-term experimentation to reaching the critical goal of demonstrating fault-tolerant error correction. We demonstrate that the heavy-hex lattice is equivalent to the square lattice up to a constant overhead, and like other constant overheads such as the choice of gate set, this cost is insignificant compared to the cost of mapping the problem itself.

Figure 1: Three unit cells of the heavy-hex lattice. Colors indicate the pattern of three distinct frequencies for control (dark blue) and two sets of target qubits (green and purple).

Physical motivations


IBM Quantum systems make use of fixed-frequency qubits, where the characteristic properties of the qubits are set at the time of fabrication. The two-qubit entangling gate in such systems is the cross-resonance (CR) gate, where the control qubit is driven at the target qubit’s resonance frequency. See Fig. 1 for the layout of control and target qubits in the heavy-hex lattice. These frequencies must be off-resonant with neighboring qubit transition frequencies to prevent undesired interactions called “frequency collisions.”

The larger the qubit connectivity, the more frequency conditions must be satisfied, and degeneracies among transition frequencies become more likely. In addition, fabrication imperfections may require disabling an edge in the system connectivity (e.g., see Penguin v1 and v2 in Fig. 2). All of these issues can affect device performance and add hardware overhead.

Figure 2: Left to right, the evolution of the topologies for IBM Quantum systems, including the average qubit connectivity.

A similar set of frequency collisions appears in flux-tunable qubits as avoided crossings when implementing flux control. Moreover, tunable qubits come at the cost of flux noise, which reduces coherence, and the flux control adds scaling challenges to larger architectures: increased operational complexity in qubit tune-up and decreased gate fidelity caused by pulse distortions along the flux line.

As shown in Fig. 3, the decrease in qubit connectivity offered by the heavy-hex lattice, together with the selected pattern of control and target qubit frequencies, yields an order-of-magnitude increase in the fraction of devices with zero frequency collisions compared to other choices of system topology.

Figure 3: Simulated yields of collision-free devices for heavy-hex and square topologies as a function of qubit frequency variability.
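Figure 3 is based on detailed collision modeling; the sketch below is our own much-simplified Monte Carlo (the two-frequency design pattern, 100 MHz design gap, and 17 MHz collision threshold are illustrative assumptions, not IBM's model). It reproduces the qualitative point: the sparser heavy-hex connectivity leaves fewer chances for neighboring frequencies to collide, so more simulated devices come out collision-free.

```python
# Toy yield Monte Carlo: neighboring qubits alternate between two design
# frequencies 100 MHz apart, fabrication adds Gaussian scatter, and a
# "collision" is any coupled pair detuned by less than 17 MHz.
import random
from qiskit.transpiler import CouplingMap

def two_coloring(n, edges):
    """BFS 2-coloring; both lattices used here are bipartite."""
    adj = {q: [] for q in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    color = [None] * n
    for start in range(n):
        if color[start] is None:
            color[start], stack = 0, [start]
            while stack:
                q = stack.pop()
                for nb in adj[q]:
                    if color[nb] is None:
                        color[nb] = 1 - color[q]
                        stack.append(nb)
    return color

def collision_free_yield(n, edges, sigma, trials=2000, gap=0.100, thresh=0.017):
    """Fraction of simulated devices with no nearest-neighbor collisions."""
    design = [5.0 + gap * c for c in two_coloring(n, edges)]
    good = 0
    for _ in range(trials):
        f = [d + random.gauss(0.0, sigma) for d in design]
        good += all(abs(f[a] - f[b]) > thresh for a, b in edges)
    return good / trials

def grid_edges(rows, cols):
    idx = lambda r, c: r * cols + c
    return ([(idx(r, c), idx(r, c + 1)) for r in range(rows) for c in range(cols - 1)] +
            [(idx(r, c), idx(r + 1, c)) for r in range(rows - 1) for c in range(cols)])

hh = CouplingMap.from_heavy_hex(3, bidirectional=False)   # small heavy-hex fragment
sq_n, sq_edges = 20, grid_edges(4, 5)                     # comparable square grid

for sigma in (0.02, 0.05, 0.10):   # frequency scatter in GHz
    print(f"sigma = {sigma*1000:3.0f} MHz | "
          f"heavy-hex yield {collision_free_yield(hh.size(), hh.get_edges(), sigma):.2f} | "
          f"square yield {collision_free_yield(sq_n, sq_edges, sigma):.2f}")
```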

The sparsity of the heavy-hex topology with fixed-frequency qubits also improves overall gate fidelity by limiting spectator-qubit errors: errors generated by qubits that are not directly participating in a given two-qubit gate operation. These errors can degrade system performance yet do not present themselves when the gate is performed in isolation; one- and two-qubit benchmarking techniques are not sensitive to them.

However, spectator errors become significant when we run full circuits, and their rate is directly related to the system connectivity. The heavy-hex connectivity reduces the occurrence of these spectator errors by ensuring that each control qubit is connected only to target qubits (Figure 1).

Figure 4 shows the average CNOT error rates for four generations of Penguin quantum processors along with those of the Falcon and Hummingbird families that utilize the heavy-hex topology. The reduction in frequency collisions and spectator errors allows devices to achieve better than 1 percent average CNOT error rates across the device, with isolated two-qubit gates approaching 0.5 percent. This represents a factor-of-three decrease compared to the error rates on the best Penguin device with a square layout. Additional techniques for mitigating spectator errors are given in Ref.

Higher Quantum Volume, higher computing performance


Quantum Volume (QV) is a holistic, hardware-agnostic quantum system benchmark that encapsulates system properties such as the number of qubits, connectivity, and gate, spectator, and measurement errors into a single numerical value by finding the largest square circuit that a quantum device can reliably run.

Figure 4: Average CNOT gate error rates for Penguin-, Falcon-, and Hummingbird-based IBM Quantum systems.

A higher Quantum Volume directly equates to higher processor performance. Gate errors measured by single- or two-qubit benchmarking do not reveal all errors in a circuit (for example, crosstalk and spectator errors), and estimating circuit errors from the gate errors is non-trivial. In contrast, QV readily incorporates all possible sources of noise in a system and measures how well the system implements average quantum circuits. This allows one to find the best system for running a given application.
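In the standard QV protocol, the reported value is 2^n, where n is the largest width and depth of random "model circuits" for which the device produces heavy outputs (bitstrings whose ideal probability exceeds the median) more than two-thirds of the time. The sketch below is a hedged illustration, not IBM's benchmarking code: Qiskit exposes the model circuits directly, and the ideal heavy-output set can be computed from the statevector.

```python
# Sketch of the Quantum Volume model-circuit construction (illustrative only).
import numpy as np
from qiskit.circuit.library import QuantumVolume
from qiskit.quantum_info import Statevector

n = 4
model = QuantumVolume(n, depth=n, seed=42)   # width-n, depth-n random SU(4) layers

# Ideal output distribution, used to define which bitstrings are "heavy".
probs = Statevector(model).probabilities()
median = np.median(probs)
heavy = {i for i, p in enumerate(probs) if p > median}
ideal_heavy_prob = sum(probs[i] for i in heavy)

print(f"width/depth n = {n}, candidate QV = {2**n}")
print(f"ideal heavy-output probability = {ideal_heavy_prob:.3f}")
# On hardware one would transpile `model`, run many shots, and check that the
# measured heavy-output fraction exceeds 2/3 with sufficient confidence.
```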

Figure 5 shows the evolution of Quantum Volume over IBM Quantum systems, demonstrating that only heavy-hex based Falcon and Hummingbird systems can achieve QV32 or higher. Parallel improvements in gate design, qubit readout, and control software, such as those in Ref., also play an important role in increasing QV values faster than the anticipated yearly doubling.

Figure 5: Quantum Volume as a function of system release date for IBM Quantum Penguin (20 qubits), Falcon (27 qubits), and Hummingbird (65 qubits) systems. The Falcon and Hummingbird systems are based on the heavy-hex topology.

Development of quantum error-correcting codes is one of the primary areas of research as gate errors begin to approach fault-tolerant thresholds. The surface code, implemented on a square grid topology, is one such example. However, as already discussed and experimentally verified, frequency collisions are common in fixed-frequency qubit systems with square planar layouts. As such, researchers at IBM Quantum developed a new family of hybrid surface and Bacon-Shor subsystem codes that are naturally implemented on the heavy-hex lattice.

Like the surface code, the heavy-hex code requires a four-body syndrome measurement. However, the heavy-hex code reduces the required connectivity by implementing each degree-four node with two degree-three nodes, as presented in Fig. 6.

Figure 6: Reduction of a degree-four node into two degree-three nodes compatible with the heavy-hex lattice.
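Purely as a toy illustration of the graph transformation in Figure 6 (this is our own sketch, not code from the tech report), the snippet below splits a degree-four syndrome vertex into two connected degree-three vertices while preserving which data qubits are touched.

```python
# Split a degree-4 vertex into two linked degree-3 vertices (toy adjacency lists).
def split_degree_four(adjacency, node, new_node):
    """Replace `node` (degree 4) by `node` + `new_node`, each of degree 3."""
    nbrs = adjacency[node]
    assert len(nbrs) == 4, "expected a degree-4 vertex"
    adjacency[node] = nbrs[:2] + [new_node]        # keeps two neighbors plus the link
    adjacency[new_node] = nbrs[2:] + [node]        # takes the other two plus the link
    for moved in nbrs[2:]:                         # rewire the moved neighbors
        adjacency[moved] = [new_node if x == node else x for x in adjacency[moved]]
    return adjacency

# Degree-4 syndrome vertex "s" touching four data qubits d0..d3.
adj = {"s": ["d0", "d1", "d2", "d3"],
       "d0": ["s"], "d1": ["s"], "d2": ["s"], "d3": ["s"]}
split_degree_four(adj, "s", "s2")
print({k: sorted(v) for k, v in adj.items()})
# -> "s" now has degree 3 (d0, d1, s2) and "s2" has degree 3 (d2, d3, s).
```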

Mapping heavy-hex to square lattices


The connectivity of other lattices, such as the square lattice, can be simulated on the heavy-hex lattice with constant overhead by introducing swap operations within a suitably chosen unit cell. The vertices of the desired virtual lattice can be associated to subsets of vertices in the heavy-hex lattice such that nearest-neighbor gates in the virtual lattice can be simulated with additional two-qubit gates.

Taking the square lattice as an example, there are a variety of ways to associate a unit cell of the heavy-hex lattice to the square lattice. If we draw the hexagons as 3x5 rectangles, one natural choice places the qubits of the square lattice on the horizontal edges of the heavy-hex lattice (see Figure 7).

Figure 7: Direct mapping between heavy-hex and square lattices.

Suppose the goal is to apply an arbitrary two-qubit gate, U(4), between each pair of neighboring qubits in the virtual lattice. This can be accomplished with a constant overhead at depth 14, of which eight steps involve only swap gates; each qubit individually participates in six swap gates. Alternatively, these swaps might be replaced by teleported gates, at a potentially lower constant cost, if the desired interactions correspond to Clifford gates.
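To see the constant overhead concretely, one can hand a round of square-grid neighbor gates to the Qiskit transpiler and let it route onto a heavy-hex coupling map, then count the inserted swaps. This is a hedged sketch of ours (the grid size, lattice distance, routing method, and seed are arbitrary), not the scheduled depth-14 construction described above, but the routed depth and swap count illustrate the same constant-factor character of the cost.

```python
# Route square-grid neighbor interactions onto a heavy-hex coupling map.
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

rows, cols = 4, 4
idx = lambda r, c: r * cols + c

# One round of two-qubit gates between every pair of square-lattice neighbors.
virtual = QuantumCircuit(rows * cols)
for r in range(rows):
    for c in range(cols):
        if c + 1 < cols:
            virtual.cx(idx(r, c), idx(r, c + 1))
        if r + 1 < rows:
            virtual.cx(idx(r, c), idx(r + 1, c))

# Route onto a heavy-hex device topology large enough to hold 16 qubits.
heavy_hex = CouplingMap.from_heavy_hex(5)
routed = transpile(virtual, coupling_map=heavy_hex,
                   routing_method="sabre", seed_transpiler=7,
                   optimization_level=1)

print("two-qubit gates before routing:", virtual.count_ops().get("cx", 0))
print("ops after routing:", dict(routed.count_ops()))
print("depth before/after:", virtual.depth(), routed.depth())
```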

Other mappings exist as well and expose new possibilities for tradeoffs and optimizations. For example, an interesting alternative mapping encodes the qubits of the square lattice into 3-qubit repetition codes on the left and right edges of each 3x5 rectangle (Figure 8, left). This creates an effective heavy-square lattice where encoded qubits are separated by a single auxiliary qubit (Figure 8, right). In this encoding, we can apply diagonal interactions in parallel along the vertical or horizontal direction of the heavy-square lattice. Since swaps occur in parallel between these two rounds of interactions, the total depth is only two rounds of swaps and two rounds of diagonal gates.

There are relatively simple circuits for applying single-qubit gates to the repetition-code qubits whose cost is roughly equivalent to a swap gate. Since none of these operations is necessarily fault-tolerant, the error rate will increase by as much as a factor of three, but post-selection can be done for phase-flip errors while taking advantage of the fact that the code itself corrects a single bit-flip error. As mentioned, the cost of the encodings described above is a constant and is thus on equal footing with other constant overheads such as the choice of gate set used. These should be compared with the cost of mapping the problem itself to the quantum computer, which might carry a polynomial overhead.

Figure 8: Encoding into the 3-qubit repetition code (left) leads to a logical heavy-square lattice (right).
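For readers unfamiliar with the repetition code, here is a generic textbook-style sketch (not the specific encoded-lattice circuits from the tech report) of the 3-qubit bit-flip code and its majority-vote decoding. It corrects a single bit flip but leaves phase flips untouched, which is why post-selection on phase errors is mentioned above.

```python
# 3-qubit bit-flip repetition code: encode, inject one error, decode by majority.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def encode_repetition(qc, data, anc1, anc2):
    """Copy the data qubit's Z-basis value onto two ancillas (bit-flip code)."""
    qc.cx(data, anc1)
    qc.cx(data, anc2)

qc = QuantumCircuit(3)
qc.x(0)                          # prepare logical |1>
encode_repetition(qc, 0, 1, 2)   # -> |111>
qc.x(1)                          # inject a single bit-flip error on one qubit

counts = Statevector(qc).sample_counts(shots=100)
for bitstring, shots in counts.items():
    bits = [int(b) for b in bitstring[::-1]]   # Qiskit bitstrings are little-endian
    decoded = int(sum(bits) >= 2)              # majority vote recovers the logical bit
    print(f"measured {bitstring} x{shots} -> decoded logical bit {decoded}")
```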

Source: ibm.com