Tuesday 27 April 2021

It is time for serverless computing on IBM Z and IBM LinuxONE


If you have been around the software industry long enough, you have seen architectural changes and new computing paradigms come and go. From monolithic to client-server architectures, and from web applications to containerized cloud-native applications, the industry has evolved constantly, with ever more software available thanks to the proliferation of open source.

Serverless computing advances

Driven by open source technologies, serverless computing is a computing paradigm that has been around for a few years, and all major public cloud vendors, including IBM Cloud, offer it as a service. Serverless computing typically refers to the approach of building and running applications hosted by a third party, but unlike with traditional cloud infrastructure, you do not manage the underlying servers.

The hosting of serverless applications is only part of this new computing paradigm. The most important aspect is the model of breaking up applications into individual functions that can be individually invoked and scaled. This more finely grained development and deployment model allows for applications to have one or many functions that can be executed and scaled up or down on demand. 

Introducing Red Hat OpenShift Serverless

Today we are announcing that you can now build and deploy serverless applications on IBM Z and IBM LinuxONE with Red Hat OpenShift Serverless. This new offering is available as a no-charge add-on to the Red Hat OpenShift Container Platform.

One of the most complete open source projects for serverless computing is Knative, a platform that provides components to build and run serverless container-based applications on Kubernetes. Yes, you can create container-based applications broken into functions to run on stateless containers orchestrated by Kubernetes. 

Red Hat OpenShift Serverless is based on the Knative project and runs on an OpenShift cluster. Capabilities include Knative Serving, Knative Eventing, and the Knative CLI. You can deploy serverless applications in practically any programming language and enable auto-scaling to scale up to meet demand or scale down to zero when not in use. You can invoke serverless functions using plain HTTP requests following the CloudEvents specification. You can also trigger serverless containers from a growing number of event sources, and it comes with out-of-the-box project templates to jumpstart your code.
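
To make this concrete, here is a minimal, hypothetical sketch of invoking a Knative service over HTTP using CloudEvents binary-mode headers. The service URL and event type are placeholders for your own deployment, not values from this announcement.

```python
# Minimal sketch: invoking a Knative service with a CloudEvents-style HTTP request.
# The URL and event type are hypothetical placeholders for your own deployment.
import json
import uuid

import requests

SERVICE_URL = "http://my-function.default.example.com"  # hypothetical Knative route

headers = {
    # CloudEvents v1.0 binary-mode HTTP binding attributes
    "ce-specversion": "1.0",
    "ce-type": "com.example.order.created",   # hypothetical event type
    "ce-source": "urn:example:order-service",
    "ce-id": str(uuid.uuid4()),
    "Content-Type": "application/json",
}

payload = {"orderId": 1234, "amount": 99.95}

resp = requests.post(SERVICE_URL, headers=headers, data=json.dumps(payload), timeout=10)
print(resp.status_code, resp.text)
```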

As is customary, software developers from IBM and Red Hat contribute upstream to open-source projects including all Knative components, and they continue to enhance functionality and ensure everything works smoothly on the s390x architecture for IBM Z and LinuxONE.

This is a significant new capability for all those Linux on IBM Z and LinuxONE deployments in the largest and most important enterprises in the world. You now have the option to migrate applications or to build new applications based on individual functions while combining both containers and serverless functionality.   


Serverless computing is well suited for parallel processing, event-driven workloads, streams and message queues. Most applications with large volumes of transactions, including AI-related ones such as Monte Carlo simulations, database updates and event processing of data with small payloads (such as IoT data), are ideal for serverless computing. Moreover, unlike with function-as-a-service offerings, with your Z or LinuxONE you do not trade away control and visibility of your infrastructure when you are doing “serverless.”

To summarize, you now have the opportunity to develop applications based on functions with discrete units of code for event-based execution. This will bring development velocity and rapid benefits to the business. 

Try a new computing paradigm on IBM Z and LinuxONE and stay tuned for more OpenShift add-ons, support for more event types, and more great technology powered by open-source innovation available to you in integrated and easy-to-deploy supported packages.

Source: ibm.com

Sunday 25 April 2021

Daily chats with AI could help spot early signs of Alzheimer’s


There’s no cure for Alzheimer’s disease. But the earlier it’s diagnosed, the more chances there are to delay its progression.

Our joint team of researchers from IBM and the University of Tsukuba has developed an AI model that could help detect the onset of mild cognitive impairment (MCI), the transitional stage between normal aging and dementia, by asking older people typical daily questions. In a new paper published in the journal Frontiers in Digital Health, we present the first empirical evidence of tablet-based automatic assessments of patients using speech analysis successfully detecting MCI.

Unlike previous studies, our AI-based model uses speech responses to daily life questions collected with a smartphone or a tablet app. Such questions could be as simple as asking someone about their mood, plans for the day, physical condition or yesterday’s dinner. Earlier studies mostly focused on analyzing speech responses during cognitive tests, such as asking a patient to “count down from 925 by threes” or “describe this picture in as much detail as possible.”

We found that the detection accuracy of our tests based on answers to simple daily life questions was comparable to the results of cognitive tests — detecting MCI signs with an accuracy of nearly 90 percent. This means that such an AI could be embedded into smart speakers or similar commercially available smart home technology for health monitoring, to help detect early changes in cognitive health through daily usage.

Our results are particularly promising because conducting cognitive tests is much more burdensome for participants. It forces them to follow complicated instructions and often induces a heavy cognitive load, preventing the frequent assessments needed for timely and early detection of Alzheimer’s. Relying on more casual speech data, though, could allow much more frequent assessments with lower operational and cognitive costs.

For our analysis, we first collected speech responses from 76 Japanese seniors — including people with MCI. We then analyzed multiple types of speech features, such as pitch and how often people would pause when they talked.

We knew that capturing subtle cognitive differences based on speech in casual conversations with low cognitive load would be tricky. The differences between MCI and healthy people for each speech feature tend to be smaller than those for responses to cognitive tests.

We overcame this challenge by combining responses to multiple questions designed to capture the changes in memory and executive function, in addition to language function, that are associated with MCI and dementia. For example, the AI-based app would ask: “What did you eat for dinner yesterday?” A senior with MCI could respond: “I had Japanese noodles with tempura — tempura of shrimps, radish, and mushroom.”

It may seem that there is no problem with this response. But the AI could capture differences in paralinguistic features such as pitch and pauses, and in others related to the acoustic characteristics of the voice. We discovered that compared to cognitive tests, daily life questions could elicit weaker but statistically discernible differences in speech features associated with MCI. Our AI managed to detect MCI with a high accuracy of 86.4 percent, statistically comparable to the model using responses to cognitive tests.
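
The study’s own pipeline is not published here, but the following sketch shows, under stated assumptions, how paralinguistic features such as pitch statistics and pauses could be extracted from recordings and fed to a simple classifier. The feature choices, thresholds and file names are illustrative, not the authors’ implementation.

```python
# Illustrative sketch only: extract simple paralinguistic features (pitch, pauses)
# from speech recordings and train a classifier. This is NOT the paper's code;
# feature choices and thresholds here are assumptions for demonstration.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def extract_features(wav_path):
    y, sr = librosa.load(wav_path, sr=16000)
    # Fundamental frequency (pitch) statistics via probabilistic YIN
    f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]
    # Pause behaviour: gaps between non-silent intervals
    intervals = librosa.effects.split(y, top_db=30)
    gaps = [(intervals[i + 1][0] - intervals[i][1]) / sr
            for i in range(len(intervals) - 1)]
    return [
        float(np.mean(f0)) if len(f0) else 0.0,   # mean pitch
        float(np.std(f0)) if len(f0) else 0.0,    # pitch variability
        float(len(gaps)),                          # number of pauses
        float(np.mean(gaps)) if gaps else 0.0,     # average pause length (s)
    ]

# wav_files and labels (1 = MCI, 0 = healthy control) would come from your own data
# X = np.array([extract_features(p) for p in wav_files])
# clf = LogisticRegression(max_iter=1000)
# print(cross_val_score(clf, X, labels, cv=5).mean())
```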

Source: ibm.com

Thursday 22 April 2021

5 ways modern B2B infrastructure fuels efficiencies in the energy and utilities sector


A significant portion of the population is now working from home. That trend will likely continue, as 83% of companies surveyed expect hybrid workplaces to become the norm. When power outages occur due to extreme hot or cold weather, as recently experienced in the U.S., shortages of water, food and heat are inevitable. Even worse, these events are to blame for lost lives and billions of dollars in costs. And now, with more remote staff, few of whom have home generators, an outage can hamper business productivity as never before. In these extreme situations it is imperative that the IT infrastructure supporting the exchange of capacity and usage information remain highly available without disruption.

There are many reasons why energy and utility companies may experience downtime. In our work with energy and utility clients worldwide, we see five areas IT leaders and B2B managers are focusing on to build resilience and help mitigate risk of service disruptions.

Let’s explore:

1. Modernization

If you have a legacy or homegrown system, or multiple point solutions that aren’t integrated to enable full interoperability, it’s time to modernize your B2B infrastructure. As employees retire, many B2B managers and IT leaders are having trouble replacing the traditional skill sets and tribal knowledge required to operate these complex systems. Digital B2B networks can simplify and automate connectivity with trading partners and provide additional visibility to help you understand capacity, determine if you can meet demand requests, and better allocate supply. Take a fresh look at today’s modern, flexible solutions and how they can be customized to meet your needs and evolve with you. A comprehensive solution that includes all the capabilities to support your environment is simpler to use and lowers the total cost of ownership.

41% faster partner onboarding and 51% more efficient management of B2B transactions

– Business Value Highlights, IDC report

2. Availability

As part of modernization, consider moving to the cloud for availability and resilience. B2B infrastructure that’s available as a cloud or hybrid solution provides the flexibility to auto-scale. As demand shifts, you can scale up or down to meet changing transaction volumes and manage costs. With options for EDI and API connectivity, you can work seamlessly with both large and small trading partners and establish secure, repeatable workflows for data movement internally and externally.

“Keep systems operational, essentially 24/7.”

– Eric Doty, Greenworks Tools

3. Compliance

Wholesale and retail energy firms need a way to reliably and securely exchange, track and reconcile transactions between producers, operators, pipelines, services and marketers. B2B solutions need to be optimized for the energy marketplace and support standards, including North American Energy Standards Board (NAESB) compliance, which is a backbone data exchange standard for all energy trading markets in North America. Particularly important during times of disruption, regulatory adherence simplifies and accelerates the ability to source power from other regions that are generating power at a surplus, or supply other regions when you can fill the void.

4. Security

The average cost of a data breach in the energy sector rose 14.1% between 2019 and 2020, to an average of $6.39 million, the largest increase and second-highest average cost among all sectors surveyed. Modern B2B infrastructure ensures secure and reliable information exchange between key customers and suppliers with tools for user- and role-based access. Using the latest standards and encryption policies protects data of all types, at rest and in motion: customer, financial, order, supply, Internet of Things (IoT), regulatory reporting and more.

5. Innovation

NAESB puts blockchain at the top of its list of significant technologies that will reshape the energy industry. Deeming it a “high-value area,” NAESB will remain active in the development of supportive standards and continue its involvement as the energy industry adopts applications for the technology. For example, the industry forum is looking at digital smart contracts to improve and automate transactions and accounting cycles for power trading. Prepare for the future with a modern, hosted service that can add capabilities like blockchain, which provides multi-party visibility of a permissioned, immutable shared record that fuels trust across the value chain. Evolve in lockstep with the energy trading market for competitive advantage.


In our increasingly connected world, service disruptions shine a light on the need for greater resilience in the energy and utility sector. As you face the certainty of change in marketplaces, technology and industry standards, you need a fast, flexible and secure B2B infrastructure so you can continue to deliver exceptional service anywhere, anytime. Let IBM show you how we can simplify your B2B journey and extend the value of your existing investments while driving innovations that propel you forward.

Source: ibm.com

Tuesday 20 April 2021

Why B2B organizations need an advanced order and inventory management solution


Many B2B companies, including those in automotive, electronics, and manufacturing, are at a crossroads with their sales and inventory management processes. Historically, they’ve been reliant on legacy or ERP systems – tools that were designed for predictable, back office business processes. As B2B commerce becomes more digital, these organizations require a system that is more agile, scalable, and adaptable to meet customers’ changing needs.

“With our homegrown solution, there was no single record for deals or inventory information unless you went back and tried to reassemble things, which in many of our back offices was impossible.”

VP of worldwide business systems, motion controls technology

As supply chains either rapidly slowed down or accelerated in 2020, the lack of agility in these tools became abundantly clear. Manual processes are also making it more challenging to adapt in a time where disruptions are everywhere. By relying on manual processes, organizations are subject to increased human error, delay of goods, and higher costs to do business. On top of these challenges, B2B consumers are demanding a more B2C experience with seamless ordering and self-service, and manual processes can’t deliver.

An omnichannel order and inventory management solution empowers your business to maximize results by automating business rules that are right for your customers and your company, while supporting the agility and scalability requirements driven by today’s customer expectations.

Let’s explore the benefits in more detail:

1. Automate sales orders

With the help of the right B2B order management system, you can accept orders via multiple sales platforms, saving time as you easily manage your inventory, create a new order, update an order, and process the payment. Consolidate sales information from multiple channels so you can track your inventory and orders in a single platform. This gives you the flexibility needed to source and allocate orders more efficiently across your network.

2. Reduce manual work

An advanced order management solution processes large B2B orders at the line level — providing more flexibility around splitting orders and managing different workflows on the same order. Sourcing rules are much more extensive to support scenarios like order prioritization, substitutions, and customer level requirements.

3. Real-time global inventory

Get up-to-the-minute inventory tracking and accurate available-to-promise data with a global view of inventory across all your business units. Having one picture of inventory enables you to accurately sell omnichannel inventory, including in-transit, and allows your customers to place one consolidated order with a single invoice. Reduce over-promising by identifying exactly what inventory you have and where it is, with stock thresholds and alerts that ensure you always know when to replenish to prevent lost sales.

“We have a couple of centers that used to have to log in to about 17 different systems to view our operations. Now they essentially go to just a single screen.”

VP of worldwide business systems, motion controls technology

4. On-time delivery

B2B customers have high expectations for on-time, in-full (OTIF) delivery, and some may have pre-established SLAs to hold you to that promise, with penalties when the order is not received on time. Failing to meet these expectations can impact your brand’s reputation, customer satisfaction, and future sales. With an order management solution that pulls together a real-time, multi-enterprise view of your inventory, you can feel confident about meeting your promises and your customers’ expectations.

5. Grow your business

Scalability is important to meet increasing expectations, changing strategies, and organizational growth. An order and inventory management solution transforms your ERP to face these challenges head on. Whether you have an unusually high “peak” season, or your competitor has a recall that hurts their sales but skyrockets yours – the system can easily handle the automation, inventory fluctuation, and OTIF delivery promise. A scalable system gives you the capacity you need to succeed today and grow for tomorrow.


Order and inventory management solutions are perfect for B2B companies because they automate the typically manual, paper-based system that an ERP delivers. And the best benefit of all? Order and inventory management solutions are built with your customers’ experience in mind.

Transform your ERP with real-time inventory control to meet customer promises

Source: ibm.com

Saturday 17 April 2021

IBM researchers investigate ways to help reduce bias in healthcare AI

Artificial intelligence keeps inching its way into more and more aspects of our life, greatly benefitting the world. But it can come with some strings attached, such as bias.

AI algorithms can both reflect and propagate bias, causing unintended harm, especially in a field such as healthcare.


Real-world data give us a way to understand how AI bias emerges, how to address it and what’s at stake. That’s what we have done in our recent study, focused on a clinical scenario, where AI systems are built on observational data taken during routine medical care. Often such data reflects underlying societal inequalities, in ways that are not always obvious – and this AI bias could have devastating results on patients’ wellbeing.

Our team of researchers, from IBM Research and Watson Health, has diverse backgrounds in medicine, computer science, machine learning, epidemiology, statistics, informatics and health equity research. The study — “Comparison of methods to reduce bias from clinical prediction models of postpartum depression,” recently published in JAMA Network Open — takes advantage of this interdisciplinary lineup to examine healthcare data and machine learning models routinely used in research and application.

We analyzed postpartum depression (PPD) and mental health service use among a group of women who use Medicaid, a health coverage provider to many Americans. We evaluated the data and models for the presence of algorithmic bias, aiming to introduce and assess methods that could help reduce it, and found that bias could create serious disadvantages for racial and ethnic minorities.

We believe that our approach to detect, assess and reduce bias could be applied to many clinical prediction models before deployment, to help clinical researchers and practitioners use machine learning methods more fairly and effectively.

What is AI fairness, anyway?

Over the past decades, there has been a lot of work addressing AI bias. One landmark study recently showed how an algorithm, which was built to predict which patients with complex health needs would cost the health system more, disadvantaged Black patients due to unrecognized racial bias in interpreting the data. The algorithm used healthcare costs incurred by a patient as a proxy label to predict medical needs and provide additional care resources. While this might seem logical, it does not account for the fact that Black patients had lower costs at the same level of need as white patients in the data and were therefore missed by the algorithm.

To deal with the bias — to ‘debias’ an algorithm — researchers typically measure the level of fairness in AI predictions. Fairness is often defined with respect to the relationship between a sensitive attribute, such as a demographic characteristic like race or gender, and an outcome. Debiasing methods try to reduce or eliminate differences across groups or individuals defined by a sensitive attribute. IBM is leading this effort by creating AI Fairness 360, an open-source Python toolkit that allows researchers to apply existing debiasing methods in their work.

But applying these techniques is not trivial.

There is no consensus on how to measure fairness, or even on what fairness means, as shown by conflicting and incompatible metrics of fairness. For example, should fairness be measured by comparing the models’ predictions or their accuracy? Also, in most cases it is not clearly known how and why outcomes differ by sensitive attributes like race. As a result, a great deal of prior work has been done using simulated data or simplified examples that do not reflect the complexity of real-world scenarios in healthcare.

So we decided to use a real-world scenario instead. As researchers in healthcare and AI, we wanted to demonstrate how recent advances in fairness-aware machine learning approaches can be applied to clinical use cases so that people can learn and use those methods in practice.

Debiasing with Prejudice Remover and reweighing

PPD affects one in nine women in the US who give birth, and early detection has significant implications for maternal and child health. Incidence is higher among women with low socioeconomic status, such as Medicaid enrollees. Despite prior evidence indicating similar PPD rates across racial and ethnic groups, under-diagnosis and under-treatment has been observed among minorities on Medicaid. Varying rates of reported PPD reflect the complex dynamics of perceived stigma, cultural differences, patient-provider relationships and clinical needs in minority populations.

We focused on predicting postpartum depression and postpartum mental health care use among pregnant women in Medicaid. We used the IBM MarketScan Research Database, which contains a rich set of patient-level features for the study.

Our approach had two components. First, we assessed whether there was evidence of bias in the training data used to create the model. After accounting for demographic and clinical differences, we observed that white females were twice as likely as Black females to be diagnosed with PPD and were also more likely to use mental health services, post-partum.

This result is in contrast to what is reported in medical literature — that the incidence of postpartum depressive symptoms is comparable or even higher among minority women — and possibly points to disparity in access, diagnosis and treatment.

It means that unless there is a documented reason to believe that white females with similar clinical characteristics to Black females in this study population would be more susceptible to developing PPD, the observed difference in outcome is likely due to bias arising from underlying inequity. In other words, machine learning models built with this data for resource allocation will favor white women over Black women.

We then successfully reduced this bias by applying debiasing methods called reweighing and Prejudice Remover to our models through the AI Fairness 360 Toolkit. These methods mitigate bias by reducing the effect of race in prediction, either by weighting the training data or by modifying the algorithm’s objective function.

We compared the two methods to the so-called Fairness Through Unawareness (FTU) method that simply removes race from the model. We quantified fairness using two different methods to overcome the limitations of imperfect metrics. We showed that the two debiasing methods resulted in models that would allocate more resources to Black females compared to the baseline or the FTU model.
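
As a rough illustration of the kind of workflow AI Fairness 360 supports, the sketch below applies its Reweighing pre-processing method to a small, entirely hypothetical dataset. The column names, group encodings and data are stand-ins for demonstration, not the Medicaid study data.

```python
# Illustrative sketch using IBM's open-source AI Fairness 360 toolkit
# (pip install aif360). The dataframe, column names and group encodings here
# are hypothetical stand-ins, not the study's actual data.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical training data: 'race' is the sensitive attribute
# (1 = privileged group, 0 = unprivileged group), 'ppd' is the binary label.
df = pd.DataFrame({
    "age":  [24, 31, 28, 35, 22, 29],
    "race": [1, 1, 0, 0, 1, 0],
    "ppd":  [1, 0, 0, 0, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["ppd"],
    protected_attribute_names=["race"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"race": 1}]
unprivileged = [{"race": 0}]

# Fairness metric before debiasing
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("disparate impact before:", metric.disparate_impact())

# Reweighing assigns instance weights so labels become independent of race
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("disparate impact after:", metric_transf.disparate_impact())
```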

As we’ve shown, clinical prediction models trained on potentially biased data could produce unfair outcomes for patients. In conducting our research we used the types of ML models increasingly applied to healthcare use cases, so our results should get both researchers and clinicians to think about bias issues and ways to mitigate any possible bias before implementing AI algorithms in care.

Source: ibm.com

Thursday 15 April 2021

New open source tool automates compliance


Chief Information Security Officers are hounded by two questions:

Is my company’s technology compliant?

And:

Are all of the cloud products and services our company uses compliant?

Compliance continues to be a major issue inhibiting cloud adoption across enterprises, especially those working in highly regulated areas such as government, finance or healthcare. In the healthcare sector, for example, a provider may want to secure patient-related medical data on the cloud. And that company has to know whether the cloud technology is HIPAA compliant or covers other security requirements.


Compliance, both regulatory and self-imposed, is an area where there is a technology trend to “shift left” (developers’ term for moving checks earlier in the development process so that compliance issues are prevented, not just detected) and to build compliance controls into that process. By building compliance into the DevOps workflow, developer teams can save time while creating secure and low-risk code. To help these developers minimize the risk of noncompliance, our team developed Trestle, an open-source tool for managing compliance as code, using continuous integration and the National Institute of Standards and Technology’s (NIST) Open Security Controls Assessment Language (OSCAL). Trestle was created to help developer teams with the challenges of IT compliance, which frequently include:

◉ Relying on human labor-driven processes for compliance as opposed to “codifying” it.

◉ Many control implementations for each control, each of which is unique within an organization, when a standardized interpretation of compliance is lacking.

◉ Documentation that is hand-crafted and recreated for each and every audit.

◉ Heavy reliance on human labor to collect evidence of compliance when requested by auditors or assessors.

Today, this challenge of compliance requirements is compounded by the increasing expectations and scope of both the market and government regulators.

Three keys to streamlining the compliance process:

1. For tooling and platforms to be opinionated, enforcing a particular interpretation of a control, to provide consistent best practice.

2. For evidence of compliance to be automatically collected and visualised.

3. For compliance posture and documentation to be stated once and reused within the organisation, such that there is always one authoritative source identified.

For these first two factors, IBM has released a number of tools within the past year, from the IBM Security and Compliance Center to Auditree. The challenge that remains is the documentation and, critically, how to minimize duplicate efforts for documentation, to create a single source of truth.

Across IT development and delivery there is an increasing trend towards managing various artifacts (such as configuration and infrastructure) as code – whether it is infrastructure managed as code through Ansible and Terraform; continuous integration through Tekton; or deployments through Helm and Kubernetes. ‘As code’ patterns are a key enabler of agile development. In effect, they unify what was previously documentation with the code and manage both as code. However, compliance has stubbornly resisted this trend, in part because the underlying formats (such as spreadsheets and PDFs) are focused on human rather than machine interpretability. The emergence of OSCAL provides an open standard for compliance that addresses this.

Trestle was created to manage compliance, and compliance documentation, as code, allowing compliance to co-exist in the same world as the developer. We have adopted the emerging OSCAL standard, and the latest 1.0.0rc2 version published by NIST, to act as the single source of truth. OSCAL artifacts allow documentation of the full lifecycle of compliance, from documenting standards such as NIST 800-53 to the report auditors would receive.

The challenge we quickly realized is that OSCAL is confusing to end users – the NIST 800-53 catalogue, as published in OSCAL, is over 70,000 lines of JSON. To this end, Trestle seeks to make it easier to deal with OSCAL. It includes a Python library to manipulate OSCAL objects with strong consistency guarantees, as well as a set of command-line interface tools to make it easier to manipulate OSCAL. The latter allows users to deal with smaller fragmentary OSCAL artifacts in a clean way where users are never required to copy and paste. Trestle can aggregate information and publish it for a user in a standardized and structured format.
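
To give a feel for the structure Trestle manages on your behalf, here is a small sketch that walks an OSCAL catalog (such as the NIST 800-53 catalog) with plain Python. The file path is a placeholder, and the sketch deliberately avoids assuming any specific Trestle API.

```python
# Illustrative sketch: walking a NIST OSCAL catalog (e.g. SP 800-53) with plain
# Python to show the nested structure that Trestle manages for you.
# The file path is a placeholder for a locally downloaded OSCAL JSON catalog.
import json

with open("NIST_SP-800-53_rev5_catalog.json") as f:   # placeholder path
    catalog = json.load(f)["catalog"]

def walk_controls(controls, depth=0):
    for ctl in controls or []:
        print("  " * depth + f"{ctl['id']}: {ctl['title']}")
        # Controls can nest control enhancements under a 'controls' key
        walk_controls(ctl.get("controls"), depth + 1)

for group in catalog.get("groups", []):
    print(f"[{group['id']}] {group['title']}")
    walk_controls(group.get("controls"))
```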


Another feature is Trestle tasks, a curated set of automated compliance workflows. A few current examples include:

◉ Collect information, together with Auditree, from the OpenShift compliance-operator, and transform it into an OSCAL assessment result.

◉ Transform data coming from ‘Tanium’ endpoint management into an assessment result.

◉ Manage OSCAL artifacts under the same automated “semantic release” approach taken by many projects.

Source: ibm.com

Tuesday 13 April 2021

4 challenges impacting the healthcare supply chain


COVID-19 has highlighted structural weaknesses in the healthcare system and, most notably, a persistent issue with the capacity and resilience of healthcare supply chains. Between 59% and 83% of organizations have reported delays or increased lead times in acquiring supplies since the onset of the pandemic. In response, 81 percent of these organizations adjusted their inventories, most by increasing inventory levels, to weather the demand fluctuations and disruptions.

Several major risk factors or underlying challenges have come into focus for healthcare supply chains:

Lack of resilience – COVID-19 exposed a need for greater supply chain resiliency. IDC emphasizes the importance for healthcare organizations of having greater adaptability to shifting pandemic conditions while positioning for the “next normal” post-pandemic. Of course, supply chain disruptions in healthcare have significant consequences for the bottom line and patient care. Perhaps that’s why a recent survey showed that supply chain disruptions are now healthcare CEOs’ second-highest priority behind patient safety.

Lack of visibility – The lack of resiliency across healthcare organizations often stems from poor visibility, specifically a lack of quick access to centralized, consumable, real-time data from dispersed data sources and siloed systems. This makes it difficult to determine what’s needed, what’s in stock, and the scope of future demand. Ultimately, you can’t manage what you can’t see and measure.

Cost management – During the pandemic, as demand increased for personal protective equipment (PPE) and medical supplies, costs soared. Now, supply expenses are forecasted to surpass personnel costs as the biggest expense in healthcare. Studies show that most inventory decisions made to adjust to a disruption are suboptimal and nearly half are unnecessary. This isn’t surprising, given the lack of visibility and that health systems typically order supplies based on historical models and physician preference, rather than actual utilization and expected demand. This leads to waste, delayed procedures and high inventory and carrying costs.

Integration and interoperability – Integration challenges, from an organizational, process, and technology perspective, also contribute to cost increases and visibility issues. Data integration across disparate ERP, legacy supply chain systems, and external sources, along with interoperability across tools such as RFID barcode readers that feed product data back into these systems, is needed to connect the dots.

Integration challenges are further exacerbated by mergers and acquisitions (M&A) – and the fallout from COVID-19 is expected to accelerate M&A in healthcare. M&A provides significant upside potential for growth and cost savings, but it also leads hospital systems to manage fragmented teams, technologies, and processes. Improved integration and interoperability enable better decision-making and allow an organization to better collaborate and get ahead of disruptions.

Despite these challenges, advanced technology solutions offer an expedited path to transformation and resiliency. Healthcare and supply chain leaders are already acting to address these challenges and improve their organizations in the process. According to IDC, 62 percent of hospitals increased spend on supply chain applications and 64 percent of executives called their cloud-based supply chain management applications “business-critical.”

AI-enabled supply chain control towers are garnering particular interest. In fact, 88 percent of healthcare executives identified AI as a ‘critical’ technology for their supply chains in the next three years. AI-backed control towers can help an organization gain dynamic visibility across the network, improve the ability to sense demand changes and disruptions, and support inventory management and decision-making to improve patient care.

AI-enabled supply chain control towers can help meet the challenges presented by the pandemic by leveraging five key capabilities:

◉ End-to-end visibility

◉ Intelligent forecasting and demand sensing

◉ Touchless planning and improved productivity

◉ Elevated planning and automation

◉ Creating a collaborative ecosystem

Deploying control towers is not just about meeting the challenges exposed by the pandemic. They’re also a key component in transitioning to a more digital and data-driven environment and meeting the challenges and opportunities of the future. Combining visibility, automation and integration across the supply chain with a control tower unlocks the next frontier: a healthcare ecosystem where supply chain connectivity and collaboration make it possible to better manage through, and even get ahead of future crises and healthcare challenges.

Source: ibm.com

Sunday 11 April 2021

How microbiome analysis could transform food safety


“Tell me what you eat and I will tell you what you are.”

It was back in 1826 that Jean Anthelme Brillat-Savarin, French lawyer and politician, used this phrase in his book The Physiology of Taste. Fast forward to today, and ‘we are what we eat’ is still used when it comes to describing our microbiome — the community of our gut bacteria, influenced by our every meal. When the food we eat is unsafe, these bacteria wreak havoc inside of us.

But everything we eat also has a microbiome of its own.

It’s these food microbiomes that we wanted to study, using DNA and RNA sequencing to profile microbiomes anywhere along the supply chain from farm to table, in a bid to help improve the safety of the global food supply chain. That supply chain is a complex multi-party network, and monitoring and regularly testing it can reveal fluctuations indicating poor ingredient quality or a potential hazard. Catching these anomalies at any stage in the supply chain is important, whether in raw ingredients or later in the chain.

To evaluate the use of the microbiome as a hazard indicator for raw food ingredient safety and quality, we used a new type of untargeted sampling. Our team, composed of scientists from IBM Research, the Mars Global Food Safety Center, Bio-Rad Laboratories and consulting professor Dr. Bart Weimer of the UC Davis School of Veterinary Medicine, wanted to see whether the microbiome could indicate a potential issue or deviation from normal in the supply chain and help predict outbreaks.

The latest results, published in the Nature partner journal Science of Food, are promising.

We developed a new technique with specific quality control processes such as bioinformatic host filtering for food and other microbiome studies with mixed or unknown host material. We’ve shown that the food microbiome could indeed help improve food quality control, as well as support a positive shift in microbial risk management, moving from a reactive approach to a predictive and preventative one. This way, it could help ensure safe food by preventing contamination and illness and reducing food waste. Our results have greatly increased the microbial identification accuracy in validation studies on simulated data, and the method is robust for multiple microbiome types, including biomedical, environmental and agricultural samples.

Sequencing our food

As the cost of high-throughput sequencing continues to drop, it has become an increasingly accurate and effective technology to investigate food quality and safety in the supply chain. Many methods employed at scale today involve targeted molecular testing, for example, the Polymerase Chain Reaction (PCR) or bacterial growth assays, or pathogen-isolate genome sequencing.

However, the microbes of interest do not exist in isolation. They are part of an ecosystem of microbes within the microbiome. Microbiome studies have expanded greatly in recent years and are continuing to find new links to human health, the environment, and agriculture.

To kick off the study, in 2014 IBM Research and Mars co-founded the Consortium for Sequencing the Food Supply Chain. This partnership is supporting best practices in quality and food safety using genomics and big data to better understand the microbiome and its links to global food supply chains.

First, we developed a way to sequence food for accurate authentication of ingredients and detection of contaminants. We then expanded that work to characterize the microbiome. We sequenced RNA from 31 raw poultry meal ingredient (high-protein powder) samples, sourced from a raw materials supplier to the pet food industry, over multiple seasons. This allowed us to identify the baseline of core microbes expected to be present for this sample type. During this monitoring, we also observed that a small number of samples contained unexpected contents. These outlier samples showed marked differences in their microbiomes, with additional organisms present as well as differences in microbial abundance composition.

The microbiome served as a sort of detective lens for food quality and safety, with greater sensitivity than current tests. Separating the microbial sequences from a highly abundant food sample — be it corn, chicken, or something else — with sufficiently representative reference databases is critical for interpreting the data.

We addressed these challenges using bioinformatic filtering of the food-derived sequences without requiring any a priori knowledge of expected content and by augmenting publicly available microbial reference genome databases.
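
As a conceptual illustration only, the sketch below shows the general idea of host filtering: discarding sequencing reads that share many k-mers with the dominant food or host genome. Production pipelines use aligners or dedicated k-mer classifiers, and the parameters and functions here are assumptions for demonstration, not the study’s method.

```python
# Conceptual sketch only (not the paper's pipeline): drop sequencing reads that
# look like they came from the dominant food/host genome by counting shared
# k-mers with a host reference. Real pipelines use aligners or k-mer
# classifiers; this just illustrates the host-filtering idea.
K = 31           # k-mer length (a common choice; an assumption here)
THRESHOLD = 0.5  # fraction of read k-mers matching the host before we discard it

def kmers(seq, k=K):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_host_index(host_sequences):
    index = set()
    for seq in host_sequences:
        index |= kmers(seq)
    return index

def filter_reads(reads, host_index):
    kept = []
    for read in reads:
        read_kmers = kmers(read)
        if not read_kmers:
            continue
        host_fraction = len(read_kmers & host_index) / len(read_kmers)
        if host_fraction < THRESHOLD:   # keep reads that don't look host-derived
            kept.append(read)
    return kept
```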

A functional genomics layer

Beyond the genotypic classification, this work also used the IBM Functional Genomics Platform to layer on biological function data to describe key growth genes involved in Salmonella replication. This genetic analysis was used to compare to current culture-based assays and build bridges in our understanding between these existing and emerging technologies.

When the sequenced reads were examined in the context of an augmented reference collection of Salmonella genomes from the IBM Functional Genomics Platform, we observed improved separation between culture-positive and negative samples, demonstrating the utility of developing comprehensive reference databases of microbial genomes.

Source: ibm.com

Saturday 10 April 2021

Hybrid cloud strategy requires on premises consideration


Cloud computing continues to grab all the headlines, but on-premises IT infrastructure remains vitally important and will continue to be well into the future. Even as they move workload to the cloud, most organizations plan to increase investments in their on-premises computing infrastructure.

Although these assertions have long been my opinion, they are also backed up by research. IBM recently commissioned Forrester Consulting to survey IT decision-makers on their hybrid cloud strategy. You can read the insights obtained in the Forrester Consulting study: The Key to Enterprise Hybrid Cloud Strategy for yourself by downloading it here.

A perhaps unexpected reality is being exposed as organizations trek toward the cloud. And that is that the lowest cost option is not always moving workload to the cloud. Indeed, incorporating existing infrastructure into a hybrid strategy can reduce overall IT costs because it minimizes the need for specialized skills and improves productivity.

Most executives are looking to improve speed and agility as they adopt and embrace agile development and DevOps management into their organization. Adopting a standard toolchain that works across your enterprise for both cloud and on-premises workloads is the smart strategy.

Such an approach can help to assure that your organization does not fall prey to the pitfalls outlined in the Forrester study, where lack of reinvestment can expose your firm to issues such as:

◉ Security vulnerabilities

◉ Higher costs

◉ Compatibility restrictions

◉ Diminished performance

In 2021 we are still reeling from the COVID-19 pandemic, and surely the Forrester study recognizes this fact, but your hybrid cloud strategy needs to be more far-reaching. Cloud adoption is not just a reaction to the work-from-home requirement imposed on the world by the pandemic. The cloud was growing well before 2019! And we need to be aware that the post-pandemic world will require a “new normal” that demands a solid hybrid cloud strategy in order to succeed.

So, don’t forget the “hybrid” part of your “cloud” strategy. And as I’ve said before, don’t let cloud exuberance stop other IT infrastructure investments! As organizations move to a hybrid enterprise cloud infrastructure, keeping up to date with technology refreshes is more important than ever before to assure cost-effective, efficient IT. This means continuing to invest in, and keep pace with, upgrades to your existing IT infrastructure. Reasons that on-premises computing will continue to thrive include:

◉ Data residency

◉ Data gravity

◉ Existing on premises capacity

◉ Security and compliance requirements


The Forrester study reports that a vast majority of respondents stated that delaying on-premises infrastructure upgrades in the past five years resulted in negative repercussions. That makes absolute sense to me, and it should to you, too! Especially if you rely on IBM Z for your mission-critical work. And if you’ve ever booked a flight, accessed your bank account using an ATM, or used a credit card to buy something, then you’ve relied on an IBM Z mainframe. The IBM Z platform runs the world’s economy and all of the workload is not going to be moved to the cloud any time soon.

So, don’t waste any time creating your hybrid cloud strategy . . . and discover additional insights along the way!

Source: ibm.com

Friday 9 April 2021

Empowering a new generation of IBM Z developers


Every day, billions of transactions occur online. Many of them are processed by a mainframe. The mainframe has proven to be an integral part of the world’s economy, enabling global financial, medical and retail institutions to drive their businesses forward. And the platform continues to evolve with open standards and support of modern developer languages, toolchains, and management practices.

Cultural and social changes are driving digital transformations, opening up opportunities for more roles in IT infrastructure. In a recent Forrester Consulting study commissioned by Deloitte, most companies view their mainframe as a strategic component of hybrid cloud environments, and 91% of the businesses interviewed identified expanding mainframe footprints as a moderate or critical priority in the next 12 months. These new roles and opportunities in IT infrastructure are being filled by a new generation of computing talent.

Here at IBM we are committed to inspiring, educating and supporting new generations as they begin their computing careers. Recently, IBM wrapped up the 16th annual Master the Mainframe competition. Twelve winners emerged from high school and university students competing across the globe in a “coding obstacle course” with three levels of challenges. The coding challenges offered participants a chance to use and build skills in VS Code, Zowe, Python, JCL, REXX, and COBOL. And it was all completed on the mainframe—a platform new to many students but leveraging a suite of developer tools consistent with current computer science curriculums.

“The mainframe really has a huge impact on your life,” said Melissa Christie, a North America regional winner. “It’s a hidden entity that controls a large number of things in your day to day, from medical records to transactions. It’s really interesting that this thing that we have in our society does everything, but most people don’t even know about it.”

The mainframe continues to be a highly secure, resilient and reliable platform for mission-critical applications. Today, enterprises are optimizing these applications for hybrid, multicloud environments, while maintaining stability, security and agility. IBM Z allows developers to have control over the full life cycle of application development, no matter where the application is running. And new developers are taking notice. Pierre Jacquet, a 2020 Master the Mainframe Grand Prize winner, said one of the highlights of the contest was the ability to work with multiple coding languages.

“I mainly learned about the new development environment for the mainframe,” said Jacquet. “Discovering the development environment with Zowe was really cool and was a way to be more agile to facilitate development. I was quite impressed by the smoothness of the tools.”


For enterprises to continue successful digital transformations, there needs to be a talent pool that is familiar with the platform and has been exposed to career roles within the Z environment. The entire Z community is teaming up to bring these opportunities to students and the future workforce. M&T Bank recently shared their transformation story with CIO.com, focused on their collaboration with IBM and investment in the future workforce.

For employers looking to fill IT roles, IBM created Talent Match, an online platform that enables employers to find skilled workers. There are additional employer resources available on the IBM Z Employer Hub. While the student competition is over, Master the Mainframe remains open to the public as a year-round learning platform for anyone interested in developing their skills to prepare for an enterprise computing career. Students can continue learning and networking at the IBM Z Global Student Hub and explore all of the IBM Z education initiatives.

Source: ibm.com

Thursday 8 April 2021


IBM RXN for Chemistry: Unveiling the grammar of the organic chemistry language


Talk to any organic chemist, and they will tell you that learning organic chemistry is like learning a new language, with a grammar built from a myriad of chemical reaction rules. And it’s also about intuition and perception, much like acquiring a language as a child.

In our paper “Extraction of organic chemistry grammar from unsupervised learning of chemical reactions,” published in the peer-reviewed journal Science Advances, scientists from IBM Research Europe, the MIT-IBM Watson AI Lab, and the University of Bern for the first time extracted the ‘grammar’ of organic chemistry’s ‘language’ from a large number of organic chemistry reactions. For that, we used RXNMapper, a cutting-edge, open-source atom-mapping tool we developed. RXNMapper performs better than the current commercially available tools, and learns without human supervision.

Cracking the language code with the Rosetta Stone

In the 19th century, the Rosetta Stone provided the starting point for scholars to crack the code of hieroglyphics, the ancient Egyptian writing system that combines logographics, syllabic and alphabetic elements. While scholars were able to quickly translate the 54 lines of Greek and 32 lines of demotic inscribed on the stone, it took years to fully decipher the 14 lines of hieroglyphs. British scholar Thomas Young made a major breakthrough in 1814, but it was Frenchman Jean-Francois Champollion who delivered a full translation in 1822. Deciphering those 14 lines through translation mapping with the other two languages written on the Rosetta Stone was enough to reconstruct the grammar and give scholars a window into a flourishing period of Egyptian language and culture.


Fast forward to today, the Rosetta Stone experience is equivalent to traveling to a foreign country to learn the native language through total immersion. The more you as the ‘scholar’ interact with the locals, their dialect, culture, customs, even street signs, the more you begin to recognize and map the recurring patterns in the structure of the language, its colloquial phrases and pronunciations, without a formal language course. Spend enough time in Germany, for example, and you will begin to notice the similarities in vocabulary between English and German, or structural differences such as the placement of the unconjugated second verb at the end of a phrase. This is where modern English deviates from German despite its Germanic roots.

Figure 1: Top: A mapping between an English phrase and the German translation. Bottom: A mapping between reactants (methanol + benzoic acid) and a product molecule (methyl benzoate) in a chemical reaction represented with a text-based line notation called SMILES.

The natural process of language acquisition or becoming fluent in a foreign language is essentially the mapping of various linguistic elements to understand the connection between the individual words, expressions and concepts and how their precise order maps to your mother tongue.

Coming back to the language of organic chemistry, we asked ourselves two basic but very important questions: What if there was a possibility to visualize the mapped patterns that you’ve learned? What if the rules of a language could be extracted from these learned patterns?

It may be impossible to extract this information from the human brain, but we thought it possible when the learner is a neural network model, such as a reaction ‘Transformer.’ We let the model learn the language of chemical reactions by repeatedly showing it millions of examples of chemical reactions. We then unboxed the trained artificial intelligence model by visually inspecting the learned patterns, which revealed that the model had captured how atoms rearrange during reactions without supervision or labeling. From this atom rearrangement signal, we extracted the rules governing chemical reactions. We found that the rules were similar to the ones we learn in organic chemistry.

Figure 2: Overview of the study and analogy between learning a new language and learning organic chemistry reactions.

The power of Transformer models


In 2018, we created a state-of-the-art online platform called RXN for Chemistry using Natural Language Processing (NLP) architectures in synthetic chemistry to predict the outcome of chemical reactions. Specifically, we used Molecular Transformer, where chemical reactions are represented by a domain-specific language called SMILES (e.g., CO.O=C(O)c1ccccc1>>COC(=O)c1ccccc1). Back then, we framed chemical transformations as translations from reactants to products, similar to translating, say, English to German. The model architecture we used in this new work is very similar, which brought up another important question: why do Transformers work so well for chemistry?
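
For readers unfamiliar with the notation, here is a small sketch that parses the reaction SMILES above with RDKit, an open-source cheminformatics toolkit that is separate from RXN for Chemistry.

```python
# A quick look at the reaction SMILES quoted above, parsed with RDKit
# (an open-source cheminformatics toolkit; not part of RXN for Chemistry itself).
from rdkit.Chem import AllChem

rxn_smiles = "CO.O=C(O)c1ccccc1>>COC(=O)c1ccccc1"  # methanol + benzoic acid -> methyl benzoate
rxn = AllChem.ReactionFromSmarts(rxn_smiles, useSmiles=True)

print("reactants:", rxn.GetNumReactantTemplates())  # 2
print("products:", rxn.GetNumProductTemplates())    # 1
```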

Transformer models are so powerful because they learn to represent inputs (atoms or words) in their context. If we take our example from Figure 1, “See” (German for lake) has an entirely different meaning than “see” in English, despite the same spelling. Similarly, in chemistry, an oxygen atom will not always carry the same meaning. Its meaning is dependent on the context or the surrounding atoms, i.e., on the atoms in the same molecule and the atoms it interacts with during a reaction.

Figure 3: Reaction Transformer model consisting of self-attention layers, each containing multiple heads. Attention patterns learned by different heads in the model.

Transformers are made of stacks of self-attention layers (Fig. 3). The attention mechanism is responsible for connecting concepts and making it possible to build meaningful representations based on the context of the atoms. Every self-attention layer consists of multiple ‘heads’ that can all learn to attend the context differently. In human language, one head might focus on what the subject is doing, another head on why, while a third might focus on the punctuation in the sentence. Learning to attend to different information in the context is crucial to understanding how the different parts of a sentence are connected to decipher the correct meaning.
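
The following sketch spells that mechanism out with plain PyTorch operations: multi-head scaled dot-product self-attention over a toy input, producing one attention map per head. The dimensions are arbitrary, and this is a generic illustration, not the RXN model itself.

```python
# Minimal sketch of scaled dot-product self-attention with multiple heads,
# written with plain PyTorch ops to show what "heads attending differently" means.
# Shapes and sizes are arbitrary; this is not the RXN model itself.
import torch
import torch.nn.functional as F

batch, seq_len, d_model, n_heads = 1, 6, 32, 4
d_head = d_model // n_heads

x = torch.randn(batch, seq_len, d_model)   # token (atom) embeddings
w_q = torch.randn(d_model, d_model)
w_k = torch.randn(d_model, d_model)
w_v = torch.randn(d_model, d_model)

def split_heads(t):
    # (batch, seq, d_model) -> (batch, heads, seq, d_head)
    return t.view(batch, seq_len, n_heads, d_head).transpose(1, 2)

q, k, v = split_heads(x @ w_q), split_heads(x @ w_k), split_heads(x @ w_v)

scores = q @ k.transpose(-2, -1) / d_head ** 0.5   # (batch, heads, seq, seq)
attn = F.softmax(scores, dim=-1)                   # one attention map per head
out = (attn @ v).transpose(1, 2).reshape(batch, seq_len, d_model)

print(attn[0, 0])   # head 0's attention pattern: how each token attends to the others
```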

RXNMapper – the ultimate atom-mapping tool


We then used this atom-mapping signal to develop RXNMapper, the new state-of-the-art, open-source atom-mapping tool. According to a recent independent benchmark study, RXNMapper outperforms what’s now commercially available. Considering the fact that the atom-mapping signal was learned without supervision, this is a remarkable result.

What impact will it have on the work of chemists? High-quality atom-mapping is an extremely important component for computational chemists. Hence, RXNMapper is an essential tool for traditional downstream applications such as reaction prediction and synthesis planning. Now we can extract the ‘grammar’ and ‘rules’ of chemical reactions from atom-mapped reactions, allowing a consistent set of chemical reaction rules to be constructed within days, rather than years, as is the case with manual curation by humans. Our RXNMapper is not only accurate, but it is also incredibly fast, mapping reactions at ~7ms/reaction. This makes it possible to map huge data sets containing millions of reactions within a few hours.
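
Based on the open-source RXNMapper package’s published interface, a usage sketch looks roughly like this; check the project’s GitHub README for the authoritative API.

```python
# Usage sketch based on the open-source RXNMapper package (pip install rxnmapper);
# see the project's GitHub README for the authoritative API and versions.
from rxnmapper import RXNMapper

rxn_mapper = RXNMapper()
rxns = ["CO.O=C(O)c1ccccc1>>COC(=O)c1ccccc1"]  # esterification example from Figure 1
results = rxn_mapper.get_attention_guided_atom_maps(rxns)

print(results[0]["mapped_rxn"])   # reaction SMILES annotated with atom-map numbers
print(results[0]["confidence"])   # model's confidence in the mapping
```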

Figure 4: RXNMapper atom-mapping illustration.

RXNMapper may be far from being the Rosetta Stone of chemistry, but it unveils the grammar contained in a coherent set of chemical reaction data in a way that we can experience full immersion. If organic chemistry isn’t a language, then tell us…what is it?

Give RXNMapper a try on our online demo, and make sure to star our open-source code on GitHub.

Source: ibm.com

Tuesday 6 April 2021

IBM Quantum systems accelerate discoveries in science


Today, computation is central to the way we carry out the scientific method. High-performance computing resources help researchers generate hypotheses, find patterns in large datasets, perform statistical analyses, and even run experiments faster than ever before. Logically, access to a completely different computational paradigm — one with the potential to perform calculations intractable for any classical computer — could open up an entirely new realm for scientific discovery.

As quantum computers extend our computational capabilities, so too do we expect them to extend our ability to push science forward. In fact, access to today’s limited quantum computers has already provided benefits to researchers worldwide, offering an unprecedented look at the inner workings of the laws that govern nature, as well as a new lens through which to approach problems in chemistry, simulation, optimization, artificial intelligence, and other fields.

Here, we demonstrate the utility of IBM Quantum hardware as a tool to accelerate discoveries across scientific research, as shown at the American Physical Society’s March Meeting 2021. The APS March Meeting is the world’s largest physics conference, where researchers present their latest results to their peers and to the wider physics community. As a leading provider of quantum computing hardware, IBM powered 46 non-IBM presentations with its quantum systems, helping researchers discover new algorithms, simulate condensed matter and many-body systems, explore the frontiers of quantum mechanics and particle physics, and push the field of quantum information science forward overall. With this year’s APS March Meeting in mind, we believe that research access to quantum hardware — both on-site and via the cloud — will become a core driver for exploration and discovery in the field of physics in the coming years.


IBM Quantum systems

At IBM Quantum, we build universal quantum computing systems for scientists, engineers, developers, and businesses. Our initiative operates a fleet of over two dozen full-stack quantum computing systems ranging from 1 to 65 qubits based on the transmon superconducting qubit architecture. These systems incorporate state-of-the-art control electronics and a continually evolving software stack to offer the best-performing quantum computing services in the world. Our team has released our development roadmap, demonstrating how we plan not only to scale processors up, but also to turn these devices into transformative computational tools.

IBM offers access to its quantum computing systems through several avenues. Our flagship program is the IBM Quantum Network, including our hubs, which collaborate with IBM on advancing quantum computing research; our industry partners, who explore a broad set of potential applications; and our members, who seek to build their general knowledge of quantum computing. At the broadest level, members of our community use the IBM Quantum Composer and IBM Quantum Lab programming tools, as well as the Qiskit open source software development kit, to build and visualize quantum circuits and run quantum experiments on a dozen smaller devices. Researchers can also receive priority system access through our IBM Quantum Researchers Program.
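As a flavor of what community users run with these tools, here is a minimal Qiskit sketch using the circa-2021 interface; a real IBM Quantum backend obtained from a provider account can be substituted for the local simulator.

```python
# Minimal Qiskit sketch (circa-2021 API): build a two-qubit Bell-state circuit
# and run it on the local Aer simulator. Swapping in a real IBM Quantum backend
# only changes the backend object obtained from your provider account.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)            # put qubit 0 into superposition
qc.cx(0, 1)        # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)      # expect roughly equal '00' and '11' outcomes
```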

Through the Network, the Researchers Program, and the quantum programming tools available to the broader community, IBM offers a range of support to facilitate the research and discovery process. This includes, but is not limited to, direct collaboration with our quantum researchers on projects, consultation on potential topic-specific use cases, and fostering an open source community passionate about advancing the field of quantum computation.

Developing an ecosystem around cloud-based quantum access

As quantum computers mature, their physical requirements will necessitate that most users access them remotely and program them in a frictionless way — that is, reap their benefits without needing to be a quantum mechanics expert. Quantum computing outfits across the industry are developing quantum systems in anticipation of this developing ecosystem. Access to these cloud-based computers will be of chief importance to three key developer segments: quantum kernel developers, seeking to understand quantum computers and their underlying mechanics down to the level of logic circuits; quantum algorithm developers, employing these circuits to find potential advantages over existing classical computing algorithms and to push the limits of computing overall; and model developers, who apply these algorithms to perform research on real-world use cases in fields like physics, chemistry, optimization, machine learning, and others.

While IBM is developing our own ecosystem through accessible services on the IBM Cloud, we think that quantum access is important beyond our own communities. We’ve developed Qiskit to run application modules on any quantum computing platform, even other architectures such as trapped-ion devices. Ultimately, our goal is to democratize access to quantum computing, while providing the best hardware and expertise to all of those who hope to do research with and on our devices.

Using quantum computers for discovery, today

The multitude of presentations leveraging IBM Quantum at the APS March Meeting demonstrates not only adoption of IBM’s quantum computers as a platform for research by institutions outside of IBM, but more importantly, that the ability to access and run programs on these devices via the cloud is already advancing science and research today. Experiments on our systems spanned each of our projected developer segments, from kernel developers researching quantum computing itself to algorithm developers and model developers employing quantum computing as a means to approach other problems in physics and beyond.

Quantum simulation

The innately quantum nature of qubits means that even noisy quantum computers serve as powerful analog and digital simulators of quantum mechanics, such as the systems studied in quantum many-body and condensed matter physics. Quantum computers are arguably already providing a quantum advantage to researchers in these fields, who are able to tackle problems with a simulator whose properties more closely align with the systems they wish to study than those of a classical computer. IBM Quantum systems played a central role in many of these cutting-edge studies at APS March.

For example, in her presentation, “Scattering in the Ising Model with the Quantum Lanczos Algorithm”, Oak Ridge National Lab’s Kubra Yeter Aydeniz simulated one-particle propagation and two-particle scattering in the ubiquitous one-dimensional Ising model of particles in one of two spin states, here with periodic boundary conditions. Her team employed the Quantum Lanczos algorithm to calculate the energy levels and eigenstates of the system, gathering information on particle numbers at spatial sites, transition amplitudes, and the transverse magnetization as a function of time.
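As a rough illustration of the kind of model involved (this is not the Quantum Lanczos algorithm from the talk), the sketch below builds a first-order Trotterized time-evolution circuit for a one-dimensional transverse-field Ising chain with periodic boundary conditions in Qiskit; all parameter values are illustrative.

```python
# Sketch of first-order Trotterized time evolution for a 1D transverse-field
# Ising chain with periodic boundary conditions (illustrative parameters only;
# not the Quantum Lanczos algorithm presented in the talk).
from qiskit import QuantumCircuit

def ising_trotter_circuit(n_qubits=4, J=1.0, h=0.5, dt=0.1, steps=5):
    qc = QuantumCircuit(n_qubits)
    for _ in range(steps):
        # ZZ coupling terms; the (n-1, 0) pair closes the periodic boundary.
        for i in range(n_qubits):
            j = (i + 1) % n_qubits
            qc.rzz(2 * J * dt, i, j)
        # Transverse-field terms.
        for i in range(n_qubits):
            qc.rx(2 * h * dt, i)
    return qc

qc = ising_trotter_circuit()
qc.measure_all()
print(qc.depth())
```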

Benchmarking and characterizing noisy quantum systems

As quantum computers grow in complexity, simulating their results classically will grow more difficult, in turn hampering our ability to tell whether they’ve successfully run a circuit. Researchers are therefore devising methods to characterize and benchmark the performance of near-term quantum computers overall — and hopefully develop methods that will continue to be applicable as quantum computers increase in size and complexity. A series of APS March talks demonstrated benchmarking methods applied to IBM’s quantum devices.

In one such talk, “Scalable and targeted benchmarking of quantum computers,” Sandia National Lab’s Timothy Proctor presented a scalable and flexible benchmarking technique that expands on the IBM-devised Quantum Volume metric to capture the potential tradeoff between increasing a circuit’s depth (the number of time-steps’ worth of gates) and its width (the number of qubits employed). By employing randomized mirror circuits — those composed of a random series of one- and two-qubit operations, followed by the inverse of those operations — the team developed a benchmarking strategy that can work efficiently on quantum computers of hundreds or perhaps thousands of qubits.
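The sketch below captures the basic idea of a mirror circuit: a random block of one- and two-qubit gates followed by its inverse, so the ideal outcome is known in advance. It is only a conceptual illustration, not Sandia's exact benchmarking protocol.

```python
# Conceptual sketch of a mirror circuit: a random block of one- and two-qubit
# gates followed by its inverse, so a perfect machine returns the input state.
# This illustrates the idea only; it is not the exact protocol from the talk.
from qiskit.circuit.random import random_circuit

def mirror_circuit(num_qubits=4, depth=6, seed=7):
    forward = random_circuit(num_qubits, depth, max_operands=2, seed=seed)
    mirrored = forward.compose(forward.inverse())   # ideal net effect: identity
    mirrored.measure_all()
    return mirrored

qc = mirror_circuit()
# On a perfect machine every shot returns all zeros; deviations quantify error.
print(qc.depth())
```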

Algorithmic Discovery

We hope that, one day, quantum computers will employ superposition, entanglement, and interference in order to provide new ways to solve traditionally difficult problems. Today, scientists are working to develop algorithms that will provide those potential speedups—with an eye toward what sorts of benefits they may gather from algorithms they can run on present-day devices. IBM’s quantum devices served as the ideal testbed for teams looking for a system with which to develop hardware-aware algorithms.

For example, in the NSF-funded work “Rodeo Algorithm for Quantum Computation”, Jacob Watkin presented a new approach to the ubiquitous quantum phase estimation problem, called the Rodeo Algorithm, targeted at near-term quantum devices. The algorithm, meant to generalize the famous Kitaev phase estimation algorithm, employs stochastically varying phase shifts to achieve results at short gate depths.

Advancing Quantum Computing

Perhaps the most popular use of IBM’s quantum systems at the APS March Meeting was as a foundation upon which to study the inner workings of quantum devices, including characterizing noise, testing the fidelity of the chips, developing error correction and mitigation strategies, and other research meant to advance the field as a whole. We hope that the advances gleaned from studying our devices will benefit the field overall.

In “Error mitigation with Clifford quantum-circuit data”, Piotr Czarnik from Los Alamos National Laboratory proposed a new error mitigation method for gate-based quantum computers. The method begins by generating training data from quantum circuits built only from Clifford-group gates, then creates a linear fit to that data which can predict noise-free observables for arbitrary noisy circuits. Czarnik’s team demonstrated an order-of-magnitude error reduction for a ground-state energy problem by running their error mitigation strategy on the 16-qubit ibmq_melbourne system.
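A minimal sketch of the fitting step is shown below: given pairs of exact and noisy expectation values from classically simulable Clifford training circuits, a linear map is fit and then applied to the noisy result of the circuit of interest. The training numbers here are synthetic placeholders, not data from the paper.

```python
# Minimal sketch of the fitting step in Clifford-data-based error mitigation:
# fit a linear map from noisy to exact expectation values using training
# circuits that can be simulated classically, then apply it to the noisy
# result of the target circuit. The numbers below are synthetic placeholders.
import numpy as np

# Hypothetical training data: exact values from a stabilizer simulator,
# noisy values from hardware runs of the same Clifford training circuits.
exact_vals = np.array([1.00, 0.50, 0.00, -0.50, -1.00])
noisy_vals = np.array([0.78, 0.41, 0.02, -0.37, -0.80])

# Fit exact ≈ a * noisy + b.
a, b = np.polyfit(noisy_vals, exact_vals, deg=1)

noisy_result = 0.55                # noisy expectation value of the target circuit
mitigated = a * noisy_result + b   # error-mitigated estimate
print(round(mitigated, 3))
```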

…And more

Access to a controllable quantum system offers researchers a new way to think about problems across physics. For example, in “Collective Neutrino Oscillations on a Quantum Computer”, Shikha Bangar demonstrated that quantum resources can serve as an efficient way to represent a particle physics system — collective neutrino oscillations. Meanwhile, in “Quantum Sensing Simulation on Quantum Computers using Optimized Control”, Paraj Titum from The Johns Hopkins University Applied Physics Laboratory developed new protocols to detect signals over background noise, and demonstrated the protocol on an IBM quantum computer.

The Future

The IBM Quantum team is thrilled knowing that our hardware is accelerating scientific progress around the world, and we continue to push progress on our own hardware to keep these discoveries flowing. The APS March Meeting also served as a venue for our researchers to present some of the ideas they’re developing for future quantum systems, including advanced packaging technologies, novel qubit coupling architectures, and even qubits a tiny fraction of the size of our current transmons. We also used these very same IBM Quantum systems to drive progress in improving quantum volume, demonstrating algorithms and quantum advantage, and exploring dynamic circuits and quantum error correction. The interplay between the end users of IBM’s systems and the researchers developing the next generation of processors helps keep IBM’s devices cutting-edge and relevant in the months and years to come.

Access to quantum computing systems is advancing science, even in this early era of noisy quantum computers. This applies to more than just IBM’s systems; scientists at the APS March Meeting presented results based on access to other superconducting architectures, such as Rigetti’s, as well as trapped-ion qubit systems like those built by Honeywell. Our analysis of the 2021 APS March Meeting’s results demonstrates that investment in and use of existing cloud-based quantum computing platforms gives researchers a powerful tool for scientific discovery. We expect the pace of discovery to accelerate as quantum computing systems and their associated cloud-based ecosystem mature.

Source: ibm.com

Monday 5 April 2021

IBM researchers use epidemiology to find the best lockdown duration


We finally have vaccines, but prevention strategies and mitigation of the virus’s spread will stay with us for the foreseeable future, in the form of lockdowns. While lockdowns are effective for helping to deal with disease spread, their duration during the current pandemic has typically been chosen through empirical observation of symptoms.


But is it the best way?

Our team at IBM Research, in collaboration with the team of Dr. Ira Schwartz at the US Naval Research Laboratory, aims to provide an arguably more accurate approach to choosing the optimal duration of lockdowns, based on epidemiological theory.

In a recent paper, “Optimal periodic closure for minimizing risk in emerging disease outbreaks,” published in PLOS ONE, we describe a new technique to calculate the optimal duration of a periodic lockdown during an outbreak of an infectious disease for which there is no cure or vaccine. Our findings differ from the lockdown durations widely applied during COVID-19.

Using an epidemiological model and a new mathematical formulation, we’ve assessed the optimal duration of a lockdown to help minimize the spread of the virus — and found that it can vary between 10 and 20 days rather than the inflexible and imprecise current protocol of two weeks.

The rationale for the 14-day duration

During the current pandemic, nations have often imposed lockdowns based on the time it takes for symptoms to appear, estimated to be at most two weeks. The lockdown would then be periodically reassessed.

However, our findings are different.

We show that an optimal, data-driven way to help control an epidemic is by closing businesses, schools, and other public meeting places for a period roughly equal to two to four times the mean incubation period, or between 10 and 20 days, based on measurable local health factors. After that time, these places can be reopened for about the same period, until the outbreak is controlled and the disease is eradicated.

Importantly, this period depends on the so-called disease reproductive number, or R0, a measure of the potential of the disease to spread in a population. When R0 is larger than 1, the disease spreads and triggers an outbreak. When R0 is smaller than 1, the disease dies out after having been put under control.

We’ve found that the higher the value of R0, the longer the lockdown needs to be to curb the spread, and vice versa. We’ve also found that when the reproductive number exceeds a certain threshold, the spread cannot be controlled by periodic lockdowns. This observation, which has not been suggested before, may have important consequences not only for the current COVID-19 pandemic, but also for the next one, whenever it may happen.

“Control theory” for lockdowns

Not much work has been devoted to the lockdown duration until now. Some recent papers have suggested strategies for lockdowns, but they were mostly computational in nature. Our work, on the other hand, introduces a mathematical framework based on the theory of epidemiology for the assessment of the effect of lockdowns. As such, its application is general and can be used not only for COVID-19, but for any disease for which a periodic shutdown may be necessary to contain and slow community spread.

We used a mathematical approach called control theory, widely used in engineering (for example, in the design of aircraft and ships), biology, and artificial intelligence. We assume that the incidence of the disease — the number of infectious cases per day — is something that can be ‘controlled’ using periodic lockdowns as ‘controllers.’ We then determine the conditions a lockdown needs to meet for the total incidence to be minimized over the course of the outbreak.
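To make the idea tangible, here is a minimal sketch, not the paper's actual formulation, that simulates an SIR outbreak whose contact rate drops during periodic closures and sweeps the closure length to find the one that minimizes total infections; every parameter value below is an illustrative assumption.

```python
# Minimal sketch (not the paper's exact formulation): an SIR outbreak whose
# contact rate drops during periodic closures, with a sweep over the closure
# length to find the duration that minimizes total infections.
# All parameter values are illustrative assumptions.
def total_infected(lockdown_days, beta_open=0.3, beta_closed=0.09,
                   gamma=0.1, days=365, dt=0.1, i0=1e-4):
    s, i, r = 1.0 - i0, i0, 0.0
    t = 0.0
    while t < days:
        # Alternate equal-length open/closed phases of `lockdown_days` each.
        phase = int(t // lockdown_days) % 2
        beta = beta_closed if phase == 1 else beta_open
        new_inf = beta * s * i * dt      # new infections in this time step
        new_rec = gamma * i * dt         # new recoveries in this time step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        t += dt
    return 1.0 - s   # cumulative fraction of the population ever infected

durations = range(5, 31)
best = min(durations, key=total_infected)
print("best periodic closure length:", best, "days")
```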

Paired with a predictive model, like the one used in IBM Watson Works’ Return to Work Advisor, which mixes rigorous epidemiological theory with AI, we believe that our research results can make the difference between a large outbreak and a small one.

It’s clear that to control an outbreak of an infectious disease when there are no vaccinations or treatments, breaking contact is a must. We hope that our work will help to further reduce the contact rate and pave the way to determining an optimal cycle of lockdowns when the next pandemic hits.

Source: ibm.com