
Tuesday, 13 June 2023

5G network rollout using DevOps: Myth or reality?


The deployment of Telecommunication Network Functions had always been a largely manual process until the advent of 5th Generation Technology (5G). 5G requires that network functions be moved from a monolithic architecture toward modularized and containerized patterns. This opened up the possibility of introducing DevOps-based deployment principles (which are well-established and adopted in the IT world) to the network domain.

Even after containerization, 5G network functions remain quite different from traditional IT applications because of strict requirements on the underlying infrastructure, including specialized accelerators (SR-IOV, DPDK) and network plugins (Multus) that deliver the performance needed for mission-critical, real-time traffic. Deployment must therefore be carefully segregated into various “functional layers” of DevOps functionality that, when executed in the correct order, provide a completely automated deployment aligned closely with established IT DevOps capabilities.

This post provides a view of how these layers should be managed and implemented across different teams.

The need for DevOps-based 5G network rollout


5G rollout brings requirements that make it essential to aggressively automate the deployment and management process (in contrast to the largely manual processes of earlier technologies such as 4G):

◉ Pace of rollout: 5G networks are deployed at record speeds to achieve coverage and market share.

◉ Public cloud support: Many CSPs use hyperscalers like AWS to host their 5G network functions, which requires automated deployment and lifecycle management.

◉ Hybrid cloud support: Some network functions must be hosted in a private data center, which also requires the ability to place network functions dynamically and automatically.

◉ Multicloud support: In some cases, multiple hyperscalers are necessary to distribute the network.
◉ Evolving standards: New and evolving standards such as Open RAN require continuous updates and automated testing.

◉ Growing vendor ecosystems: Open standards and APIs mean many new vendors are developing network functions that require continuous interoperability testing support.

All the above factors require a highly automated process that can deploy, re-deploy, place, terminate, and test 5G network functions on demand. This cannot be achieved with the traditional way of manually deploying and managing network functions.

Four layers to design with DevOps principles


There are four “layers” that must be designed with DevOps processes in mind:

[Diagram: reference DevOps toolchain implementations for the four layers]

1. Infrastructure: This layer is responsible for deploying the cloud (private/public) infrastructure that hosts network functions. It automates the deployment of virtual private clouds, clusters, node groups, security policies, and other resources required by the network function, and it ensures the correct infrastructure type is selected with the CNIs the network function requires (e.g., SR-IOV, Multus).

2. Application/network function: This layer is responsible for installing network functions on the infrastructure by running Helm-style commands and post-install validation scripts. It also takes care of major upgrades to the network function.

3. Configuration: This layer takes care of any new Day 2 metadata/configuration that must be loaded on the network function. For example, new metadata loaded to support slice templates in the Policy Control Function (PCF).

4. Testing: This layer is responsible for running automated tests against the various functionalities supported by network functions.
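As a rough illustration of how layers 1 and 2 might be encoded as pipeline steps, the sketch below gates a CNF install on the cluster exposing the required CNIs, then assembles the Helm command the application pipeline would run. All names (`check_required_cnis`, `build_helm_install`, the chart and release names) are illustrative assumptions, not part of any real toolchain.

```python
# Layer 1 gate + layer 2 install step for a containerized network function (CNF).
# Chart paths, release names, and function names are hypothetical.

REQUIRED_CNIS = {"multus", "sriov"}  # CNIs this CNF needs from the infrastructure layer

def check_required_cnis(cluster_cnis):
    """Layer 1 gate: verify the cluster exposes every CNI the CNF requires."""
    missing = REQUIRED_CNIS - {c.lower() for c in cluster_cnis}
    if missing:
        raise RuntimeError(f"cluster is missing CNIs: {sorted(missing)}")
    return True

def build_helm_install(release, chart, namespace, values_file):
    """Layer 2 step: assemble the helm command the application pipeline would run."""
    return [
        "helm", "upgrade", "--install", release, chart,
        "--namespace", namespace,
        "--values", values_file,
        "--wait",  # block until pods are ready so post-install validation can run
    ]
```

In a real pipeline, `check_required_cnis` would read the CNI list from the cluster (for example, from NetworkAttachmentDefinitions), and the returned command would be executed by the CI runner before the post-install validation scripts.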

Each of the above layers has its own implementation of DevOps toolchains, with a reference provided in the diagram above. Layers 1 and 2 can be further enhanced with a GitOps-based architecture for lights-out management of the application.

Best practices


It is important to have a well-defined framework covering the scope, dependencies, and ownership of each layer. The following table is our view of how these should be managed:

| Pipeline | Infrastructure | Application | Configuration | Testing |
|---|---|---|---|---|
| Scope (functionality to automate) | VPC, subnets, EKS cluster, security groups, routes | CNF installation, CNF upgrades | CSP slice templates, CSP RFS templates, releases and bug fixes | Release testing, regression testing |
| Phase (applicable network function lifecycle phase) | Day 1 (infrastructure setup) | Day 0 (CNF installation), Day 1 (CNF setup) | Day 2+, on demand | Day 2+, on demand |
| Owner (who owns development and maintenance of the pipeline?) | IBM/cloud vendor | IBM/SI | IBM/SI | IBM/SI |
| Source control (where source artifacts are stored; any change triggers the pipeline, depending on the use case) | Vendor detailed design | ECR repo (images), Helm package | Code commit (custom code) | Code commit (test data) |
| Target integration (how the pipeline interacts during execution) | IaC (e.g., Terraform), AWS APIs | Helm-based | RestConf/APIs | RestConf/APIs |
| Dependency between pipelines | None | Infrastructure pipeline completed | Base CNF installed | Base CNF installed, release deployed |
| Promotion across environments | Dev, Test/Pre-prod, Prod | Dev, Test/Pre-prod, Prod | Dev, Test/Pre-prod, Prod | Dev, Test/Pre-prod, Prod |

As you can see, there are dependencies between these pipelines. To make this end-to-end process work efficiently across multiple layers, you need an intent-based orchestration solution that can manage dependencies between various pipelines and perform supported activities in the surrounding CSP ecosystem, such as Slice Inventory and Catalog.
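The dependency row above can be sketched as a small ordering problem: an orchestrator only starts a pipeline once the pipelines it depends on have completed. The sketch below is a minimal, illustrative topological sort over those dependencies; it is not any particular orchestration product.

```python
# Minimal sketch of intent-based ordering across the four pipelines,
# derived from the "Dependency between pipelines" row of the table above.

# pipeline -> pipelines that must complete first
DEPENDENCIES = {
    "infrastructure": [],
    "application": ["infrastructure"],
    "configuration": ["application"],
    "testing": ["application", "configuration"],
}

def execution_order(deps):
    """Topologically sort pipelines so each runs after all of its dependencies."""
    order, done = [], set()

    def visit(name, stack=()):
        if name in done:
            return
        if name in stack:  # a cycle would make the rollout unschedulable
            raise ValueError(f"dependency cycle at {name}")
        for dep in deps[name]:
            visit(dep, stack + (name,))
        done.add(name)
        order.append(name)

    for name in deps:
        visit(name)
    return order
```

A real intent-based orchestrator would add retries, per-environment promotion (Dev, Test/Pre-prod, Prod), and hooks into the surrounding CSP ecosystem, but the scheduling core is this kind of dependency walk.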

Telecommunications solutions from IBM


This post provides a framework and approach that, when orchestrated correctly, enables a completely automated DevOps-/GitOps-style deployment of 5G network functions.

In our experience, the deciding factor in the success of such a framework is the selection of a partner with experience and a proven orchestration solution.


Source: ibm.com

Thursday, 6 April 2023

What is IBM Cloud Pak for Business Automation?


In today's fast-paced business environment, it is imperative to automate business processes to increase efficiency and productivity. IBM has introduced its Cloud Pak for Business Automation, which is a comprehensive and integrated platform that helps organizations automate their business processes. This article aims to provide an in-depth understanding of IBM Cloud Pak for Business Automation.

Overview of IBM Cloud Pak for Business Automation


IBM Cloud Pak for Business Automation is an integrated platform that provides businesses with tools to automate their processes, manage their workflows, and gain insights into their operations. It offers a set of pre-built and customizable automation capabilities that can be deployed in a hybrid cloud environment. The platform leverages artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) technologies to automate complex business processes.

Benefits of IBM Cloud Pak for Business Automation


IBM Cloud Pak for Business Automation offers several benefits to businesses, including:

Increased Efficiency

The platform helps businesses automate their processes, reducing manual intervention and increasing efficiency. It allows businesses to streamline their workflows, reduce errors, and improve accuracy.

Improved Visibility

IBM Cloud Pak for Business Automation provides businesses with real-time visibility into their operations. The platform offers analytics and reporting capabilities that help businesses gain insights into their processes and identify areas for improvement.

Increased Agility

The platform enables businesses to quickly adapt to changes in the market. It offers flexibility and scalability, allowing businesses to easily add or remove capabilities as needed.

Features of IBM Cloud Pak for Business Automation


IBM Cloud Pak for Business Automation offers several features that help businesses automate their processes. Some of the key features of the platform include:

Process Automation

The platform offers tools to automate end-to-end business processes. It enables businesses to model, monitor, and optimize their workflows, reducing manual intervention and increasing efficiency.

Decision Automation

IBM Cloud Pak for Business Automation offers tools to automate decision-making processes. The platform leverages AI and ML technologies to automate complex decision-making processes, improving accuracy and reducing errors.

Document Processing

The platform provides tools to automate document processing. It leverages NLP technologies to extract data from unstructured documents, reducing manual intervention and improving accuracy.

Robotic Process Automation

IBM Cloud Pak for Business Automation offers tools to automate repetitive tasks. The platform leverages robotic process automation (RPA) technologies to automate tasks such as data entry, reducing errors and increasing efficiency.

Deployment Options for IBM Cloud Pak for Business Automation


IBM Cloud Pak for Business Automation can be deployed in a hybrid cloud environment, offering businesses flexibility and scalability. The platform can be deployed on-premises, in a public cloud, or in a private cloud, depending on the business's requirements.

Conclusion

IBM Cloud Pak for Business Automation is a comprehensive and integrated platform that helps businesses automate their processes, manage their workflows, and gain insights into their operations. The platform offers several benefits, including increased efficiency, improved visibility, and increased agility. It provides businesses with a set of pre-built and customizable automation capabilities that can be deployed in a hybrid cloud environment.

Thursday, 9 March 2023

Innocens BV leverages IBM Technology to Develop an AI Solution to help detect potential sepsis events in high-risk newborns


From the moment of birth to discharge, healthcare professionals can collect a wealth of data about an infant’s vitals, such as heartbeat frequency or every rise and drop in blood oxygen level. Although medicine continues to advance, much remains to be done to reduce premature births and infant mortality. The worldwide statistics are staggering: the University of Oxford estimates that neonatal sepsis causes 2.5 million infant deaths annually.

Babies born prematurely are susceptible to health problems. Sepsis, or bloodstream infection, is life-threatening and a common complication for infants admitted to a Neonatal Intensive Care Unit (NICU).

At Innocens BV, the belief is that earlier identification of sepsis-related events in newborns is possible, especially given the vast number of data points collected from the moment a baby is born. Years’ worth of aggregated data in the NICU could help lead us to a solution. The challenge was gleaning relevant insights from the vast amount of data collected to help identify those infants at risk. This mission is how Innocens BV began in the NICU at Antwerp University Hospital in Antwerp, Belgium, in cooperation with the University of Antwerp. The hospital’s NICU is closely associated with the university, and its focus is on improving care for premature and low-birthweight infants. We joined forces with a bioinformatics research group from the University of Antwerp and took the first steps in developing a solution.

Using IBM’s technology and the expertise of its data scientists, along with the knowledge and insights of the hospital’s NICU medical team, we kicked off a project to develop these ideas into a solution: one that uses clinical signals routinely collected in care to help doctors detect, in a timely way, the patterns in that data associated with a sepsis episode. Our approach required both AI and edge computing to create a predictive model that could process years of anonymized data and help doctors make informed decisions. We wanted to help them observe and monitor the thousands of data points available to make informed decisions.

How AI powers the Innocens Project


When the collaboration began, data scientists at IBM understood they were dealing with a sensitive topic and sensitive information. The Innocens team needed to build a model that could detect subtle changes in neonates’ vital signs while generating as few false alarms as possible. This required a model with a high level of precision that is also built upon key principles of trustworthy AI, including transparency, explainability, fairness, privacy, and robustness.

Using IBM Watson Studio, a service available on IBM Cloud Pak for Data, to train and monitor the AI solution’s machine learning models, Innocens BV could help doctors by providing data-driven insights associated with a potential onset of sepsis. Early results on historical data show that many severe sepsis cases can be identified multiple hours in advance. The user interface presenting the output of the predictive AI model is designed to provide doctors and other medical personnel with insights on individual patients and to augment their clinical intuition.
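The precision/false-alarm trade-off described above can be illustrated with a toy calculation (this is not the Innocens model; the scores, labels, and function name are invented for illustration): raising the alarm threshold trims false alarms, at the cost of missing cases that score lower.

```python
# Toy illustration of alarm precision at a risk-score threshold.
# Scores, labels, and the function name are hypothetical.

def precision_at_threshold(scores, labels, threshold):
    """Fraction of raised alarms (score >= threshold) that were true sepsis cases."""
    alarms = [(s, y) for s, y in zip(scores, labels) if s >= threshold]
    if not alarms:
        return None  # no alarms raised at this threshold
    return sum(y for _, y in alarms) / len(alarms)

scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.30]  # model risk scores per infant
labels = [1,    1,    0,    1,    0,    0]     # 1 = confirmed sepsis episode
```

At a threshold of 0.85 every alarm in this toy data is a true case (precision 1.0), while lowering it to 0.5 catches one more true case but admits a false alarm. Choosing that operating point with clinicians is exactly the trust-building work the post describes.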

Innocens worked closely with IBM and medical personnel at Antwerp University Hospital to develop a purposeful platform with a user interface that is consistent and easy to navigate, built on a comprehensible AI model with explainable AI capabilities. With the doctors and nurses in mind, the team aimed to create a model that would allow the intended users to reap its benefits. This work was imperative for building trust between the users and the instruments that would help inform a clinician’s diagnosis. Innocens also involved doctors in the process of building the user interface and respected the privacy and confidentiality of the anonymized historical patient data used to train the model within a robust data architecture.

The technology and outcomes of this research project could have the potential to not only help the patients at Antwerp University Hospital, but to scale for different NICU centers and help other hospitals as they work to combat neonatal sepsis. Innocens BV is working in collaboration with IBM to explore how Innocens can continue to leverage data to help train transparent and explainable AI models capable of finding patterns in patient data, providing doctors with additional data insights and tools that help inform clinical decision-making.

The impact of the Innocens technology is being investigated in clinical trials and is not yet commercially available.

Source: ibm.com

Thursday, 29 September 2022

From principles to actions: building a holistic approach to AI governance


Today, AI permeates every business function. Whether it be financial services, employee hiring, customer service management or healthcare administration, AI is increasingly powering critical workflows across all industries.

But with greater AI adoption comes greater challenges. In the marketplace we have seen numerous missteps involving inaccurate outcomes, unfair recommendations, and other unwanted consequences. This has created concerns among both private and public organizations as to whether AI is being used responsibly. Add navigating complex compliance regulations and standards to the mix, and the need for a solid and trustworthy AI strategy becomes clear.

To scale use of AI in a responsible manner requires AI governance, the process of defining policies and establishing accountability throughout the AI lifecycle. This in turn requires an AI ethics policy, as only by embedding ethical principles into AI applications and processes can we build systems based on trust.

IBM Research has been developing trustworthy AI tools since 2012. When IBM launched its AI Ethics Board in 2018, AI ethics was not a hot topic in the press, nor was it top-of-mind among business leaders. But as AI has become essential, touching on so many aspects of daily life, the interest in AI ethics has grown exponentially.

In a 2021 study by the IBM Institute for Business Value, nearly 75% of executives ranked AI ethics as important, a jump from less than 50% in 2018. What’s more, the study suggests, organizations that implement a broad AI ethics strategy, interwoven throughout business units, may have a competitive advantage moving forward.

The principles of AI ethics


At IBM we believe building trustworthy AI requires a multidisciplinary, multidimensional approach based on the following three ethical principles:

1. The purpose of AI is to augment human intelligence, not replace it.

At IBM, we believe AI should be designed and built to enhance and extend human capability and potential.

2. Data and insights belong to their creator.

IBM clients’ data is their data, and their insights are their insights. We believe that data policies should be fair and equitable and prioritize openness.

3. Technology must be transparent and explainable.

Companies must be clear about who trains their AI systems, what data was used in training and, most importantly, what went into their algorithms’ recommendations.

When thinking about what it takes to really earn trust in decisions made by AI, leaders should ask themselves five human-centric questions: Is it easy to understand? Is it fair? Did anyone tamper with it? Is it accountable? Does it safeguard data? These questions translate into five fundamental principles for trustworthy AI: fairness, robustness, privacy, explainability and transparency.

AI governance: From principles to actions


When discussing AI governance, it’s important to be conscious of two distinct aspects coming together:

Organizational AI governance encompasses deciding and driving AI strategy for an organization. This includes establishing AI policies for the organization based on AI principles, regulations and laws.

AI model governance introduces technology to implement guardrails at each stage of the AI/ML lifecycle. This includes data collection, instrumenting processes and transparent reporting to make needed information available for all the stakeholders.

Often, organizations looking for trustworthy solutions in the form of AI governance require guidance on one or both of these fronts.

Scaling trustworthy AI


Recently an American multinational financial institution came to IBM with several challenges, including deploying hundreds of machine learning models built on multiple data science stacks composed of open-source and third-party tools. The chief data officer saw that it was essential for the company to have a holistic framework that would work with the models built across the company, using all these diverse tools.

In this case IBM Expert Labs collaborated with the financial institution to create a technology-led solution using IBM Cloud Pak for Data. The result was an AI governance hub built at enterprise scale, which allows the CDO to track and govern hundreds of AI models for compliance across the bank, irrespective of the machine learning tools used.

Sometimes an organization’s need is more tied to organizational AI governance. For instance, a multinational healthcare organization wanted to expand an AI model that was being used to infer technical skills to now infer soft/foundational skills. The company brought in members of IBM Consulting to train the organization’s team of data scientists on how to use frameworks for systemic empathy, well before code is written, to consider intent and safeguard rails for models.

After the success of this session, the client saw the need for broader AI governance. With help from IBM Consulting, the company established its first AI ethics board, a center of excellence and an AI literacy program.

In many instances, enterprise-level organizations need a hybrid approach to AI governance. Recently a French banking group was faced with new compliance measures. The company did not have enough organizational processes and automated AI model monitoring in place to address AI governance at scale. The team also wanted to establish a culture to responsibly curate AI. They needed both an organizational AI governance and AI model governance solution.

IBM Consulting worked with the client to establish a set of AI principles and an ethics board to address the many upcoming regulations. This effort ran together with IBM Expert Labs services that implemented the technical solution components, such as an enterprise AI workflow, monitors for bias, performance and drift, and generating fact sheets for the AI models to promote transparency across the broader organization.
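The fact-sheet idea mentioned above can be sketched in a few lines: gather the lifecycle facts stakeholders need (data provenance, monitored metrics, sign-offs) into one transparent record, and refuse to publish an incomplete one. The field names and function below are assumptions for illustration, not a real IBM factsheet schema.

```python
# Hedged sketch of AI model governance instrumentation: one record per model,
# rejected if any governance-required section is empty. Field names are invented.

def build_factsheet(model_name, training_data, metrics, approvals):
    """Aggregate lifecycle facts into one transparent record for stakeholders."""
    sheet = {
        "model": model_name,
        "training_data": training_data,  # provenance of the data used to train
        "metrics": metrics,              # e.g. accuracy, bias, and drift checks
        "approvals": approvals,          # sign-offs from governance owners
    }
    missing = [key for key, value in sheet.items() if not value]
    if missing:
        raise ValueError(f"fact sheet incomplete, missing: {missing}")
    return sheet
```

In a real deployment, this record would be generated automatically at each lifecycle stage rather than assembled by hand, which is what makes governance scale to hundreds of models.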

Establishing both organizational and AI model governance to operationalize AI ethics requires a holistic approach. IBM offers unique, industry-leading capabilities for your AI governance journey:

◉ Expert Labs for a technology solution that provides guardrails across all stages of the AI lifecycle
◉ IBM Consulting for a holistic approach to socio-technological challenges

Source: ibm.com

Sunday, 30 January 2022

IBM Cloud Pak for Business Automation on Linux on Z and LinuxONE


We all witnessed the world change right before our very eyes. As a result, companies had to change the way they do business with a greater dependency on automation. As we learn to live with the new normal, we go into 2022 well equipped to support your continuous journey through automation.

Taken together, app modernization, co-location, and sustainability create the perfect recipe to support the evolution of your business model. Both the COVID-19 pandemic and the Great Resignation have made IBM Cloud Pak for Business Automation on Linux on Z and LinuxONE critical to weathering day-to-day business challenges. Business automation helps achieve and retain faster, more reliable business results.

IBM Cloud Pak for Business Automation accelerates application modernization by offering a set of services and capabilities intended to determine best practices and find blockers or ineffective processes. These business challenges are resolved by automating and streamlining processes to improve your business productivity and decrease errors.

IBM Cloud Pak for Business Automation runs on Red Hat OpenShift, including Red Hat OpenShift on the IBM Linux on Z and LinuxONE platform. By leveraging Red Hat OpenShift, you have the option of easily moving or co-locating the application closer to the data by deploying on Linux on Z. The IBM Z platform is renowned for bringing best-in-class data privacy, security, availability, scalability, resiliency, and sustainability to our clients’ hybrid multicloud approach. IBM Z has a twenty-four-year history of improved energy efficiency, and its ability to keep improving energy efficiency, power conversion, and cooling efficiency is critical to your business model. IBM Z and LinuxONE provide an economical and sustainable path for our clients to run IBM Cloud Pak for Business Automation on a cloud-native platform with flexible computing.

The following capabilities of IBM Cloud Pak for Business Automation are delivered as containers that run on Red Hat OpenShift on the IBM Z platform:

Content supports unstructured or semi-structured data, including documents, text, images, audio, and video. Content services securely manage the full lifecycle of content.

◉ FileNet Content Manager

◉ Business Automation Navigator

◉ IBM Enterprise Records

Decisions provides repeatable rules and policies for day-to-day business operations, allowing you to gather, manage, execute, and monitor decisions.

◉ Operational Decision Manager

◉ Automation Decision Services

Workflow defines how work gets done through a sequence of steps performed by humans and systems. Workflow management is the design, execution and monitoring of workflows.

◉ Business Automation Workflow

◉ Automation Workstream Services

Operational Intelligence provides a deep understanding of business operations by capturing and analyzing data generated by operational systems. The data is presented in dashboards and made available to data scientists for analysis using AI and machine learning.

◉ Business Automation Insights

◉ Business Performance Center

Low-code Automation is a visual approach to building applications using drag-and-drop components. Low-code tools enable business users and developers to create applications without having to write code.

◉ Business Automation Studio

◉ Business Automation Application Designer

◉ Business Automation Application runtime

The plethora of capabilities offered by IBM Cloud Pak for Business Automation makes it important to have a reliable platform that provides scalability, resiliency, and sustainability. Running IBM Cloud Pak for Business Automation on Red Hat OpenShift on IBM Z and LinuxONE helps you automate with ease.

Source: ibm.com

Thursday, 28 October 2021

Urban Institute and IBM help cities measure gentrification


Cities across the United States are increasingly seeing their local communities affected by widespread neighborhood change, whether through blight or the adverse effects of gentrification. While gentrification can often increase the economic value of a neighborhood, the potential threat it poses to local culture, affordability, and demographics has created significant and time-sensitive concerns for city governments across the country. Many attempts to measure neighborhood change up until now have been backwards-looking and rules-based, which can lead governments and community groups to make decisions based on inaccurate and out-of-date information. That’s why IBM partnered with the Washington, D.C.-based nonprofit research organization Urban Institute, which for more than 50 years has led an impressive range of research efforts spanning social, racial, economic, and climate issues at the federal, state, and local levels.

Measuring neighborhood change as or before it occurs is critical for enabling timely policy action to prevent displacement in gentrifying communities, mitigating depopulation and community decline, and encouraging inclusive growth. The Urban Institute team recognized that many previous efforts to measure neighborhood change relied on nationwide administrative datasets, such as the decennial census or American Community Survey (ACS), which are published at considerable time lags. For that reason, the analysis could only be performed after the change happens, and displacement or blight has already occurred. Last year, the Urban Institute worked with experts in the US Department of Housing and Urban Development’s (HUD) Office of Policy Development and Research on a pilot project to assess whether they could use novel real-time HUD USPS address vacancy and Housing Choice Voucher (HCV) data with machine learning methods to accurately now-cast neighborhood change.

Together, the IBM Data Science and AI Elite and Urban Institute team built on that pilot to develop a new method for predicting local neighborhood change from the latest data across multiple sources, using AI. This new approach began by defining four types of neighborhood change: gentrifying, declining, inclusively growing, and unchanging. IBM and Urban Institute then leveraged data from the US Census, Zillow, and the Housing Choice Voucher program to train individual models across eight different metropolitan core based statistical areas, using model explainability techniques to describe the driving factors for gentrification.
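For intuition, a rules-based baseline of the kind the ML models improved on might label a tract with one of the four change types from a couple of indicators. The thresholds, field names, and function below are invented for illustration and are not the Urban Institute's actual definitions.

```python
# Illustrative rules-only baseline for the four neighborhood change types.
# Inputs and thresholds are hypothetical: income_change is a fractional trend
# in tract income/home values; displacement_risk is a 0-1 score.

def label_change(income_change, displacement_risk):
    """Classify neighborhood change from an income trend and a displacement-risk score."""
    if income_change > 0 and displacement_risk > 0.5:
        return "gentrifying"           # rising values with residents priced out
    if income_change > 0:
        return "inclusively growing"   # rising values without displacement
    if income_change < 0:
        return "declining"
    return "unchanging"
```

Fixed rules like these are exactly what lags behind real-time signals; the ML approach instead learns the boundaries from current data such as address vacancies and voucher usage, which is where the precision gains came from.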

The IBM Data Science and AI Elite team is dedicated to empowering organizations with the skills, methods, and tools needed to embrace AI adoption. Their support enabled the teams to surface insights from housing and demographic changes across several metropolitan areas in a collaborative environment, speeding up future analyses in different geographies. The new approach demonstrated a marked improvement over older rules-based techniques in both precision (from 61% to 74%) and accuracy (from 71% to 74%). The results suggest a strong future for the application of data to improving urban development strategies.

The partnership put an emphasis on developing tools that enabled collaborative work and asset production, so that policymakers and community organizations could leverage the resulting approaches and tailor them to their own communities.

IBM Cloud Pak® for Data as a Service was used to easily share assets, such as Jupyter notebooks, between the IBM and Urban Institute teams. During the engagement with Urban Institute, the teams leveraged AutoAI capabilities in Watson Studio to rapidly establish model performance baselines before moving on to more sophisticated approaches. This capability is especially valuable for smaller data science teams looking to automatically build model pipelines and quickly iterate through feasible models and feature selection, which are highly time-consuming tasks in a typical machine learning lifecycle.


Together, this engagement and collaboration aims to empower the field to use publicly available data to provide a near real-time assessment of communities across the country. In addition to providing insights on existing data, the project can help uncover shortcomings in available data, enabling future field studies to fill the gaps more efficiently.

For more details on the results, check out our assets which provide an overview of how the different pieces fit together and how to use them. And if you want to dig deeper into the methods, read our white paper.

IBM is committed to advancing tech-for-good efforts, dedicating IBM tools and skills to work on the toughest societal challenges. IBM is pleased to showcase a powerful example of how social sector organizations can harness the power of data and AI to address society’s most critical challenges and create impact for global communities at scale. IBM’s Data and AI team will continue to help nonprofit organizations accelerate their mission and impact by applying data science and machine learning approaches to social impact use cases.

Interested in learning more? Discover how other organizations are using IBM Cloud Pak for Data to drive impact in their business and the world.

Source: ibm.com

Wednesday, 8 September 2021

Reimagining ocean research with the world’s first autonomous ship


Humans have been exploring the ocean for thousands of years, but now the power of AI can help unlock its mysteries more than ever. To commemorate the 400th anniversary of the Mayflower’s trans-Atlantic voyage, the Mayflower Autonomous Ship (MAS) will repeat the same journey—this time without any people onboard. The world’s first full-size autonomous ship will study uncharted regions of the ocean, and an AI Captain will be at the helm.

From Plymouth, England to Plymouth, Massachusetts, the crewless vessel will use explainable AI models to make accurate navigation decisions. The ship will collect live ocean data, delivering valuable research that can inform policies for climate change and marine conservation. Through IBM technologies, MAS makes all of this possible by advancing three areas vital to a successful mission: talent, trust, and data.

The future of the ocean is at stake

More than 3.5 billion people depend on the ocean as a primary source of food, and ocean-related travel makes up 90% of global trade. Since the 1980s, however, the ocean has absorbed 90% of the excess heat from global warming, endangering life both below and above the seas.

Protecting the ocean starts with understanding more data about its ecosystem, but this undertaking requires massive investment. MAS reduces the need for enormous resources in ocean research by using data and AI to augment human work (talent), navigate safely while meeting maritime regulations (trust), and foster collaboration to develop actionable insights (data).

Talent: Saving time and costs for scientists


A typical ocean research expedition can take six weeks with as many as 100 scientists onboard, yet often only one week is spent on actual research. The rest of the time goes to traveling to and from destinations and, sometimes, managing bad weather and rough seas.

“Traditional research missions can be very expensive, limited in where they can explore and take a long time to collect data,” says IBM researcher Rosie Lickorish, who spent time on RSS James Cook as part of her Master’s in oceanography.

MAS significantly cuts down time and costs for scientists. A solar-powered vessel, it travels independently to collect data in remote and dangerous regions of the ocean. Researchers back on land can download live data and images synced to the cloud, such as whale songs or ocean chemistry detected by an “electronic tongue” called HyperTaste.

“With AI-powered sensors onboard that can analyze data as it’s collected, scientists can access more meaningful insights at greater speed,” says Lickorish. “The cost of data for our experts is low, in time as well as money.”

Trust: Navigating accurately with explainable AI


A combination of technologies helps MAS travel with precision: a vision and radar system scans the ocean and delivers data at the edge; an operational decision manager (ODM) enforces collision regulations; a decision optimization engine recommends next best actions; and a “watch dog” system detects and fixes problems.

This entire system makes the AI Captain intelligent, allowing it to make trusted navigational decisions driven by explainable AI. Rules-based decision logic in ODM validates and corrects the AI Captain’s actions. A log tracks exactly which initial conditions were fed into ODM, which path it took through the decision forest, and which outcome was reached. This makes debugging and analyzing the AI Captain’s behavior vastly easier than with the “black box” AI systems that are common today.
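The article doesn’t publish the actual ODM rules, but the pattern it describes—validate a proposed action against explicit rules while logging the inputs, the rules fired, and the outcome—can be sketched in a few lines. Everything below (rule names, thresholds, the 30-degree starboard turn) is hypothetical and purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionLog:
    """Audit trail: the inputs fed in, the rules that fired, and the outcome reached."""
    inputs: dict
    rules_fired: list = field(default_factory=list)
    outcome: str = ""

def validate_action(proposed_heading, conditions):
    """Validate a proposed heading against toy collision rules, logging each step.

    Rule names and thresholds are invented for illustration, not taken from MAS.
    """
    log = DecisionLog(inputs={"heading": proposed_heading, **conditions})
    outcome = proposed_heading
    # Hypothetical rule: an obstacle closer than 0.5 nm forces a starboard turn.
    if conditions.get("obstacle_range_nm", float("inf")) < 0.5:
        log.rules_fired.append("obstacle_avoidance: alter course 30 deg to starboard")
        outcome = (proposed_heading + 30) % 360
    # Hypothetical rule: in heavy weather, hold course but reduce speed.
    if conditions.get("wave_height_m", 0) > 4:
        log.rules_fired.append("heavy_weather: hold course, reduce speed")
    log.outcome = f"heading {outcome}"
    return outcome, log

heading, log = validate_action(90, {"obstacle_range_nm": 0.3, "wave_height_m": 2})
print(log.outcome)      # corrected heading after the obstacle rule fired
print(log.rules_fired)  # exactly which rules fired, for later debugging
```

Because every decision carries its own log, an engineer can replay why the captain turned when it did—the explainability property the article highlights.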

Safety and compliance are key. For example, decision optimization through CPLEX on IBM Cloud Pak for Data, a unified data and AI platform, helps the ship decide what to do next. CPLEX considers constraints such as obstacles (their size, speed, and direction), weather, and the remaining battery power. It then suggests routes to ODM, which validates them or advises another course.
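The real system builds a CPLEX optimization model, which the article doesn’t detail. As a rough, self-contained sketch of the idea—filter candidate routes by hard constraints (battery budget, obstacle clearance), then pick the best feasible one and leave infeasible cases to ODM—consider this toy example, with made-up route data and a made-up safety margin:

```python
def choose_route(routes, battery_kwh):
    """Return the fastest route that fits the battery budget and keeps a safe
    obstacle clearance; return None if no route is feasible (defer to ODM)."""
    SAFETY_MARGIN_NM = 0.5  # hypothetical minimum closest-approach distance
    feasible = [
        r for r in routes
        if r["energy_kwh"] <= battery_kwh
        and r["closest_obstacle_nm"] > SAFETY_MARGIN_NM
    ]
    if not feasible:
        return None  # no safe route within budget: ODM advises another action
    return min(feasible, key=lambda r: r["duration_h"])

routes = [
    {"name": "direct", "duration_h": 10, "energy_kwh": 40, "closest_obstacle_nm": 0.3},
    {"name": "north",  "duration_h": 12, "energy_kwh": 35, "closest_obstacle_nm": 2.0},
    {"name": "south",  "duration_h": 15, "energy_kwh": 55, "closest_obstacle_nm": 3.0},
]
best = choose_route(routes, battery_kwh=50)
print(best["name"])  # "north": direct passes too close to an obstacle, south costs too much energy
```

In the actual system this selection is an optimization over continuous variables solved by CPLEX, and the chosen route is only a *suggestion*—ODM still gets the final word, as the article notes.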

“ODM keeps the AI Captain honest and obeying the ‘rules of the road,’” says Andy Stanford-Clark, IBM Distinguished Engineer and IBM Technical Lead for MAS.

Data: Fostering collaboration for better insights


Once the mission is complete, researchers will use IBM Cloud Pak for Data to store data, apply governance rules to enhance data quality, manage user access and analyze data for actionable insights.

Having all data managed by a unified platform can enable greater collaboration among project teams across ten countries. In addition, organizations and universities around the world can partner with the research teams, forming a grassroots coalition to advance measures that curb pollution and climate change.

Source: ibm.com