Saturday, 30 December 2023

Five machine learning types to know


Machine learning (ML) technologies can drive decision-making in virtually all industries, from healthcare to human resources to finance and in myriad use cases, like computer vision, large language models (LLMs), speech recognition, self-driving cars and more.

However, the growing influence of ML isn’t without complications. The validation and training datasets that undergird ML technology are often aggregated by human beings, and humans are susceptible to bias and prone to error. Even in cases where an ML model isn’t itself biased or faulty, deploying it in the wrong context can produce errors with unintended harmful consequences.

That’s why diversifying enterprise AI and ML usage can prove invaluable to maintaining a competitive edge. Each type and sub-type of ML algorithm has unique benefits and capabilities that teams can leverage for different tasks. Here, we’ll discuss the five major types and their applications.

What is machine learning?


ML is a subset of computer science, data science and artificial intelligence (AI) that enables systems to learn and improve from data without additional programming intervention.

Instead of using explicit instructions for performance optimization, ML models rely on algorithms and statistical models that deploy tasks based on data patterns and inferences. In other words, ML leverages input data to predict outputs, continuously updating outputs as new data becomes available.

On retail websites, for instance, machine learning algorithms influence consumer buying decisions by making recommendations based on purchase history. Many retailers’ e-commerce platforms—including those of IBM, Amazon, Google, Meta and Netflix—rely on artificial neural networks (ANNs) to deliver personalized recommendations. And retailers frequently leverage data from chatbots and virtual assistants, in concert with ML and natural language processing (NLP) technology, to automate users’ shopping experiences.

Machine learning types


Machine learning algorithms fall into five broad categories: supervised learning, unsupervised learning, semi-supervised learning, self-supervised learning and reinforcement learning.

1. Supervised machine learning

Supervised machine learning is a type of machine learning where the model is trained on a labeled dataset (i.e., the target or outcome variable is known). For instance, if data scientists were building a model for tornado forecasting, the input variables might include date, location, temperature, wind flow patterns and more, and the output would be the actual tornado activity recorded for those days.

Supervised learning is commonly used for risk assessment, image recognition, predictive analytics and fraud detection, and comprises several types of algorithms.

  • Regression algorithms—predict output values by identifying linear relationships between real or continuous values (e.g., temperature, salary). Regression algorithms include linear regression, random forest and gradient boosting, as well as other subtypes.
  • Classification algorithms—predict categorical output variables (e.g., “junk” or “not junk”) by labeling pieces of input data. Classification algorithms include logistic regression, k-nearest neighbors and support vector machines (SVMs), among others.
  • Naïve Bayes classifiers—enable classification tasks for large datasets. They’re also part of a family of generative learning algorithms that model the input distribution of a given class or category. Related supervised methods include decision trees, which can accommodate both regression and classification tasks.
  • Neural networks—simulate the way the human brain works, with a huge number of linked processing nodes that can facilitate processes like natural language translation, image recognition, speech recognition and image creation.
  • Random forest algorithms—predict a value or category by combining the results from a number of decision trees.
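As a minimal illustration of the supervised setup described above, the sketch below trains a random forest classifier on synthetic labeled data with scikit-learn; the weather-style features and the rule used to generate the labels are made-up stand-ins for the tornado-forecasting example.

```python
# A minimal supervised-learning sketch using scikit-learn (illustrative only).
# Feature names echo the tornado-forecasting example; the data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic inputs: [temperature, wind_speed, humidity]
X = rng.normal(loc=[20.0, 30.0, 0.6], scale=[8.0, 15.0, 0.2], size=(500, 3))
# Labeled outcome: 1 = tornado recorded, 0 = no tornado (a toy rule, not meteorology)
y = ((X[:, 1] > 40) & (X[:, 2] > 0.7)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                      # learn from labeled examples
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```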

2. Unsupervised machine learning

Unsupervised learning algorithms—like Apriori, Gaussian Mixture Models (GMMs) and principal component analysis (PCA)—draw inferences from unlabeled datasets, facilitating exploratory data analysis and enabling pattern recognition and predictive modeling.

The most common unsupervised learning method is cluster analysis, which uses clustering algorithms to categorize data points according to value similarity (as in customer segmentation or anomaly detection). Association algorithms allow data scientists to identify associations between data objects inside large databases, facilitating data visualization and dimensionality reduction.

  • K-means clustering—assigns data points into K groups, where points closest to a given centroid are clustered under the same category and K represents the number of clusters, chosen to reflect the desired size and granularity of the groupings. K-means clustering is commonly used for market segmentation, document clustering, image segmentation and image compression.
  • Hierarchical clustering—describes a set of clustering techniques, including agglomerative clustering—where data points are initially isolated into groups and then merged iteratively based on similarity until one cluster remains—and divisive clustering—where a single data cluster is divided based on the differences between data points.
  • Probabilistic clustering—helps solve density estimation or “soft” clustering problems by grouping data points based on the likelihood that they belong to a particular distribution.
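To make the clustering idea concrete, here is a minimal K-means sketch with scikit-learn; the customer-spend features are synthetic and purely illustrative.

```python
# A minimal K-means clustering sketch using scikit-learn (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical customer features: [annual_spend, visits_per_month]
customers = np.vstack([
    rng.normal([200, 2], [50, 1], size=(100, 2)),     # occasional shoppers
    rng.normal([1200, 10], [200, 2], size=(100, 2)),  # frequent shoppers
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.cluster_centers_)          # centroid of each segment
print(kmeans.predict([[900.0, 8.0]]))   # assign a new customer to a segment
```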

Unsupervised ML models are often behind the “customers who bought this also bought…” types of recommendation systems.

3. Self-supervised machine learning

Self-supervised learning (SSL) enables models to train themselves on unlabeled data, instead of requiring massive annotated and/or labeled datasets. SSL algorithms, also called predictive or pretext learning algorithms, learn one part of the input from another part, automatically generating labels and transforming unsupervised problems into supervised ones. These algorithms are especially useful for jobs like computer vision and NLP, where the volume of labeled training data needed to train models can be exceptionally large (sometimes prohibitively so).
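As a loose sketch of the pretext-task idea (not a production SSL method), the example below generates its own labels from unlabeled data by masking one feature and training a model to predict it from the others.

```python
# A loose self-supervised sketch: labels are generated from the data itself by
# masking one feature (a "pretext task"), so no human annotation is required.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Unlabeled data: rows of correlated measurements, with no human-provided targets
latent = rng.normal(size=(1000, 1))
data = np.hstack([latent + 0.1 * rng.normal(size=(1000, 1)) for _ in range(4)])

# Pretext task: hide the last column and predict it from the remaining columns
inputs, pretext_labels = data[:, :-1], data[:, -1]

model = Ridge().fit(inputs, pretext_labels)
print("pretext-task R^2:", model.score(inputs, pretext_labels))
```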

4. Reinforcement learning

Reinforcement learning is a type of dynamic programming that trains algorithms using a system of reward and punishment; a related technique, reinforcement learning from human feedback (RLHF), uses human preferences as the reward signal. To deploy reinforcement learning, an agent takes actions in a specific environment to reach a predetermined goal. The agent is rewarded or penalized for its actions based on an established metric (typically points), encouraging the agent to continue good practices and discard bad ones. With repetition, the agent learns the best strategies.

Reinforcement learning algorithms are common in video game development and are frequently used to teach robots how to replicate human tasks.
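A compact way to see the reward-and-penalty loop is tabular Q-learning on a toy corridor environment; everything below (environment, rewards, hyperparameters) is illustrative.

```python
# Tabular Q-learning on a toy corridor: the agent starts at position 0 and is
# rewarded only for reaching the rightmost cell. Purely illustrative.
import numpy as np

n_states, n_actions = 6, 2          # actions: 0 = move left, 1 = move right
q_table = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:
        q_vals = q_table[state]
        # Epsilon-greedy with random tie-breaking: mostly exploit, sometimes explore
        if rng.random() < epsilon or q_vals[0] == q_vals[1]:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(q_vals))
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        q_table[state, action] += alpha * (
            reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state

# Learned policy: every non-terminal state should prefer action 1 (move right)
print(np.argmax(q_table, axis=1))
```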

5. Semi-supervised learning

The fifth type of machine learning technique combines supervised and unsupervised learning.

Semi-supervised learning algorithms are trained on a small labeled dataset and a large unlabeled dataset, with the labeled data guiding the learning process for the larger body of unlabeled data. A semi-supervised learning model might use unsupervised learning to identify data clusters and then use supervised learning to label the clusters.
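A minimal sketch of this idea uses scikit-learn’s label spreading, where unlabeled samples are marked with -1 and inherit labels inferred from the few labeled ones.

```python
# A minimal semi-supervised sketch with scikit-learn's LabelSpreading:
# unlabeled samples are marked -1 and take labels from nearby labeled points.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading

X, true_labels = make_blobs(n_samples=300, centers=2, random_state=0)

# Keep labels for only five samples per class; mark the rest as unlabeled (-1)
labels = np.full(300, -1)
for c in (0, 1):
    labels[np.where(true_labels == c)[0][:5]] = c

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, labels)
print("labels inferred for the first 20 points:", model.transduction_[:20])
```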

Generative adversarial networks (GANs)—a deep learning technique that generates unlabeled data by training two neural networks against each other—are an example of semi-supervised machine learning.

Regardless of type, ML models can glean data insights from enterprise data, but their vulnerability to human and data bias makes responsible AI practices an organizational imperative.

Manage a range of machine learning models with watsonx.ai


Nearly everyone, from developers to users to regulators, engages with applications of machine learning at some point, whether they interact directly with AI technology or not. And the adoption of ML technology is only accelerating. The global machine learning market was valued at USD 19 billion in 2022 and is expected to reach USD 188 billion by 2030 (a CAGR of more than 37 percent).

The scale of ML adoption and its growing business impact make understanding AI and ML technologies an ongoing—and vitally important—commitment, requiring vigilant monitoring and timely adjustments as technologies evolve. With IBM® watsonx.ai AI studio, developers can manage ML algorithms and processes with ease.

IBM watsonx.ai—part of the IBM watsonx AI and data platform—combines new generative AI capabilities and a next-generation enterprise studio to help AI builders train, validate, tune and deploy AI models with a fraction of the data, in a fraction of the time. Watsonx.ai offers teams advanced data generation and classification features that help businesses leverage data insights for optimal real-world AI performance.

In the age of data proliferation, AI and machine learning are as integral to day-to-day business operations as they are to tech innovation and business competition. But as new pillars of a modern society, they also represent an opportunity to diversify enterprise IT infrastructures and create technologies that work for the benefit of businesses and the people who depend on them.

Source: ibm.com

Thursday, 28 December 2023

Accelerate release lifecycle with pathway to deploy: Part 2


As enterprises embrace cloud native and everything as code, the journey from code to production has become a critical aspect of delivering value to customers. This process, often referred to as the “pathway to deploy,” encompasses a series of intricate steps and decisions that can significantly impact an organization’s ability to deliver software efficiently, reliably and at scale.

The first post in this series navigates the complexities and uncovers the strategies and target state model for achieving a seamless and effective pathway to deploy.

This post expands on the topic and provides a maturity model and building blocks that help enterprises accelerate their software supply chain lifecycle in the ever-evolving landscape of enterprise cloud-native software development.

Pathway to deploy roadmap


To realize an accelerated pathway to deploy, there are several moving parts and stakeholders that must come together. We recommend a 4-stage roadmap for implementation, as shown in the figure below.

[Figure: 4-stage pathway to deploy roadmap]

Stage 1: Development automation

Infrastructure automation (IaC) and pipeline automation are self-contained within the development team, which makes automation a great place to start. In this stage, the focus is on building an enterprise catalog of continuous integration, deployment and testing (CI/CD/CT) and Ops patterns with the necessary tooling integrations to automate core development and testing activities. Given enterprise complexity, the most difficult part of this stage is the automation of testing capabilities (wherein test data preparation and execution of test cases across multiple systems is mostly semi-automated). Overall, the Cloud Capability Centre (CCC), or an equivalent core team, plays a significant role in driving change with application and platform teams.

Stage 2: Institutionalize pattern-driven model

CCC (or its equivalent) works with the architecture board to establish a suite of repeatable patterns (including atomic patterns representing individual cloud services, as well as composite application patterns comprising multiple cloud services). The architecture review process (along with other related review processes) is modified to institutionalize pattern-centric architecture representations, with a backlog established for different groups (such as platform engineering and CCC) to build these patterns as code. This helps adoption and acceleration. Over time, the applications being represented appear as a set of patterns, which standardizes development models across the board. In addition, teams such as business continuity, resiliency and security will leverage those patterns (for example, highly available multi-region architectures) to recognize and accelerate approval gates with a standardized approach. The key to this alignment is the co-creation of these patterns between participating organizations.

Stage 3: Self-service and cross-functional integration

Enterprises have many organizations that want to see that cloud applications follow their guidance and best practices. This stage focuses on integrating cross-functional teams (such as security, compliance and FinOps) through automation, tooling, codified patterns or self-service options. This builds on the earlier stages to emphasize meaningful participation between teams. The key aspects of this stage are to:

  • Build and align high-availability patterns with resiliency teams where reviews get accelerated, demonstrating adherence to these patterns.
  • Codify security and compliance requirements into the patterns and guardrail them on the platform as a set of policies.
  • Address validation by integrating tooling, such as vulnerability scans, policy check tools (like CloudFormation Guard for AWS) and container security, with the pipelines following shift-left principles.
  • Enlist enterprise records teams to study a set of data classification and retention patterns and enlist FinOps teams to assess for appropriate tagging and quota adherence.
  • Build AuthN/AuthZ integration patterns that abstract nuances and standardize authentication and authorization of applications, data and services.
  • Automate firewall changes by generating resource files from IaC execution and importing them into firewall systems, as described here.
  • Build a platform engineering enterprise catalog offering multiple self-service capabilities.

Stage 4: Automated pathway to deploy

This stage focuses on decentralization and decoupling of various enterprise groups while simultaneously integrating them through automation and DevSecOps. One example is the automation of change management processes, including automated release notes generation, where the system autonomously constructs comprehensive change review checklists by aggregating data from multiple interconnected systems. This results in trust, efficiency and accuracy in reviews. This holistic approach represents a significant leap in operational efficiency and risk mitigation for the enterprise.
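As a simplified illustration of that kind of automation (a sketch, not any specific enterprise’s implementation), the snippet below drafts release notes by aggregating git commit subjects between two hypothetical release tags.

```python
# A simplified sketch of automated release-notes generation (illustrative only):
# aggregate git commit subjects between two tags and group them by a
# conventional-commit-style prefix. The tag names below are hypothetical.
import subprocess
from collections import defaultdict

def draft_release_notes(previous_tag: str, current_tag: str) -> str:
    subjects = subprocess.run(
        ["git", "log", f"{previous_tag}..{current_tag}", "--pretty=format:%s"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    grouped = defaultdict(list)
    for subject in subjects:
        prefix = subject.split(":", 1)[0].lower() if ":" in subject else "other"
        grouped[prefix].append(subject)

    lines = [f"Release notes for {current_tag}"]
    for section in ("feat", "fix", "chore", "other"):
        for subject in grouped.get(section, []):
            lines.append(f"- [{section}] {subject}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(draft_release_notes("v1.4.0", "v1.5.0"))  # hypothetical tags
```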

Pathway to deploy: Building blocks of the cloud-native model


Let’s explore a few use cases that showcase pathway to deploy acceleration.

Use case 1: Persona-centric IaC codification

Persona- and patterns-based IaC codification can accelerate both development and review phases. The figure below represents multiple stakeholders in an enterprise who have different concerns and requirements for cloud native workloads.

[Figure: enterprise stakeholder personas and their concerns for cloud-native workloads]

It takes a lot of development time for product teams to manually code for each of these concerns, not to mention the time it takes for stakeholders to manually review each area. Codifying these concerns in hardened discrete or composite patterns provides product teams with the right bootstrap code and acceleration, creating stakeholder trust and review efficiency.

Use case 2: Shift-left security and policies validation

Automate security, compliance and other policies for infrastructure as part of the CI/CD pipeline. This helps ensure that infrastructure is aligned to enterprise policies before it is deployed. There are multiple approaches provided by cloud providers and open-source tooling that can accomplish this (including Checkov, CloudFormation Guard and cfn-nag). Typically, security teams codify policy validation rules, and product teams integrate policy validation within CI/CD/CT pipelines before the infrastructure is provisioned to the cloud environment.
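Purpose-built tools such as Checkov and CloudFormation Guard are the usual choice here; the simplified sketch below only illustrates the underlying idea, failing a pipeline stage when a CloudFormation template defines an S3 bucket without encryption. The template path and the single rule are assumptions for illustration.

```python
# A simplified shift-left policy check (illustrative only; tools like Checkov or
# CloudFormation Guard cover this far more thoroughly). The rule fails the stage
# if any S3 bucket in a JSON CloudFormation template lacks encryption settings.
import json
import sys

def find_unencrypted_buckets(template_path: str) -> list[str]:
    with open(template_path) as f:
        template = json.load(f)          # assumes a JSON-format template
    violations = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") == "AWS::S3::Bucket":
            props = resource.get("Properties", {})
            if "BucketEncryption" not in props:
                violations.append(name)
    return violations

if __name__ == "__main__":
    failures = find_unencrypted_buckets("template.json")   # hypothetical path
    if failures:
        print("Policy violation - unencrypted S3 buckets:", failures)
        sys.exit(1)                      # fail the CI/CD/CT stage
    print("All S3 buckets define encryption.")
```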

Use case 3: Automated compliance evidence collection for reviews

Cross-functional cloud platform, security and compliance teams build automation that enables evidence collection, accelerating security and compliance reviews. This typically requires leveraging cloud APIs to query information from deployed cloud resources, as well as building compliance evidence and posture. Such capabilities allow product teams to run this automation in a self-service model or via DevOps pipelines and identify compliance posture, while capturing review evidence automatically. The maturity level increases when evidence capture is executed automatically and the review runs in a completely hands-free mode.
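As one hypothetical example of such automation (not IBM’s tooling), a minimal evidence-collection script using the AWS SDK (boto3) could record each S3 bucket’s encryption posture as timestamped review evidence.

```python
# A minimal compliance-evidence sketch using boto3 (illustrative only): record
# each S3 bucket's default-encryption posture as timestamped evidence.
import json
from datetime import datetime, timezone

import boto3
from botocore.exceptions import ClientError

def collect_s3_encryption_evidence() -> list[dict]:
    s3 = boto3.client("s3")
    evidence = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
            encrypted = True
        except ClientError:
            encrypted = False            # no default encryption configured
        evidence.append({
            "resource": name,
            "control": "s3-default-encryption",
            "compliant": encrypted,
            "collected_at": datetime.now(timezone.utc).isoformat(),
        })
    return evidence

if __name__ == "__main__":
    print(json.dumps(collect_s3_encryption_evidence(), indent=2))
```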

Use case 4: Integrated patterns and pipeline toolkit

Composite cloud-native patterns like AWS Active-Active Serverless APIs require several discrete patterns to come together. These patterns include:

  1. Cloud services, such as Route53, API Gateway, Lambda, DynamoDB, IAM, CodeDeploy, CodeBuild, CodePipeline and CodeCommit.
  2. Nonfunctional requirements, such as AuthN/AuthZ, multi-region active-active deployment, security at rest and in transit, tracing, logging, monitoring, dashboards, alerting, failover automation and health checks.
  3. Integrated enterprise tooling, including code quality, SAST, DAST, alerting, test management, tracking and planning.

A one-click solution would allow product teams to select the right pattern, which creates the necessary bootstrap code by integrating the codified patterns described in the prior use cases.

Pathways to deploy: Delivery approach


For a delivery model to realize pathway to deploy, the CCC (or equivalent) must work with multiple organization groups as shown in the figure below.

[Figure: Cloud Capability Centre collaboration with organization groups in the pathway to deploy delivery model]

The pathway to deploy delivery model comprises the following steps:

  1. Define the entire pathway to deploy process through a set of application lifecycle phases, activities, deliverables and dependent groups involved.
  2. Define and stand up multiple squads focusing on different aspects of the pathway to deploy.
  3. Plan for a flex model within the squads to bring in supporting groups as required.
  4. Build a backlog for each of the Cloud Capability Center squads and initiate capability development.
  5. Align to the 4-stage maturity model so enterprises can track maturity.
  6. Establish product teams and relevant stakeholders as part of the backlog refinement and prioritization.
  7. Maintain a continuous focus on automation adoption. The success of pathway to deploy depends on building the automation and getting it adopted.
  8. Build central knowledge management and planning management around pathway to deploy.
  9. Make it easier for product teams to incorporate these activities into their delivery plans (with project tracking and agile collaboration tools such as Jira).
  10. Build a measurement system for pathway to deploy phase gate SLAs and continuously track SLA improvement (as pathway to deploy capabilities mature over a period).

Considering why cloud transformation may not yield full value, and identifying release lifecycle acceleration as a key challenge, narrows the focus to pathway to deploy. Pathway to deploy can be a common vehicle that enables multiple groups to accelerate the entire software supply chain lifecycle beyond the development and testing lifecycle acceleration that exists today. A 4-stage roadmap has been defined in which the initial stages focus on DevSecOps and patterns adoption and the advanced stages mature towards a product engineering culture. We recommend that product teams collaborate with participating enterprise groups in a decentralized manner to leverage automation and self-service. The maturity model encourages organizations to incrementally scale by starting small, and our delivery approach brings predictable outcomes to this complex journey.

Source: ibm.com

Tuesday, 26 December 2023

Accelerate release lifecycle with pathway to deploy: Part 1


For many enterprises, the journey to cloud reduces technical debt costs and meets CapEx-to-OpEx objectives. This includes rearchitecting to microservices, lift-and-shift, replatforming, refactoring, replacing and more. As practices like DevOps, cloud native, serverless and site reliability engineering (SRE) mature, the focus is shifting toward significant levels of automation, speed, agility and business alignment with IT (which helps enterprise IT transform into engineering organizations).

Many enterprises struggle to derive real value from their cloud journeys and may continue to overspend. Multiple analysts have reported that over 90% of enterprises continue to overspend in the cloud, often without realizing substantial returns.

The true essence of value emerges when business and IT can collaborate to create new capabilities at a high speed, resulting in greater developer productivity and speed to market. Those objectives require a target operating model. Rapidly deploying applications to cloud requires not just development acceleration with continuous integration, deployment and testing (CI/CD/CT); it also requires supply chain lifecycle acceleration, which involves multiple other groups such as governance, risk and compliance (GRC), change management, operations, resiliency and reliability. Enterprises are continuously looking for ways to empower product teams to move from concept to deployment faster than ever.

Automation-first and DevSecOps-led approach


Enterprises often retrofit cloud transformation elements within existing application supply chain processes rather than considering new lifecycle and delivery models that are suited for speed and scale. The enterprises that reimagine the application lifecycle through an automation-first approach encourage an engineering-driven product lifecycle acceleration that realizes the potential of cloud transformation. Examples include:

  • Pattern-based architecture that standardizes the architecture and design process (while teams have the autonomy to choose patterns and technology or co-create new patterns).
  • Patterns that address security and compliance dimensions, ensuring traceability to these requirements.
  • Patterns-as-code that help codify multiple cross-cutting concerns (this also promotes an inner source model of pattern maturity and drives reusability); see the sketch after this list.
  • DevOps pipeline-driven activities that can be utilized across the lifecycle.
  • Automatic generation of specific data needed for security and compliance reviews.
  • Operational-readiness reviews with limited or no manual intervention.
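To make the patterns-as-code bullet concrete, here is a minimal sketch assuming the AWS CDK v2 in Python; the construct name and the specific controls are illustrative, not a prescribed IBM pattern.

```python
# A minimal "pattern as code" sketch, assuming AWS CDK v2 in Python (illustrative).
# The construct bakes common enterprise guardrails into a reusable building block.
from aws_cdk import RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct

class HardenedBucket(Construct):
    """Reusable storage pattern: encryption, TLS-only access, versioning, no public access."""

    def __init__(self, scope: Construct, construct_id: str) -> None:
        super().__init__(scope, construct_id)
        self.bucket = s3.Bucket(
            self, "Bucket",
            encryption=s3.BucketEncryption.S3_MANAGED,        # encryption at rest
            enforce_ssl=True,                                 # encryption in transit
            versioned=True,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            removal_policy=RemovalPolicy.RETAIN,
        )
```

Product teams would instantiate a construct like this in their stacks instead of hand-configuring raw resources, which is what keeps the associated reviews repeatable and predictable.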

As enterprises embrace cloud native and everything as code, the journey from code to production has become a critical aspect of delivering value to customers. This intricate process, often referred to as the “pathway to deploy,” encompasses a series of steps and decisions that can significantly impact an organization’s ability to deliver software efficiently, reliably and at scale. From architecture, design, code development and testing to deployment and monitoring, each stage in the pathway to deploy presents unique challenges and opportunities. As you navigate the complexities that exist today, IBM® aims to help you uncover the strategies and target state model for achieving a seamless and effective pathway to deploy.

We will also explore the best practices, tools and methodologies that empower organizations to streamline their software delivery pipelines, reduce time-to-market, enhance software quality and ensure robust operations in production environments.

Pathway to deploy: Current view and challenges


The diagram below summarizes a view of the enterprise software development lifecycle (SDLC) with typical gates. While the flow is self-explanatory, the key is to understand that there are several aspects of the software supply chain process that make this a combination of waterfall and intermittent agile models. The challenge is that the timeline for build-deploy of an application (or an iteration of it) is impacted by several first- and last-mile activities that typically remain manual.

[Figure: current enterprise SDLC view with typical gates]

The key challenges with the traditional nature of SDLC are:

1. Pre-development wait time of 4-8 weeks within the architecture and design phase before development can start. This is caused by:

  • Multiple first-mile reviews to ensure no adverse business impacts, including privacy concerns, data classification, business continuity and regulatory compliance (and most of these are manual).
  • Enterprise-wide SDLC processes that remain waterfall or semi-agile, requiring sequential execution, despite agile principles in development cycles (for example, environment provisioning only after full design approval).
  • Applications that are perceived as “unique” are subject to deep scrutiny and interventions with limited opportunities for acceleration.
  • Challenges in institutionalizing patterns-based architecture and development due to the lack of a cohesive effort and a change agent driving such standardization.
  • A security culture that affects the speed of development, with adherence to security controls and guidelines often involving manual or semi-manual processes.

2. Development wait time to provision environment and CI/CD/CT tooling integration due to:

  • Manual or semi-automated environment provisioning.
  • Patterns (on paper) only as prescriptive guidance.
  • Fragmented DevOps tooling that requires effort to stitch together.

3. Post-development (last-mile) wait time before go-live is easily 6–8 weeks or more due to:

  • Manual evidence collection to get through security and compliance reviews beyond standard SAST/SCA/DAST (such as security configuration, day 2 controls, tagging and more).
  • Manual evidence collection for operation and resiliency reviews (such as supporting cloud operations and business continuity).
  • Service transition reviews to support IT service and incident management and resolution.

Pathway to deploy: Target state


The pathway to deploy target state requires a streamlined and efficient process that minimizes bottlenecks and accelerates software supply chain transformation. In this ideal state, the pathway to deploy is characterized by a seamless integration of design (first mile), as well as development, testing, platform engineering and deployment stages (last mile), following agile and DevOps principles. This helps accelerate deployment of code changes swiftly and automatically with necessary (automation-driven) validations to production environments.

IBM’s vision of target state prioritizes security and compliance by integrating security checks and compliance validation into the CI/CD/CT pipeline, allowing for early detection and resolution of vulnerabilities. This vision emphasizes collaboration between development, operations, reliability and security teams through a shared responsibility model. It also establishes continuous monitoring and feedback loops to gather insights for further improvement. Ultimately, the target state aims to deliver software updates and new features to end users rapidly, with minimal manual intervention and with a high degree of confidence for all enterprise stakeholders.

The diagram below depicts a potential target view of pathway to deploy that helps embrace the cloud-native SDLC model.

[Figure: target view of the pathway to deploy in a cloud-native SDLC model]

Key elements of the cloud-native SDLC model include:

  • Pattern-driven architecture and design institutionalized across the enterprise.
  • Patterns that incorporate key requirements of security, compliance, resiliency and other enterprise policies (as code).
  • Security and compliance reviews that are accelerated because patterns are used to describe the solution.
  • Core development, including the creation of environments, pipelines and services configuration (which is driven through platform engineering enterprise catalog).
  • CI/CD/CT pipeline that builds linkages to all activities across pathway to deploy lifecycle.
  • Platform engineering that builds, configures and manages platforms and services with all enterprise policies (such as encryption) embedded as platform policies.
  • Security and compliance tooling (for example, vulnerability scans or policy checks) and automation that is integrated to the pipelines or available as self-service.
  • Generation of a high degree of data (from logs, tool outputs and code scan insights) for several reviews without manual intervention.
  • Traceability from backlog to deployment release notes and change impact.
  • Interventions only by exceptions.

Pathway to deploy drives acceleration through clarity, accountability and traceability


By defining a structured pathway to deploy, organizations can standardize the steps involved in supply chain lifecycle, ensuring each phase is traceable and auditable. This allows stakeholders to monitor progress through distinct stages, from initial design to deployment, providing real-time visibility into the program’s status. Assigning ownership at each stage of the pathway to deploy ensures that team members are accountable for their deliverables, making it easier to track contributions and changes, as well as accelerating issue resolution with the right level of intervention. Traceability through the pathway to deploy provides data-driven insights, helping to refine processes and enhance efficiency in future programs. A well-documented pathway to deploy supports compliance with industry regulations and simplifies reporting, as each part of the process is clearly recorded and retrievable.

Source: ibm.com

Saturday, 23 December 2023

Unlocking success: Key components of a winning customer experience strategy


Customer experience strategy (CX strategy) is how an organization optimizes customer engagements to create happy customers, drive customer loyalty and help recruit new customers.

Providing a better customer experience takes the entire customer journey and every customer touchpoint into consideration. It identifies new customers through awareness, consideration and purchase, aims to retain customers and drives word-of-mouth through the post-purchase phase.

Customer-centric organizations prioritize great customer experience as an important piece of their brand identity. Meeting customer expectations requires discipline and compassion across the entire customer journey map.

Creating happy customers should be a major business goal of every organization, as those types of customers are more likely to become repeat purchasers and make the effort to recommend products to their friends and family through word-of-mouth. Doing so increases the potential for profitability and customer retention.

Seven hallmarks of a successful customer experience strategy


Customer experience requires the right strategy and dedicated actions to drive success. Here are seven components every organization should include in their CX strategy:

1. Invest in the right technology

Automation and chatbots are two technologies that are revolutionizing customer experience, especially with the increasing rise of artificial intelligence (AI) that adds more sophistication to these tools. As more customers look to solve their issues online or through self-service, organizations that fail to use advanced technologies to better serve customer needs will fail at customer experience. In a recent IBV CEO Guide to Generative AI for Customer Service study, CEOs identified customer service as the number one priority for incorporating generative AI investment.

2. Address pain points

Meeting customer needs is a key component of customer experience and the best way to avoid customer churn. While not every customer who is dissatisfied with a product will voice their bad experiences, those who do voice them expect the organization to address and solve them quickly. When customer support teams pay attention to issues raised by loyal customers, they are more likely to retain those customers and make them strong advocates for the brand.

3. Create personas

Every customer is different. One way organizations can better personalize their outreach is to group similar-minded customers into persona groups so they can better target the right messaging to each group. Examples of personas include price-sensitive customers, who are likely to switch brands if the organization’s prices rise, and early adopters, who buy the latest technologies as soon as they are available.

One way to create personas is to track customer interactions such as purchases, time of purchase, and types of purchases in a customer relationship management (CRM) database. CRMs help organizations better understand their customers and find ways to provide more value. A comprehensive and up-to-date CRM can identify whether a particular customer is ready to buy or whether a highly valuable customer may be in danger of switching brands.

4. Measure everything

Establishing and tracking key performance indicators (KPIs) is an important component of customer experience management. Organizations have several powerful ways to solicit and analyze customer feedback. It’s important to collect a variety of customer experience metrics to understand the user experience and track progress on key organizational goals. Here are some of the most important metrics to track:

  • Net promoter score (NPS): This customer data point identifies how likely a customer would be to recommend an organization’s products to friends and family. It is a good representation of how satisfied customers are, given that they would go out of their way to talk about the product with people in their orbit.
  • Customer satisfaction score (CSAT): This score focuses specifically on how happy a customer is with an organization’s products. It is often expressed as a percentage from 0-100, which enables organizations to track their improvements (or decline) over time.
  • Customer effort score: This is a metric that measures how much work it takes for a customer to interact with a business. This is effectively a customer support metric, which identifies how well that service is helping customers solve issues and get the most out of their products. Examples of issues that could affect a customer effort score are poor response times to customer questions, difficulty reaching technical support or long stock-outs that require a customer to routinely check back to see if the product they want is available.
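As a worked illustration of how two of these metrics are commonly calculated (the survey responses below are made up):

```python
# Illustrative computation of NPS and CSAT from made-up survey responses.
def net_promoter_score(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), on 0-10 survey scores."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def csat(scores, satisfied_threshold=4):
    """CSAT = % of respondents rating 4 or 5 on a 1-5 satisfaction scale."""
    satisfied = sum(1 for s in scores if s >= satisfied_threshold)
    return 100 * satisfied / len(scores)

nps_responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
csat_responses = [5, 4, 4, 3, 5, 2, 5, 4]

print(f"NPS: {net_promoter_score(nps_responses):+.0f}")   # prints +30 for this sample
print(f"CSAT: {csat(csat_responses):.0f}%")               # prints 75% for this sample
```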

5. Prioritize employee experience

Providing a good customer experience requires input and dedication from multiple stakeholders. Prioritizing employee experience—making sure employees are happy, well-trained and fairly compensated—is an important component of any customer experience strategy. Team members who are treated well and trained extensively are more likely to provide excellent customer care and go out of their way to serve their customers’ needs.

6. Embrace omnichannel customer relationships

Organizations have a variety of channels in which they can reach customers and build stronger relationships, and it is important to embrace this omnichannel customer service approach. For instance, consumers are increasingly spending time on digital experiences such as social media and mobile apps. That provides an opportunity for organizations to learn more about what they want and respond directly to their questions or complaints. Some organizations also create knowledge bases where customers can search for answers and solve their issues without needing to interact directly with a human worker.

7. Invest in customer success

Leading organizations realize that the post-purchase period can be just as important for the overall customer experience as the awareness and consideration phases. Customers who regret their purchases or have unsolved issues are less likely to become repeat customers. They also are less likely to recommend or promote those products and companies to their networks. That’s why organizations are increasingly investing in customer success teams that work directly with customers post-purchase to ensure they are maximizing the value they get out of their purchases.

Customer experience represents the pathway to repeat engagement


Providing a positive customer experience can become a competitive advantage, especially as customers have grown more likely to switch brands since the height of the COVID-19 pandemic.

IBM has been helping enterprises apply trusted AI in this space for more than a decade, and generative AI has further potential to significantly transform customer and field service with the ability to understand complex inquiries and generate more human-like, conversational responses.

At IBM, we put customer experience strategy at the center of your business, helping you position it as a competitive advantage. With deep expertise in customer journey mapping and design, platform implementation, and data and AI consulting, we help you harness best-in-class technologies to drive transformation across the customer lifecycle. Our end-to-end consulting solutions span marketing, commerce, sales and service.

Source: ibm.com

Thursday, 21 December 2023

Business strategy examples


A successful business strategy dictates the allocation of resources and outlines how a company will achieve its strategic goals. Whether the organization is focused on developing new products or marketing an existing service to an under-served demographic, having a solid strategy will help an organization realize its long-term goals. Typically, a strategy will be informed by core business objectives and keep key performance indicators (KPIs) in mind. It’s also essential to understand an organization’s market position, as the following business strategy examples will show.

Types of business strategy


Over recent decades, researchers and business leaders have identified a handful of so-called “generic strategies” with broad application across the business landscape. These core business strategies are:

  • Broad cost leadership strategy
  • Broad differentiation strategy
  • Focused differentiation strategy
  • Focused cost leadership strategy

But there are dozens of variations on these core concepts, and an organization may choose to enact certain types of strategies at different points. Good business strategies are carefully considered, but that doesn’t mean they’re static. Successful leaders will routinely review a strategy’s key components and update their plans.

For instance, entrepreneurs looking to increase profits might pursue a cost-cutting strategy, while a business hoping to expand would consider a growth strategy. If customer churn or dissatisfaction is a particular issue, a customer retention strategy would be more appropriate.

For economically healthy companies attempting to move into new markets, a diversification strategy—involving new customers or product lines—or a partnership strategy—involving the acquisition of new companies—might be best.

Still, exploring the core generic strategies can provide insight into how some of the world’s most successful corporations have leveraged market research to create phenomenally profitable roadmaps. Some examples of business strategies that embody these foundational theories are explored below.

Broad cost leadership strategy example: Walmart


When Sam Walton, the founder of Walmart, started his retail career in the 1940s, he had a simple idea: To find less expensive suppliers than those who served his competition and pass those savings on to the customers in his variety stores. Where many business leaders might attempt to profit directly from such favorable margins, Walton decided to pursue an economy of scale, profiting by attracting more customers rather than charging those customers more. In the more than seven decades since, Walmart has become one of the most famous examples of cost leadership strategy, which undercuts competition by offering goods or services at the lowest possible price.

As the company grew, it was able to take advantage of its ubiquity to demand lower prices from suppliers and sell goods for even less over time. Many of these savings have then been passed on to customers shopping in the stores, resulting in progressively cheaper goods. The retailer’s advertisements underscore this fact, encouraging customers to “Save money. Live Better.”

By the early 2000s, Walmart’s cost leadership strategy had been so successful that one-third of Americans were frequent Walmart customers, illustrating how winning the price game can lead to a massively successful bottom line. This has been crucial for the big-box retailer as it increasingly competes with e-commerce giants like Amazon.

Broad differentiation business strategy example: Starbucks


When Starbucks was founded as a small business in 1971, high-end coffee was a niche market in the United States. But Howard Schultz, who would go on to lead the company, believed there was an opportunity to import Italian coffee culture and differentiate the business from competitors like Dunkin’ Donuts.

To gain a competitive advantage over stores offering cheap coffee in fast food-type settings, Schultz opened a series of cozy cafes that encouraged long visits. Though the items sold at Starbucks were more expensive than those of competitors, they were highlighted in marketing campaigns as unique, superior-quality goods. Starbucks also paid careful attention to its supply chain, ensuring its products were ethically sourced and offering specialty drinks that in some geographic locations could be difficult to find. The company’s early focus on talent management for service employees was a major differentiator as well.

Over time, Starbucks also focused heavily on personalization, encouraging customers to create favorite drinks. Later, the company introduced loyalty cards and other advantages for repeat customers to encourage customer retention.

Today, Starbucks stores are ubiquitous across the globe, and the company’s success has made it one of the prime examples of differentiation strategy that undercuts competition by providing a premium product that is significantly more desirable than existing goods.

Focused differentiation strategy example: REI


A focused differentiation strategy—unlike a broad differentiation strategy, which seeks to gain massive market share by providing a premium good—tailors its business plan to a select group of consumers. The outdoor outfitter REI has had significant success in focused differentiation through a series of business decisions and marketing strategies that underscore the values of its target demographic. In REI’s case, product differentiation relies on how the business communicates its core values and provides a unique customer experience.

REI frequently positions itself as an ethical and sustainable outdoor brand: As the company says, it prefers to put “purpose before profits.” Since its inception, the company has underscored initiatives like its co-op membership model and sustainability commitments as a way to distinguish itself from competitors catering to more general audiences. Recently, the brand engaged in a relatively risky marketing strategy that reflects its goal of capturing a specific group of loyal customers.

Starting in 2015, REI closed its stores on Black Friday, the most popular shopping day of the year, and encouraged employees to spend the day outdoors. The initiative was accompanied by a social media campaign to bolster the brand’s reach. REI might sell products at a higher cost than its competitors, and operate fewer than 200 stores, but its business model is based on the idea that a loyal group of customers will find its messages and products relevant enough to pay a premium for goods they could easily find somewhere else.

Focused cost leadership strategy example: Dollar General


Where Walmart’s cost leadership strategy relied on becoming ubiquitous and operating at massive scales, the discount chain Dollar General has captured price-conscious consumers in more specific markets. Rather than trying to enter an entirely new market, the company focused on providing low-cost goods to rural consumers. Its strategy has been to open small stores in areas where big-box stores might not be and offer a complementary pricing strategy that attracts budget-conscious consumers.

This strategy has allowed Dollar General to grow into a smart and efficient operation with a strong target market and relatively low overhead. Typically, the chain leases its stores and keeps them small and bare bones, saving money on real estate and extensive labor costs. Stores also typically stock a smaller number of products targeted to its specific customer base, cutting costs and allowing it precise control of its supply chain process. By spending less to open stores, allocating fewer resources to advertising, and targeting regional cost-conscious customers, the chain expanded successfully into a niche market.

The importance of agility in business strategy


As the previous effective business strategies illustrate, strategic planning is crucial for an organization working to achieve its business goals. A strong sense of where the company should be heading makes decision-making easier, and can guide operations across all business units, from the organization’s corporate-level strategy to its product development plans. At their most effective, business strategies can be utilized on a functional level, meaning every department from finance to human resources is guided by the business’ broader goals.

But not all successful business strategies will conform precisely to the four generic models outlined above. Often, a company will combine aspects of one or more strategies, or pivot as markets and technologies change. This has been particularly true for startups, which often serve a diverse set of stakeholders and may base their value proposition on new technologies. Still, as the above examples show, optimizing a business’ operations relies on thinking critically about how its disparate parts can work together to achieve a singular goal.

Business strategy and IBM


Emerging technology and social forces are creating new customer experiences that result in changing expectations and demands and disrupt business models. IBM Consulting’s professional services for business help organizations navigate an increasingly dynamic, complex and competitive world by aligning transformation with business strategy to create competitive advantage and a clear focus on business impact.

Source: ibm.com

Wednesday, 20 December 2023

5 things to know: Driving innovation with AI and hybrid cloud in the year ahead


As we look ahead to 2024, enterprises around the world are undoubtedly evaluating their progress and creating a growth plan for the year to come. For organizations of all types—and especially those in highly regulated industries such as financial services, government, healthcare and telco—considerations including the rise of generative AI, evolving regulations and data sovereignty laws and ongoing security challenges must be top of mind.

As enterprises look to address these requirements and achieve growth while adopting innovative AI and hybrid cloud technologies, IBM will continue to meet clients wherever they are in their journeys by helping them make workload placement decisions based on resiliency, performance, security, compliance and cost.

Here are five things you need to know about how IBM Cloud is helping clients balance innovation with evolving regulations.

1. Addressing regulatory requirements and evolving data sovereignty laws


As nations enact laws designed to protect data, IBM is focused on helping clients address their local requirements while driving innovation at the same time. For example, IBM Cloud offers a variety of capabilities and services that can help clients address the foundational pillars of sovereign cloud including data sovereignty, operational sovereignty and digital sovereignty.

IBM Cloud Hyper Protect Crypto Services offers “keep your own key” encryption capabilities, designed to allow clients exclusive key control and help address privacy needs. With these capabilities, we aim to enable clients to control who has access to their data while still leveraging the benefits of the cloud. Additionally, IBM Cloud Satellite is designed to help clients meet their data residency requirements with the ability to run cloud services and applications across environments and wherever data is located: on-premises, in the cloud, or at the edge.

2. Securely balancing the influx of data from AI with innovation


At the same time, we’re helping clients embrace AI securely to fuel business transformation. Earlier this year, we unveiled watsonx running on IBM Cloud to help enterprises train, tune and deploy AI models, while scaling workloads. We also recently announced IBM and VMware are pairing VMware Cloud Foundation, Red Hat OpenShift and the IBM watsonx AI and data platform. This combination will enable enterprises to access IBM watsonx in private, on-premises Infrastructure as a Service (IaaS) environments as well as hybrid cloud with watsonx SaaS offerings on IBM Cloud. Additionally, clients can optionally use IBM Cloud Satellite to automate and simplify deployment and day two operations of their VMware Private AI with OpenShift environments.

AI can be a fundamental source of competitive advantage, helping organizations meet challenges and uncover opportunities for now and in the future. For example, CrushBank’s AI Knowledge Management platform leverages watsonx to transform IT support, helping the company to streamline help desk operations with AI by arming its IT staff with better information. According to CrushBank, this has led to improved productivity and a decrease in time to resolution by 45%, ultimately enhancing the customer experience.

We understand executing AI at scale can be a difficult process—especially when one considers the challenges with processing the data required for AI models—and are committed to continuously delivering solutions to help clients leverage AI with speed and at scale. With IBM Cloud Object Storage, we are helping clients manage data files as highly resilient and durable objects—which can help enterprises to scale AI workloads.

3. Managing security and compliance challenges across hybrid, multicloud environments


As enterprises confront new threats across their supply chain, including those potentially posed by AI, a complex web of global regulations has struggled to keep pace. In our view, an effective cybersecurity strategy, across the supply chain, requires adoption of consistent standards, workload and data protection, and governance, aided by robust technology solutions. IBM’s continued commitment to offer innovative security solutions that can help clients face contemporary challenges is the foundation for our recent progress. This includes the recent expansion of the IBM Cloud Security and Compliance Center—a suite of modernized cloud security and compliance solutions—to help enterprises mitigate risk and protect data across their hybrid, multicloud environments and workloads. The new IBM Cloud Security and Compliance Center Data Security Broker solution provides a transparent layer of data encryption with format preserving encryption and anonymization technology to protect sensitive data used in business applications and AI workloads.

Additionally, the IBM Cloud Security and Compliance Center is designed to deliver enhanced cloud security posture management (CSPM), workload protection (CWPP), and infrastructure entitlement management (CIEM) to help protect hybrid, multicloud environments and workloads. The workload protection capabilities aim to prioritize vulnerability management to support quick identification and remediation of critical vulnerabilities. In fact, we have innovated our Scanning Engine for the detection of vulnerabilities in Kubernetes, OpenShift, and Linux standalone hosts, designed to deliver enhanced performance with greater accuracy, as well as facilitate prioritization on critical issues.

To complement our existing suite of tools, we also introduced an AI ICT Guardrails profile in the IBM Cloud Security and Compliance Center, which provides a predefined list of infrastructure and related data controls that are required to handle AI and generative AI workloads. With these guardrails, we are working to help enterprises seamlessly manage security and compliance capabilities as they scale their AI workloads.

4. Embracing third and fourth parties while reducing risk


Through our long history of working closely with clients in highly regulated industries, we fundamentally understand the challenges they face and have built our cloud for regulated industries to enable organizations across financial services, government, healthcare and more to drive secured innovation. In fact, we are honored to be recognized by Gartner® in their 2023 Gartner® Magic Quadrant™ for Strategic Cloud Platform Services (SCPS). We feel that IBM’s positioning in this report is a strong acknowledgement of our strategy.

We designed our cloud for regulated industries to allow clients to host applications and workloads in the cloud with confidence, while addressing third- and fourth-party risk throughout their supply chain. Only ISVs and SaaS providers that are validated to be in alignment with the IBM Cloud Framework for Financial Services—an industry-defined framework of security and streamlined compliance controls—are eligible to deliver offerings on the IBM Cloud for Financial Services.

The Framework is informed by IBM’s Financial Services Cloud Council which brings together CIOs, CTOs, CISOs and Compliance and Risk Officers to drive cloud adoption for mission-critical workloads in financial services. The Council has grown to more than 160 members from over 90 financial institutions including Comerica Bank, Westpac, BNP Paribas and CaixaBank who are all working together to inform the controls that are required to operate securely with bank-sensitive data in the cloud. These controls are not IBM’s—they are the industry’s collective controls and we’ve made them available on multiple clouds (public or private) with IBM Cloud Satellite.

5. Modernizing processes as organizations embrace digital transformation


Across industries, we are seeing organizations move further on their digital transformation journeys. For example, in financial services, the payments ecosystem is at an inflection point for transformation. We believe now is the time for change, and IBM continues to work with its partner community to drive transformation. Temenos Payments Hub recently became the first dedicated payments solution to deliver innovative payments capabilities on the IBM Cloud for Financial Services, the latest initiative in our long history of helping clients transform together. With the Temenos Payments Hub now on IBM Cloud for Financial Services, the solution is available across IBM’s hybrid cloud infrastructure, running on Red Hat OpenShift with IBM Power and LinuxONE. Additionally, as organizations look to modernize their trade finance journeys, we have leveraged the breadth of IBM’s technology and consulting capabilities to develop a platform for the industry. We recently introduced the IBM Connected Trade Platform, designed to power the digitization of trade and supply chain financing and help organizations transition from a fragmented to a data-driven supply chain.

Source: ibm.com

Tuesday, 19 December 2023


How NS1 ensures seamless DNS migrations


Migrating a mission-critical system like authoritative DNS is always going to involve some amount of risk. Whenever you’re flipping a switch from one infrastructure provider to another, the possibility of downtime is always lurking in the background.

Unfortunately, many network admins use this risk potential as a reason to continue using an authoritative DNS service that no longer adds business value. That fear of the unknown often seems worse than enduring the day-to-day indignities of the DNS platforms they currently use.

This is why IBM NS1 Connect pays so much attention to the customer migration process. We pride ourselves on our DNS expertise and our hands-on approach to customer service. Over time, we’ve developed a tried and true migration process that enables a seamless changeover without downtime.

We also use the onboarding process to optimize your authoritative DNS, so you hit the ground running. We want your business to feel comfortable enough with NS1 Connect functionality to drive the results you’re looking for: better network performance, improved reliability and reduced cost of delivery.

It’s worth mentioning that we use this process for complete changeovers and in situations where NS1 is added as a second provider. Confirming the proper division of labor as well as the validity of failover mechanisms is an important consideration when you’re using more than one authoritative DNS provider. That’s why we treat it like a de facto migration.

The NS1 migration process


Customers often ask us how long their migration will take. They want to dig into the details of how we’re going to make the transition seamless. Our primary goal is to do no harm, but close behind that we want to make the change as quick and easy as possible.

We’ve broken our authoritative DNS migration process into six distinct stages. The timing of these stages varies quite a bit, depending on the scale and complexity of a customer’s DNS architecture and business requirements. We’ve done “emergency” migrations in as little as a day, but we generally prefer a few weeks so customers have time to test various functions as thoroughly as they’d like.

Step one: Kickoff and discovery


In this stage, we meet with your DNS team and scope out the migration project. We’ll ask about your business requirements and how you measure success. We’ll also map out your DNS infrastructure and ask about any business-specific quirks or custom configurations. By the end of this phase, we should be aligned on what you want to achieve and how we’ll get you there.

Step two: Build out the basics


Once scoping is complete, we usually progress straight to the build phase. Working closely with your team, we create your DNS records and zone files. If you have existing DNS zones that you’d like to replicate, we export your zone files and import them into NS1. If you’re adding NS1 as a secondary provider, we’ll connect it to your primary server and configure handoffs accordingly. We also walk through the NS1 Connect API and start building connections to any automation platforms (like Terraform or Ansible) that you may be using. By the end of this phase, you should have the basic outlines of your DNS architecture up and running.
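
As a rough illustration of the zone-replication step, the sketch below uses the open-source dnspython library to pull a zone from the incumbent authoritative server over a standard AXFR transfer and save it as a zone file for import. It is not NS1 tooling; the server address and zone name are hypothetical placeholders, and the incumbent server must allow zone transfers from the host running the script.

```python
# Minimal sketch: pull the current zone from the incumbent authoritative
# server via AXFR and write it out as a standard zone file that a new
# provider can import. The server IP and zone name are hypothetical.
import dns.query
import dns.zone

current_server = "192.0.2.53"   # hypothetical incumbent authoritative server
zone_name = "example.com"       # hypothetical zone

zone = dns.zone.from_xfr(dns.query.xfr(current_server, zone_name))

with open(f"{zone_name}.zone", "w") as f:
    zone.to_file(f, relativize=False)  # absolute names are easier to review and diff
```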

Step three: Advanced configuration


In this phase, we expand on the baseline from phase two by optimizing your DNS configurations. Looking at the performance metrics for your application or business, we work with your team to set up traffic steering logic, build out automated functionality, add resilience and put together frameworks for monitoring. At the end of this phase, you should have a finely tuned DNS infrastructure that fits your performance and resilience requirements.
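
To make “traffic steering logic” concrete, here is a minimal, provider-agnostic sketch of the kind of decision a managed DNS platform evaluates at query time: prefer healthy endpoints, then pick the lowest-latency one for the client’s region. This is not NS1’s filter chain configuration; the endpoints, regions and latency figures are invented purely for illustration.

```python
# Provider-agnostic sketch of latency-based traffic steering.
# All endpoint data below is hypothetical.
ENDPOINTS = [
    {"answer": "198.51.100.10", "healthy": True,  "latency_ms": {"us-east": 12, "eu-west": 95}},
    {"answer": "203.0.113.20",  "healthy": True,  "latency_ms": {"us-east": 88, "eu-west": 14}},
    {"answer": "192.0.2.30",    "healthy": False, "latency_ms": {"us-east": 30, "eu-west": 40}},
]

def steer(region: str) -> str:
    """Return the best answer for a client in the given region."""
    healthy = [e for e in ENDPOINTS if e["healthy"]]   # drop endpoints that failed health checks
    if not healthy:
        return ENDPOINTS[0]["answer"]                  # fail open rather than return nothing
    best = min(healthy, key=lambda e: e["latency_ms"].get(region, float("inf")))
    return best["answer"]

print(steer("eu-west"))   # -> 203.0.113.20 (lowest eu-west latency among healthy endpoints)
print(steer("us-east"))   # -> 198.51.100.10
```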

Step four: Consistency check


In this phase, we test your configurations for accuracy and reliability. We also use this phase to update the NS1 system as needed so that it covers 100% of the original configuration. This step makes sure we don’t miss anything, including updates that weren’t captured in phase two and still need to be replicated. We use analytics tools to confirm that the DNS answers you’re providing are the right ones, and we double-check the performance of the system using real data to ensure that it meets the expectations set out at the beginning of the project. At the end of this phase, you should have a fully configured DNS architecture that’s ready to deploy.
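
As an illustration of this kind of consistency check, the sketch below again uses dnspython to query the incumbent and the new authoritative servers for the same records and flag any mismatches. The nameserver addresses and the record list are hypothetical placeholders, not NS1’s actual tooling.

```python
# Minimal sketch: ask the incumbent and new authoritative servers the same
# questions and flag any differences. IPs and records are hypothetical.
import dns.resolver

OLD_NS = "192.0.2.53"      # hypothetical incumbent authoritative server
NEW_NS = "198.51.100.53"   # hypothetical new authoritative server
RECORDS = [("www.example.com", "A"), ("example.com", "MX"), ("example.com", "TXT")]

def answers(nameserver, name, rdtype):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        return sorted(rr.to_text() for rr in resolver.resolve(name, rdtype))
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

for name, rdtype in RECORDS:
    old, new = answers(OLD_NS, name, rdtype), answers(NEW_NS, name, rdtype)
    status = "OK" if old == new else "MISMATCH"
    print(f"{status:8} {name} {rdtype}: old={old} new={new}")
```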

Step five: Migration


Now comes the fun part—turning it on! In this phase, we start gradually moving traffic onto NS1 systems, monitoring performance as we go. We start small to minimize risk and keep any impact from being felt across the enterprise, usually beginning with lower-impact portions of your application workloads. Then we gradually cut over larger portions of the enterprise until you’re fully up and running on NS1.

Step six: Optimization


The job of fine-tuning a network never truly ends. You learn a lot in the onboarding and migration steps, but over time your metrics for success often change. Applications come and applications go. With every change, your DNS will have to adapt. That’s why we stay in touch long after a migration is over. We want to make sure that the authoritative DNS we build for you meets your long-term needs.

Taking the first step


Migrating your authoritative DNS doesn’t have to be difficult. The risk is manageable. It doesn’t have to consume a lot of time or resources. The result—an authoritative DNS that meets your business needs and delivers great user experiences—is clearly worth it.

The key is to have someone who’s been there before, who knows what to look for. That’s where NS1’s DNS experts add concrete value. We’ve migrated hundreds of customers—large and small, complex and straightforward, from every other authoritative DNS solution on the market. That’s why our customers love us. We focus on what we know best—the DNS infrastructure your business relies on every day.

It doesn’t matter how antiquated, proprietary, or complex your authoritative DNS setup is today. We work just as well with customers who have self-hosted with BIND for decades as with customers who use cloud providers for their DNS. With all the customers we’ve migrated, not much will truly surprise us at this point.

Source: ibm.com

Saturday, 16 December 2023

Five open-source AI tools to know


Open-source artificial intelligence (AI) refers to AI technologies whose source code is freely available for anyone to use, modify and distribute. When AI algorithms, pre-trained models and data sets are available for public use and experimentation, creative AI applications emerge as a community of volunteer enthusiasts builds upon existing work and accelerates the development of practical AI solutions. As a result, these technologies often yield some of the best tools for handling complex challenges across many enterprise use cases.

Open-source AI projects and libraries, freely available on platforms like GitHub, fuel digital innovation in industries like healthcare, finance and education. Readily available frameworks and tools empower developers by saving time and allowing them to focus on creating bespoke solutions to meet specific project requirements. Leveraging existing libraries and tools, small teams of developers can build valuable applications for diverse platforms like Microsoft Windows, Linux, iOS and Android.

The diversity and accessibility of open-source AI allow for a broad set of beneficial use cases, like real-time fraud protection, medical image analysis, personalized recommendations and customized learning. This availability makes open-source projects and AI models popular with developers, researchers and organizations. By using open-source AI, organizations effectively gain access to a large, diverse community of developers who constantly contribute to the ongoing development and improvement of AI tools. This collaborative environment fosters transparency and continuous improvement, leading to feature-rich, reliable and modular tools. Additionally, the vendor neutrality of open-source AI ensures organizations aren’t tied to a specific vendor.

While open-source AI offers enticing possibilities, its free accessibility poses risks that organizations must navigate carefully. Delving into custom AI development without well-defined goals and objectives can lead to misaligned results, wasted resources and project failure. Further, biased algorithms can produce unusable outcomes and perpetuate harmful assumptions. The readily available nature of open-source AI also raises security concerns; malicious actors could leverage the same tools to manipulate outcomes or create harmful content.

Biased training data can lead to discriminatory outcomes, while data drift can render models ineffective and labeling errors can lead to unreliable models. Enterprises may expose their stakeholders to risk when they use technologies that they didn’t build in-house. These issues highlight the need for careful consideration and responsible implementation of open-source AI.

As of this writing, tech giants are divided in opinion on the topic (this link resides outside of IBM). Through the AI Alliance, companies like Meta and IBM advocate for open-source AI, emphasizing open scientific exchange and innovation. In contrast, Google, Microsoft and OpenAI favor a closed approach, citing concerns about the safety and misuse of AI. Governments like the U.S. and EU are exploring ways to balance innovation with security and ethical concerns.

The transformative power of open-source AI


Despite the risks, open-source AI continues to grow in popularity. Many developers are choosing open-source AI frameworks over proprietary APIs and software. According to the 2023 State of Open Source report (this link resides outside of IBM), a notable 80% of survey respondents reported increased use of open-source software over the past year, with 41% indicating a “significant” increase.

As open-source AI becomes more widely used among developers and researchers, primarily due to investments by tech giants, organizations stand to reap the rewards and gain access to transformative AI technologies.

In healthcare, IBM Watson Health uses TensorFlow for medical image analysis, enhanced diagnostic procedures and more personalized medicine. J.P. Morgan’s Athena uses Python-based open-source AI to innovate risk management. Amazon integrates open-source AI to refine its recommendation systems, streamline warehouse operations and enhance Alexa AI. Similarly, online educational platforms like Coursera and edX use open-source AI to personalize learning experiences, tailor content recommendations and automate grading systems.

Numerous applications and media services, including companies like Netflix and Spotify, also merge open-source AI with proprietary solutions, employing machine learning libraries like TensorFlow or PyTorch to enhance recommendations and boost performance.

Five open-source AI tools to know


The following open-source AI frameworks offer innovation, foster collaboration and provide learning opportunities across various disciplines. They are more than tools; each equips users, from novice to expert, to harness the massive potential of AI.

  • TensorFlow is a flexible, extensible machine learning framework that supports programming languages like Python and JavaScript. TensorFlow allows programmers to construct and deploy machine learning models across various platforms and devices. Its robust community support and extensive library of pre-built models and tools streamline the development process, making it easier for beginners and experienced practitioners alike to innovate and experiment with AI.
  • PyTorch is an open-source AI framework offering an intuitive interface that enables easier debugging and a more flexible approach to building deep learning models. Its strong integration with Python libraries and support for GPU acceleration ensure efficient model training and experimentation. It is a popular choice among researchers and developers for rapid prototyping and for AI and deep learning research.
  • Keras, an open-source neural network library written in Python, is known for its user-friendliness and modularity, allowing for easy and fast prototyping of deep learning models. It stands out for its high-level API, which is intuitive for beginners while remaining flexible and powerful for advanced users, making it a popular choice for educational purposes and complex deep-learning tasks.
  • Scikit-learn is a powerful open-source Python library for machine learning and predictive data analysis. Providing scalable supervised and unsupervised learning algorithms, it has been instrumental in the AI systems of major companies like J.P. Morgan and Spotify. Its simple setup, reusable components and large, active community make it accessible and efficient for data mining and analysis across various contexts (see the brief sketch after this list).
  • OpenCV is a library of programming functions with comprehensive computer vision capabilities, real-time performance, a large community and broad platform compatibility, making it an ideal choice for organizations seeking to automate tasks, analyze visual data and build innovative solutions. Its scalability allows it to grow with organizational needs, making it suitable for both startups and large enterprises.
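
To give a sense of how little code these libraries require, here is a minimal scikit-learn sketch that loads a small built-in dataset, trains a classifier and reports accuracy on a held-out test set; the dataset and model are chosen purely for illustration.

```python
# Minimal scikit-learn workflow: built-in dataset, train/test split,
# random forest classifier, held-out accuracy.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```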

The surging popularity of open-source AI tools, from frameworks like TensorFlow, Apache and PyTorch to community platforms like Hugging Face, reflects a growing recognition that open-source collaboration is the future of AI development. Participating in these communities and collaborating on the tools helps organizations gain access to the best technology and talent.

The future of open-source AI


Open-source AI reimagines how enterprise organizations scale and transform. As the technology’s influence extends across industries, inspiring widespread adoption and a deeper application of AI capabilities, here’s what organizations can look forward to as open-source AI continues to drive innovation.

Advancements in natural language processing (NLP) tools like Hugging Face Transformers, large language models (LLMs) and computer vision libraries like OpenCV will unlock more complex and nuanced applications, like more sophisticated chatbots, advanced image recognition systems and even robotics and automation technologies.
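
As a small taste of what these NLP building blocks look like in practice, the sketch below uses the Hugging Face Transformers pipeline API; the default sentiment model is downloaded on first run, and the example sentence is made up.

```python
# Minimal sketch of an NLP building block using the Transformers pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The new self-service portal made onboarding painless."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```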

Projects like Open Assistant, the open-source chat-based AI assistant, and GPT Engineer, a generative AI tool that allows users to create applications from text prompts, foreshadow the future of ubiquitous, highly personalized AI assistants capable of handling intricate tasks. This shift towards interactive, user-friendly AI solutions suggests a deeper integration of AI into our daily lives.

While open-source AI is an exciting technological development with many future applications, it currently requires careful navigation and solid partnerships for an enterprise to adopt AI solutions successfully. Open-source models often fall short of state-of-the-art models and require substantial fine-tuning to reach the level of effectiveness, trust and safety needed for enterprise use. And while open-source AI offers accessibility, organizations still need significant investments in compute resources, data infrastructure, networking, security, software tools and expertise to use it effectively.

Many organizations need bespoke AI solutions that current open-source AI tools and frameworks can only partially deliver. As you evaluate open-source AI’s impact on organizations worldwide, consider how your business can take advantage, and explore how IBM offers the experience and expertise needed to build and deploy a reliable, enterprise-grade AI solution.

Source: ibm.com

Thursday, 14 December 2023

Promote resilience and responsible emissions management with the IBM Maximo Application Suite


Embracing responsible emissions management can transform how organizations impact the health and profitability of their assets. This opportunity is undeniable. An IBM CEO study, based on interviews with 3,000 CEOs worldwide, reveals that CEOs who successfully integrate sustainability and digital transformation report a higher average operating margin than their peers. Additionally, more than 80% of the interviewed CEOs say sustainability investments will drive better business results in the next five years. This study underscores the transformative potential of aligning businesses with sustainable practices.

As a leader in asset management operations, you must help your company deliver on the bottom line. Let’s explore how the IBM® Maximo® Application Suite (MAS) can help you optimize the efficiency of your assets through operational emissions management.

Unlocking the benefits of emissions management


Emissions management is not only about tracking greenhouse gases; it also involves controlling and overseeing a wide range of emissions released into the atmosphere during industrial processes. Emissions can be intentional, such as exhaust gases from a power plant, or they can be unintentional, like pollutants from manufacturing. These processes result in byproducts, including leaks, effluents, waste oil and hazardous waste.

To manage these byproducts effectively, focus on optimizing your assets and identifying emerging issues early on. Well-maintained assets produce fewer byproducts and last longer. Additionally, minimizing waste and hazardous materials promotes a safer and cleaner environment.

Besides environmental responsibility, emissions management boosts the bottom line through operational efficiency, regulatory compliance, safer working environments and an enhanced corporate image. Let’s explore each of these aspects in more detail. 

Strategic planning and operational efficiency

Strategic maintenance planning drives significant cost savings. Efficient assets last longer, perform better and help to ensure uninterrupted production. For example, Sund & Baelt automated its inspection work to monitor and manage its critical infrastructure, helping to reduce time and costs. With a better understanding of asset health and the risks to address with proactive maintenance, Sund & Baelt estimates that it can increase the lifetime of bridges, tunnels and other assets while decreasing their total carbon footprint. Establishing common sustainability goals also encourages collaboration among typically siloed departments, like operations, safety and maintenance. Fostering this collaboration better positions you to lead future asset management programs, and achieving these objectives enhances your organization’s operational health.

Compliance and fines

Regulatory bodies like the US Environmental Protection Agency (EPA) set stringent standards for companies to meet. Emissions management is pivotal in enabling compliance, as it helps organizations trace and resolve issues. This approach fosters greater accountability, driving a culture of responsibility and transparency within the company. Non-compliance leads to a rapid accumulation of fines, underscoring the importance of adhering to regulations. For example, to help protect the stratospheric ozone layer and reduce the risks of climate change, the EPA has levied millions of dollars in fines against companies that mismanage emissions under the Clean Air Act. In 2022, the Inflation Reduction Act amended the Clean Air Act and introduced new fines for methane leaks, starting at USD 900 per metric ton of methane emissions in 2024 and rising to USD 1,500 by 2026. According to the Congressional Research Service, this affects over 2,000 facilities in the petroleum and natural gas industry, with total fines expected to reach USD 1.1 billion in 2026 and USD 1.8 billion in 2028. This would mean an average facility facing annual fines totaling USD 800,000.
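
To put those per-ton rates in perspective, here is a simple illustrative calculation; the 500-metric-ton excess is a hypothetical figure, not drawn from the report.

```python
# Illustrative only: fine exposure at the methane charge rates cited above,
# for a hypothetical facility exceeding its threshold by 500 metric tons.
rates_usd_per_ton = {2024: 900, 2026: 1500}  # rates cited in the text
excess_tons = 500                            # hypothetical excess

for year, rate in rates_usd_per_ton.items():
    print(f"{year}: {excess_tons} t x USD {rate}/t = USD {excess_tons * rate:,}")
# 2024: 500 t x USD 900/t = USD 450,000
# 2026: 500 t x USD 1500/t = USD 750,000
```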

Operational health and working environment

Emissions management efforts can help establish a safe working environment across your organization. Reducing exposure to hazardous substances promotes better air quality, reducing health risks and positively impacting the well-being of workers. Complying with occupational health and safety standards creates a workplace that prioritizes employee safety and meets regulatory requirements. Managing and monitoring emissions further reduces the likelihood of accidents and incidents. Additionally, emissions management includes developing strategies for handling emergencies related to hazardous materials. As VPI states, “There are always inherent dangers, but you can make them safe places to work by employing a robust and efficient maintenance strategy and safety systems of work.” Having a robust operations maintenance strategy in place enhances the organization’s ability to respond effectively to unexpected incidents, safeguarding employees.

An efficient, sustainable and responsible enterprise reaps the benefits of a healthy culture. This approach attracts top talent and appeals to investors. With more buyers favoring sustainable and responsible vendors, you’ll also see an increase in sales to this market.

Enterprises demonstrating clear progress on sustainability commitments often receive more support from governing bodies. The Inflation Reduction Act, for example, offers significant tax credits to companies that can capture and store carbon dioxide emitted from industrial operations. This external validation reinforces the need for effective emissions management and emphasizes the multifaceted benefits to businesses, society and the environment.

In short, emissions management involves taking charge, reducing waste and making your business more efficient, performant and healthy.

Better manage emissions with MAS


MAS is a comprehensive suite of applications designed to improve asset health and reliability. How can MAS help you better manage emissions?

Driving a reliability and sustainability culture starts with designing the optimal strategy for your asset operations, placing emphasis on your most critical assets. With Maximo Reliability Strategies, cross-functional teams can accelerate failure mode and effects analysis (FMEA) for assets prone to emission issues. Asset operations teams can then apply reliability-centered maintenance strategies to ensure your assets are managed to the highest standards of reliability.

Key components in MAS, including Maximo® Health, Maximo® Manage and Health, Safety and Environmental Management (HSE), play a pivotal role in emissions management. These applications provide operational data capture, governance, safety measures and incident management. Specifically, the HSE component offers occupational health incident tracking, process safety, permits, consents, identification of environmental emissions, ISO 14000 compliance and investigations.

Ensuring the optimal health of your assets requires a team with a high-performance culture, empowered with the tools to realize their vision. To enhance your emissions management strategy, apply Asset Performance Management (APM) within MAS by using components like Maximo® Monitor and Maximo® Predict, which are powered by industry-leading algorithms and AI through IBM watsonx services. With APM, you can take a proactive and prescriptive approach to emissions management, leading to even greater efficiency gains and better asset health.

Envizi and MAS: Better together


The IBM Envizi ESG Suite offers key capabilities for a comprehensive emissions management solution, complementing the operational excellence that MAS enables.

Envizi delivers top-down management reporting, while MAS offers bottom-up operational asset management. Envizi provides enterprise- and site-level reporting, whereas MAS adds asset-level reporting for full traceability and accountability. Together, the two solutions give you visibility across the problem spectrum, from the enterprise level down to the individual asset. Furthermore, you can address issues surfaced by Envizi or MAS directly at the asset level, or even prevent those issues from occurring in the first place.

With these two solutions, asset management leaders can accurately gauge their organizations’ performance and the operationalization of their sustainability goals.

Source: ibm.com