Wednesday, 31 July 2024

Step-by-step guide: Generative AI for your business


Generative artificial intelligence (gen AI) is transforming the business world by creating new opportunities for innovation, productivity and efficiency. This guide offers a clear roadmap for businesses to begin their gen AI journey. It provides practical insights accessible to all levels of technical expertise, while also outlining the roles of key stakeholders throughout the AI adoption process.

1. Establish generative AI goals for your business


Establishing clear objectives is crucial for the success of your gen AI initiative.

Identify specific business challenges that gen AI could address

When establishing Generative AI goals, start by examining your organization’s overarching strategic objectives. Whether it’s improving customer experience, increasing operational efficiency, or driving innovation, your AI initiatives should directly support these broader business aims.

Identify transformative opportunities

Look beyond incremental improvements and focus on how Generative AI can fundamentally transform your business processes or offerings. This might involve reimagining product development cycles, creating new revenue streams, or revolutionizing decision-making processes. For example, a media company might set a goal to use Generative AI to create personalized content at scale, potentially opening up new markets or audience segments.

Involve business leaders to outline expected outcomes and success metrics

Establish clear, quantifiable metrics to gauge the success of your Generative AI initiatives. These could include financial indicators like revenue growth or cost savings, operational metrics such as productivity improvements or time saved, or customer-centric measures like satisfaction scores or engagement rates.

2. Define your gen AI use case


With a clear picture of the business problem and desired outcomes, the next step is to delve into the details and distill the business problem into a concrete use case.

Technical feasibility assessment

Conduct a technical feasibility assessment to evaluate the complexity of integrating generative AI into existing systems. This includes determining whether custom model development is necessary or if pre-trained models can be utilized, and considering the computational requirements for different use cases.

Prioritize the right use case

Develop a scoring matrix to weigh factors such as potential revenue impact, cost reduction opportunities, improvement in key business metrics, technical complexity, resource requirements, and time to implementation.
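
A minimal sketch of such a scoring matrix follows; the criteria, weights and example use cases are illustrative assumptions, not a prescribed methodology.

```python
# Minimal sketch of a use-case scoring matrix (illustrative criteria and weights).
# Higher scores suggest higher priority.

CRITERIA = {
    "revenue_impact": 0.25,        # potential to grow revenue
    "cost_reduction": 0.20,        # potential to lower operating costs
    "metric_improvement": 0.15,    # lift in key business metrics
    "technical_simplicity": 0.15,  # inverse of technical complexity
    "resource_availability": 0.15, # people and budget on hand
    "speed_to_implement": 0.10,    # shorter time to value scores higher
}

def score_use_case(ratings: dict[str, int]) -> float:
    """Weighted sum of 1-5 ratings for each criterion."""
    return sum(CRITERIA[name] * ratings.get(name, 0) for name in CRITERIA)

use_cases = {
    "customer-service chatbot": {
        "revenue_impact": 3, "cost_reduction": 5, "metric_improvement": 4,
        "technical_simplicity": 4, "resource_availability": 4, "speed_to_implement": 4,
    },
    "personalized content generation": {
        "revenue_impact": 5, "cost_reduction": 2, "metric_improvement": 4,
        "technical_simplicity": 2, "resource_availability": 3, "speed_to_implement": 2,
    },
}

for name, ratings in sorted(use_cases.items(), key=lambda kv: -score_use_case(kv[1])):
    print(f"{name}: {score_use_case(ratings):.2f}")
```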

Design a proof of concept (PoC)

Once a use case is chosen, outline a technical proof of concept that includes data preprocessing requirements, model selection criteria, integration points with existing systems, and performance metrics and evaluation criteria.

3. Involve stakeholders early


Early engagement of key stakeholders is vital for aligning your gen AI initiative with organizational needs and ensuring broad support. Most teams should include at least four types of team members.

  • Business Manager: Involve experts from the business units that will be impacted by the selected use cases. They will help align the pilot with their strategic goals and identify any change management and process reengineering required to successfully run the pilot.
  • AI developers/software engineers: Provide user interface, front-end application and scalability support. Organizations in which AI developers or software engineers are involved in the use-case development stage are much more likely to reach mature levels of AI implementation.
  • Data scientists and AI experts: Historically, data scientists built and chose traditional ML models for their use cases. We now see their role evolving into developing foundation models for gen AI. Data scientists will typically help with training, validating and maintaining foundation models that are optimized for data tasks.
  • Data engineers: A data engineer lays the foundation of any generative AI application by preparing, cleaning and validating the data required to train and deploy AI models. They design data pipelines that integrate different datasets to ensure the quality, reliability and scalability needed for AI applications.

4. Assess your data landscape


A thorough evaluation of your data assets is essential for successful gen AI implementation.

Take inventory and evaluate existing data sources relevant to your gen AI goals

Data is indeed the foundation of generative AI, and a comprehensive inventory is crucial. Start by identifying all potential data sources across your organization, including structured databases. Assess each source for its relevance to your specific gen AI goals. For example, if you’re developing a customer service chatbot, you’ll want to focus on customer interaction logs, product information databases, and FAQs.

Use IBM® watsonx.data™ to centralize and prepare your data for gen AI workloads

Tools such as IBM watsonx.data can be invaluable in centralizing and preparing your data for gen AI workloads. For instance, watsonx.data offers a single point of entry to access all your data across cloud and on-premises environments. This unified access simplifies data management and integration tasks. By using this centralized approach, watsonx.data streamlines the process of preparing and validating data for AI models. As a result, your gen AI initiatives are built on a solid foundation of trusted, governed data.

Bring in data engineers to assess data quality and set up data preparation processes

This is when your data engineers use their expertise to evaluate data quality and establish robust data preparation processes. Remember, the quality of your data directly impacts the performance of your gen AI models.
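
As a rough illustration of the kind of checks a data engineer might automate, the sketch below flags duplicate rows and heavily incomplete columns before data is used for training; the file name, column names and thresholds are hypothetical assumptions.

```python
import pandas as pd

# Minimal sketch of automated data-quality checks for a gen AI training corpus.
# The file name and the 5% missing-value threshold are illustrative placeholders.
df = pd.read_csv("customer_interactions.csv")

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_by_column": df.isna().mean().round(3).to_dict(),  # share of nulls per column
}

# Example rule-based checks an engineer might enforce before training.
issues = []
if report["duplicate_rows"] > 0:
    issues.append(f"{report['duplicate_rows']} duplicate rows")
for column, share in report["missing_by_column"].items():
    if share > 0.05:  # more than 5% missing is flagged
        issues.append(f"column '{column}' is {share:.0%} empty")

print(report)
print("Blocking issues:" if issues else "No blocking issues found.", issues)
```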

5. Select a foundation model for your use case


Choosing the right AI model is a critical decision that shapes your project’s success.

Choose the appropriate model type for your use case

Data scientists play a crucial role in selecting the right foundation model for your specific use case. They evaluate factors like model performance, size, and specialization to find the best fit. IBM watsonx.ai offers a foundation model library that simplifies this process, providing a range of pre-trained models optimized for different tasks. This library allows data scientists to quickly experiment with various models, accelerating the selection process and ensuring the chosen model aligns with the project’s requirements.
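
The sketch below shows the general pattern of scoring candidate models against a small, use-case-specific evaluation set; the model names, the generate() stub and the scoring metric are hypothetical placeholders rather than the watsonx.ai SDK.

```python
# Illustrative model-selection harness. Model names, generate() and the scoring
# metric are placeholders; swap in your platform's SDK and task-specific metrics.

CANDIDATES = ["small-instruct-model", "medium-instruct-model", "large-general-model"]

EVAL_SET = [
    {"prompt": "Summarize: customer cannot log in after password reset.",
     "reference": "Customer locked out following password reset."},
    {"prompt": "Summarize: invoice charged twice in March.",
     "reference": "Duplicate March invoice charge."},
]

def generate(model_name: str, prompt: str) -> str:
    """Placeholder for a call to the model-serving platform."""
    return f"[{model_name}] draft summary of: {prompt}"

def quality_score(output: str, reference: str) -> float:
    """Crude word-overlap metric; in practice use task metrics or human review."""
    overlap = set(output.lower().split()) & set(reference.lower().split())
    return len(overlap) / max(len(reference.split()), 1)

for model in CANDIDATES:
    scores = [quality_score(generate(model, ex["prompt"]), ex["reference"]) for ex in EVAL_SET]
    print(f"{model}: mean score {sum(scores) / len(scores):.2f}")
```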

Evaluate pretrained models in watsonx.ai, such as IBM Granite

These models are trained on trusted enterprise data from sources such as the internet, academia, code, legal and finance, making them ideal for a wide range of business applications. Consider the tradeoffs between pretrained models, such as IBM Granite (available in platforms such as watsonx.ai), and custom-built options.

Involve developers to plan model integration into existing systems and workflows

Engage your AI developers early to plan how the chosen model integrates with your existing systems and workflows, helping to ensure a smooth adoption process.

6. Train and validate the model


Training and validation are crucial steps in refining your gen AI model’s performance.

Monitor training progress, adjust parameters and evaluate model performance

Use platforms such as watsonx.ai for efficient training of your model. Throughout the process, closely monitor progress and adjust parameters to optimize performance.
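
A generic sketch of this monitor-and-adjust loop is shown below; train_one_epoch() and validate() are placeholders for whatever training or tuning stack you use, and the early-stopping thresholds are arbitrary assumptions.

```python
import random

# Generic monitor-and-adjust loop with early stopping. The training and
# validation functions are placeholders; thresholds and patience are arbitrary.

def train_one_epoch(learning_rate: float) -> None:
    """Placeholder for one pass of training or tuning on your platform."""

def validate() -> float:
    """Placeholder validation loss; replace with your held-out evaluation."""
    return random.uniform(0.1, 1.0)

best_loss, patience, bad_epochs, learning_rate = float("inf"), 3, 0, 1e-4

for epoch in range(20):
    train_one_epoch(learning_rate)
    val_loss = validate()
    print(f"epoch={epoch} lr={learning_rate:.1e} val_loss={val_loss:.4f}")
    if val_loss < best_loss - 1e-4:
        best_loss, bad_epochs = val_loss, 0      # improvement: keep current settings
    else:
        bad_epochs += 1
        learning_rate *= 0.5                     # adjust a parameter when progress stalls
        if bad_epochs >= patience:
            print("No recent improvement; stopping early.")
            break
```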

Conduct thorough testing to assess model behavior and compliance

Rigorous testing is crucial. Governance toolkits such as watsonx.governance can help assess your model’s behavior and help ensure compliance with relevant regulations and ethical guidelines.

Use watsonx.ai to train the model on your prepared data set

This step is iterative, often requiring multiple rounds of refinement to achieve the desired results.

7. Deploy the model


Deploying your gen AI model marks the transition from development to real-world application.

Integrate the trained model into your production environment with IT and developers

Developers take the lead in integrating models into existing business applications. They focus on creating APIs or interfaces that allow seamless communication between the foundation model and the application. Developers also handle aspects like data preprocessing, output formatting and scalability, ensuring the model’s responses align with business logic and user experience requirements.
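
The sketch below shows the shape of such an integration layer using FastAPI; the endpoint path, prompt template and generate_text() stub are assumptions standing in for your deployed model and business logic.

```python
# Minimal sketch of exposing a trained model to business applications through an
# API layer. generate_text() is a placeholder for your model runtime; the
# preprocessing and output formatting mirror the responsibilities described above.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    user_id: str
    question: str

def generate_text(prompt: str) -> str:
    """Placeholder for a call to the deployed foundation model."""
    return f"(model response to: {prompt})"

@app.post("/v1/answers")
def answer(query: Query) -> dict:
    prompt = f"Answer the customer question concisely.\nQuestion: {query.question}"  # preprocessing
    raw = generate_text(prompt)
    return {"user_id": query.user_id, "answer": raw.strip()}                         # output formatting
```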

Establish feedback loops with users and your technical team for continuous improvement

It is essential to establish clear feedback loops with users and your technical team. This ongoing communication is vital for identifying issues, gathering insights and driving continuous improvement of your gen AI solution.

8. Scale and evolve


As your gen AI project matures, it’s time to expand its impact and capabilities.

Expand successful AI workloads to other areas of your business

As your initial gen AI project proves its value, look for opportunities to apply it across your organization.

Explore advanced features in watsonx.ai for more complex use cases

This might involve adapting the model for similar use cases or exploring more advanced features in platforms such as watsonx.ai to tackle complex challenges.

Maintain strong governance practices as you scale gen AI capabilities

As you scale, it’s crucial to maintain strong governance practices. Tools such as watsonx.governance can help ensure that your expanding gen AI capabilities remain ethical, compliant and aligned with your business objectives.

Embark on your gen AI transformation


Adopting generative AI is more than just implementing new technology; it’s a transformative journey that can reshape your business landscape. This guide has laid the foundation for using gen AI to drive innovation and secure competitive advantages. As you take your next steps, remember to:

  • Prioritize ethical practices in AI development and deployment
  • Foster a culture of continuous innovation and learning
  • Stay adaptable as gen AI technologies and best practices evolve

By embracing these principles, you’ll be well positioned to unlock the full potential of generative AI in your business.

Unleash the power of gen AI in your business today


Discover how the IBM watsonx platform can accelerate your gen AI goals. From data preparation with watsonx.data to model development with watsonx.ai and responsible AI practices with watsonx.governance, we have the tools to support your journey every step of the way.

Source: ibm.com

Saturday, 27 July 2024

Revolutionizing community access to social services: IBM and Microsoft’s collaborative approach


In an era when technological advancements and economic growth are often hailed as measures of success, it is essential to pause and reflect on the underlying societal challenges that these advancements often overlook, and to consider how they can be used to genuinely improve the human condition.

IBM Consulting and Microsoft, together with government leaders, are answering that call, partnering to develop a platform that bridges the divide and enhances the delivery of social services support to communities in need.

Our shared purpose


Communities face a myriad of pressing societal challenges daily, from homelessness and juvenile justice to violence, mental health and food insecurity. In response, many government organizations are adopting transformative strategies with the goal of creating a society where all individuals have access to the necessary support and resources to flourish.

Take, for instance, government leaders who are adopting a “Care First” strategy. This approach is about redirecting thousands from the criminal justice system into supportive programs tailored to their “re-entry into society” needs. These programs help communities with housing, transportation, access to substance use treatment, and other essential services.

Other innovative leaders are dedicated to preventing violence, enhancing maternal health and equipping transitional age youth for success. A broader segment of leaders are embracing a whole-person care approach, focusing on the community at large rather than specific groups, thereby integrating social services across health, education and other vital sectors.

Introducing IBM Connect360


Many government organizations across the world have a wealth of different services and programs available to them. However, they are not able to bring the power of these systems to the people and communities that really need them. In many cases, the delivery models, technology (systems) and underlying data have been developed in silos, and users need to work with each system and service separately.

IBM Connect360 facilitates integrated social services delivery, transforms data into actionable information and promotes cross-agency collaboration. This solution is focused on achieving five key goals:

  • Enable collaboration by creating an electronic information exchange system
  • Improve citizen access to services and resources through shared information
  • Empower the citizen by permitting active participation in service decisions and delivery
  • Strengthen decision‐making ability through data integration and business analytics
  • Increase the region’s connection to community data through interoperability

IBM Connect360 is a platform that seamlessly integrates data from various siloed social services agency systems. This capability is designed to transform and align disparate data sources with the HL7 FHIR (Fast Healthcare Interoperability Resources) and HSDS (Human Services Data Specification) standards, so that information is standardized, protected and easily accessible.

By adhering to the HL7 FHIR specification, IBM Connect360 also facilitates interoperability of health-related data with a wide range of healthcare systems to drive continuity and coordination of care for individuals in need.

This transformation not only makes the data readily available within IBM Connect360 but also enables seamless interoperability with other applications. As a result, service providers can exchange and update information to provide coordinated and effective assistance for those who rely on these crucial services.

IBM Connect360, hosted on Microsoft Azure, provides the level of isolation, security, performance, scale and reliability required to support sensitive workloads across a broad spectrum of unique requirements. As we look to the future, IBM® and Microsoft’s investment in AI and their commitment to advancing responsible, secured and trustworthy AI set the baseline for future enhancements of IBM Connect360.

Transforming lives with IBM Connect360 and Microsoft Azure


Social services encompass a broad spectrum of programs from multiple departments, such as health, behavioral health, social services, housing, justice and more, all aimed at supporting individuals with complex needs.

Meeting these needs depends on interdepartmental collaboration, which is essential for improving client outcomes. Another key factor in achieving better results is the participation of Contracted Service Providers (CSPs) and Community-Based Organizations (CBOs) through the departments’ network. Beyond directly offering services, departments also have the responsibility for providing a central resource repository that their partners can use to deliver services.

The operational model in use is supported by IBM Connect360. In this model, the government agency provides the foundational systems and APIs, which give access to the core systems in near real time. This allows various business applications to interact with each other across different departments by using these APIs.

Each entity involved in this model focuses on its core competencies: The agency manages data, establishes business rules, ensures compliance and evaluates performance, while CSPs and CBOs can create service-oriented applications that are closer to the client. This model facilitates the swift introduction of new public assistance and healthcare programs by making efficient use of the existing agency resources. The ecosystem works together during the launch process to ensure that services are delivered to clients promptly, without any delays.

“At IBM, we understand the importance of effective communication and collaboration across government agencies,” said Cristina Caballé Fuguet, Vice President, Global Public Sector at IBM Consulting. “With IBM Connect360, government agencies can connect with citizens, share information, and gather feedback, all in a protected and scalable environment. We’re proud to see how IBM Connect360 with Microsoft Azure are helping governments around the world better serve their citizens and build stronger, more resilient communities.”

Case study: How is IBM Connect360 helping transform one citizen’s life?


Let’s consider the fictional story of Michael, a veteran who was finding the transition to civilian life difficult. Dealing with challenges including PTSD, mental health issues and substance abuse, he found himself trapped in a vicious cycle of homelessness. How did IBM Connect360 help him?

With a care coordinator’s help, Michael set up his account on IBM Connect360. He provided information about his circumstances, and IBM Connect360 assessed his needs and offered tailored recommendations for services such as interim housing, substance use treatment, mental health support, transportation and skills training, which ultimately helped him find a job.

With each step, Michael grew more confident and independent. He used the recommended services diligently, found solace in his supportive community of care providers, and slowly rebuilt his life, piece by piece. The technology solution was not just a guide but a constant companion in his journey to stability.

This is not just about one man’s path to stability; it shows that the right tools, combined with a supportive network, can bring about real, positive change in a person’s life. Michael’s example is a testament to the power of compassionate intervention and the potential applications of technology in social support systems. With the right tools and support, transformation is always within reach, and a brighter future is not just a dream but a possible reality. When IBM Consulting, Microsoft, Governments and Communities join forces these outcomes can happen at scale.

Experience the transformative power firsthand


IBM Connect360, along with the IBM Community Health user interface and Microsoft Azure, is a powerful solution that has the potential to bring about real, positive change in people’s lives. This comprehensive and open platform is designed to support all stages of service delivery, from understanding individual needs, to locating and connecting with the service providers that can support them, to effectively measuring outcomes and quality of care.

“IBM Connect360 ensures that every aspect of community service delivery is enhanced, fostering a more connected, efficient and impactful system,” said Angela Heise, Corporate Vice President, Worldwide Public Sector at Microsoft. “We are looking forward to continuing our strategic partnership with IBM Consulting and take the solution to the next level.”

A distinctive feature of this platform is its versatility in catering to a wide range of stakeholders: community members, service navigators, care coordinators, service providers and government leaders alike will find immense value in its features.

We recommend you experience the transformative power of this solution firsthand. Reach out to your IBM Consulting and Microsoft representatives to schedule a personalized demo and witness how this solution can be tailored to meet your unique needs and requirements.

Source: ibm.com

Thursday, 25 July 2024

Optimizing data flexibility and performance with hybrid cloud


With the global data storage market set to more than triple by 2032, businesses face increasing challenges in managing their growing data. In response, a shift to hybrid cloud solutions is transforming data management, enhancing flexibility and boosting performance across organizations.

By focusing on five key aspects of cloud adoption for optimizing data management—from evolving data strategies to ensuring compliance—businesses can create adaptable, high-performing data ecosystems that are primed for AI innovation and future growth.

1. The evolution of data management strategies


Data management is undergoing a significant transformation, especially with the arrival of generative AI. Organizations are increasingly adopting hybrid cloud solutions that blend the strengths of private and public clouds, an approach that is particularly beneficial in data-intensive sectors and for companies embarking on an AI strategy to fuel growth.

A McKinsey & Company study reveals that companies aim to have 60% of their systems in the cloud by 2025, underscoring the importance of flexible cloud strategies. Hybrid cloud solutions address this trend by offering open architectures, combining high performance with scalability. For technical professionals, this shift means working with systems that can adapt to changing needs without compromising on performance or security.

2. Seamless deployment and workload portability


One of the key advantages of hybrid cloud solutions is the ability to deploy across any cloud or on-premises environment in minutes. This flexibility is further enhanced by workload portability through advanced technologies like Red Hat® OpenShift®.  

This capability allows organizations to align their infrastructure with both multicloud and hybrid cloud data strategies, ensuring that workloads can be moved or scaled as needed without being locked into a single environment. This adaptability is crucial for enterprises dealing with varying compliance requirements and evolving business needs. 

3. Enhancing AI and analytics with unified data access


 Hybrid cloud architectures are proving instrumental in advancing AI and analytics capabilities. A 2023 Gartner survey reveals that “two out of three enterprises use hybrid cloud to power their AI initiatives”, underscoring its critical role in modern data strategies. By using open formats, these solutions provide unified data access, allowing seamless sharing of data across an organization without the need for extensive migration or restructuring. 

Furthermore, advanced solutions like IBM watsonx.data™ integrate vector databases such as Milvus, an open-source solution that enables efficient storage and retrieval of high-dimensional vectors. This integration is crucial for AI and machine learning tasks, particularly in fields like natural language processing and computer vision. By providing access to a wider pool of trusted data, it enhances the relevance and precision of AI models, accelerating innovation in these areas.

For data scientists and engineers, these features translate to more efficient data preparation for AI models and applications, leading to improved accuracy and relevance in AI-driven insights and predictions. 
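
As a toy illustration of what the vector store contributes, the sketch below indexes document embeddings and retrieves the nearest neighbors of a query; the random embed() stub stands in for a real embedding model, and no Milvus API is shown.

```python
import numpy as np

# Toy illustration of the retrieval step a vector database performs: store
# embeddings, then return the documents closest to a query vector.
# embed() is a stand-in for a real embedding model, so results are not meaningful.

rng = np.random.default_rng(0)

def embed(text: str) -> np.ndarray:
    vec = rng.normal(size=384)            # placeholder 384-dimensional embedding
    return vec / np.linalg.norm(vec)

documents = ["refund policy", "shipping times", "warranty coverage"]
index = np.stack([embed(d) for d in documents])   # what the vector store holds

query_vec = embed("how long does delivery take?")
scores = index @ query_vec                         # cosine similarity (unit vectors)
top = np.argsort(scores)[::-1][:2]
print([documents[i] for i in top])
```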

4. Optimizing performance with fit-for-purpose query engines


In the realm of data management, the diverse nature of data workloads demands a flexible approach to query processing. watsonx.data offers multiple fit-for-purpose open query engines, such as Presto, Presto C++ and Spark, along with integration capabilities for data warehouse engines like Db2® and Netezza®. This flexibility allows data teams to choose the optimal tool for each task, enhancing both performance and cost-effectiveness.

For instance, Presto C++ can be used for high-performance, low-latency queries on large datasets, while Spark excels at complex, distributed data processing tasks. The integration with established data warehouse engines ensures compatibility with existing systems and workflows. 

This flexibility is especially valuable when dealing with diverse data types and volumes in modern businesses. By allowing organizations to optimize their data workloads, watsonx.data addresses the challenges of data that is rapidly proliferating across various environments.

5. Compliance and data governance in a hybrid world


With increasingly strict data regulations, hybrid cloud architectures offer significant advantages in maintaining compliance and robust data governance. A report by FINRA (Financial Industry Regulatory Authority) demonstrates that hybrid cloud solutions can help firms manage cybersecurity, data governance and business continuity more effectively than by using multiple separate cloud services. 

Unlike pure multicloud setups, which can complicate compliance efforts across different providers, hybrid cloud allows organizations to keep sensitive data on premises or in private clouds while using public cloud resources for less sensitive workloads. IBM watsonx.data enhances this approach with built-in data governance features, such as a single point of entry and robust access control. This approach supports varied deployment needs and restrictions, making it easier to implement consistent governance policies and meet industry-specific regulatory requirements without compromising on security.

Embracing hybrid cloud for future-ready data management


The adoption of hybrid cloud solutions marks a significant shift in enterprise data management. By offering a balance of flexibility, performance and control, solutions like IBM watsonx.data are enabling businesses to build more resilient, efficient and innovative data ecosystems. 

As data management continues to evolve, using hybrid cloud strategies will be crucial in shaping the future of enterprise data and analytics. With watsonx.data, organizations can confidently navigate this change, using advanced features to unlock the full potential of their data across hybrid environments and be future ready to embrace AI. 

Source: ibm.com

Saturday, 20 July 2024

10 tasks I wish AI could perform for financial planning and analysis professionals


It’s no secret that artificial intelligence (AI) is transforming the way we work in financial planning and analysis (FP&A). It is already happening to a degree, but we could easily dream of many more things that AI could do for us.

Most FP&A professionals are consumed with manual work that detracts from their ability to add value. This often leaves chief financial officers and business leaders frustrated with the return on investment from their FP&A team. However, AI can help FP&A professionals elevate the work they do.

Developments in AI have accelerated tremendously in the last few years, and FP&A professionals might not even know what is possible. It’s time to expand our thinking and consider how we could maximize the potential uses of AI.

As I dream up more ways that AI could help us, I have focused on practical tasks that FP&A professionals perform today. I also considered AI-driven workflows that are realistic to implement within the next year.

10 FP&A tasks for AI to perform


  1. Advanced financial forecasting: Enables continuous updates of forecasts in real time based on the latest data. Automatically generates multiple financial scenarios and simulates their impacts under different conditions. Uses advanced algorithms to predict revenue, expenses and cash flows with high accuracy.
  2. Automated reporting and visualization: Automatically generates and updates reports and dashboards by pulling data from multiple sources in real time. Provides contextual explanations and insights within reports to highlight key drivers and anomalies. Enables user-defined metrics and visualizations tailored to specific business needs.
  3. Natural language interaction: Enables users to interact with financial systems that use natural language queries and commands, allowing seamless data retrieval and analysis. Provides voice-based interfaces for hands-free operation and instant insights. Facilitates natural language generation to convert complex financial data into easily understandable narratives and summaries.
  4. Intelligent budgeting and planning: Adjusts budgets dynamically based on real-time performance and external factors. Automatically identifies and analyzes variances between actuals and budgets, providing explanations for deviations. Offers strategic recommendations based on financial data trends and projections.
  5. Advanced risk management: Uses AI-driven risk models to identify potential market, credit and operational risks. Develops early warning systems that alert to potential financial issues or deviations from planned performance. Helps ensure compliance with financial regulations through automated monitoring and reporting.
  6. Anomaly detection in forecasts: Improves forecasting accuracy by using advanced machine learning models that incorporate both historical data and real-time inputs. Automatically detects anomalies in financial data, providing alerts for unusual patterns or deviations from expected behavior. Offers detailed explanations and potential causes for detected anomalies to guide corrective actions. (A minimal sketch of this idea appears after this list.)
  7. Collaborative financial planning: Facilitates collaboration among FP&A teams and other departments through shared platforms and real-time data access. Enables natural language interactions with financial models and data. Implements AI-driven assistants to answer queries, perform tasks and support decision-making processes.
  8. Continuous learning and improvement: Develops machine learning models that continuously learn from new data and improve over time. Incorporates feedback mechanisms to refine forecasts and analyses based on actual outcomes. Captures historical data and insights for future decision-making.
  9. Strategic scenario planning: Analyzes market trends and competitive positioning to support strategic planning. Evaluates potential investments and their financial impacts by using AI-driven analysis. Optimizes asset and project portfolios based on AI-driven recommendations.
  10. Financial model explanations: Automatically generates clear, detailed explanations of financial models, including assumptions, calculations and potential impacts. Provides visualizations and scenario analyses to demonstrate how changes in inputs affect outcomes. Helps ensure transparency by enabling users to drill down into model components and understand the rationale behind projections and recommendations.
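
As one concrete example, the sketch below implements the simplest version of item 6: flagging months where revenue deviates sharply from its recent trend using a rolling z-score. The figures are made up.

```python
import pandas as pd

# Hypothetical sketch of anomaly detection on a monthly revenue series: flag
# months that deviate from the recent rolling mean by more than 2 standard deviations.
revenue = pd.Series(
    [100, 102, 101, 105, 104, 103, 160, 106, 107, 105, 108, 60],
    index=pd.period_range("2023-01", periods=12, freq="M"),
    name="revenue_k_usd",
)

rolling_mean = revenue.rolling(window=6, min_periods=3).mean()
rolling_std = revenue.rolling(window=6, min_periods=3).std()
z_score = (revenue - rolling_mean) / rolling_std

anomalies = revenue[z_score.abs() > 2]   # flag deviations beyond 2 standard deviations
print(anomalies)
```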

This is not a short wish list, but it should make us all excited about the future of FP&A. Today, FP&A professionals spend too much time on manual work in spreadsheets or dashboard updates. Implement these capabilities, and you’ll easily free up several days each month for value-adding work.

Drive the right strategic choices


Finally, use your newfound free time to realize the mission of FP&A to drive the right strategic choices in the company. How many companies have FP&A teams that facilitate the strategy process? I have yet to meet one.

However, with added AI capabilities, this could soon be a reality. Let’s elaborate on how some of the capabilities on the wish list can elevate our work to a strategic level.

  • Strategic scenario planning: How do you know what choices are available to make? It can easily become an endless desktop exercise that fails to produce useful insights. By using AI in analysis, you can get more done faster and challenge your thinking. This helps FP&A bring relevant choices and insights to the strategy table instead of just being a passive facilitator.
  • Advanced forecasting: How do you know whether you’re making the right strategic choice? The answer is simple: you don’t. However, you can improve the qualification of the choice. That’s where advanced forecasting comes in. By considering all available internal and external information, you can forecast the most likely outcomes of a choice. If the forecasts align with your strategic aspirations, it’s probably the right choice.
  • Collaborative planning: Many strategies fail to deliver the expected financial outcomes due to misalignment and silo-based thinking. Executing the right choices is challenging if the strategy wasn’t a collaborative effort or if its cascade was done in silos. Using collaborative planning, FP&A can facilitate cross-functional awareness about strategic progress and highlight areas needing attention.

If you’re unsure where to start, identify a concrete task today that aligns with any item on the wish list. Then, explore what tools are already available within your company to automate or augment the output using AI.

If no tools are available, you need to build the business case by aligning with your colleagues about the most pressing needs and presenting them to management.

Alternatively, you can try IBM Planning Analytics on your work for free. When these tools work for you, they can work for others too.

Don’t overthink the issue. Start implementing AI tools in your daily work today. It’s critical to use these as enablers to elevate the work we do in FP&A. Where will you start?

Source: ibm.com

Friday, 19 July 2024

Responsible AI is a competitive advantage


In the era of generative AI, the promise of the technology grows daily as organizations unlock its new possibilities. However, the true measure of AI’s advancement goes beyond its technical capabilities.

It’s about how technology is harnessed to reflect collective values and create a world where innovation benefits everyone, not just a privileged few. Prioritizing trust and safety while scaling artificial intelligence (AI) with governance is paramount to realizing the full benefits of this technology.

It is becoming clear that for many companies, the ability to use responsible AI as part of their business operations is key to remaining competitive. To do that, organizations need to develop an AI strategy that enables them to harness AI responsibly. That strategy should include:

  • Establishing a framework for governing data and AI across the business.
  • Integrating AI into workflows that offer the greatest business impact.
  • Deriving a competitive advantage and differentiation.
  • Attracting talent and customers.
  • Building and retaining shareholder and investor confidence.

To help grow the opportunities that AI offers, organizations should consider adopting an open strategy. Open ecosystems foster greater AI innovation and collaboration. They require companies to compete based on how well they create and deploy AI technology, and they enable everyone to explore, test, study and deploy AI. This cultivates a broader and more diverse pool of perspectives that contribute to the development of responsible AI models.

The IBM AI Ethics Board highlights the opportunities for responsible AI


The IBM AI Ethics Board recognizes the opportunities presented by AI while also establishing safeguards to mitigate against misuse. A responsible AI strategy is at the core of this approach:

The board’s white paper, “Foundation models: Opportunities, risks and mitigations,” illustrates that foundation models show substantial improvements in their ability to tackle challenging and intricate problems. Underpinned by AI and data governance, the benefits of foundation models can be realized responsibly, including increased productivity (expanding the areas where AI can be used in an enterprise), completion of tasks requiring different data types (such as natural language, text, image and audio), and reduced expenses by applying a trained foundation model to a new task (versus training a new AI model for the task).

Foundation models are generative, providing opportunities for AI to automate routine and tedious tasks within operational workflows, freeing users to allocate more time to creative and innovative work. An interactive version of the foundation model white paper is also available through IBM watsonx™ AI risk atlas.

In recognition of the possible productivity gains offered by AI, the board’s white paper on Augmenting Human Intelligence emphasizes that the effective integration of AI into existing work practices can enable AI-assisted workers to become more efficient and accurate, contributing to a company’s competitive differentiation.

By handling routine tasks, AI can help attract and retain talent by providing employees with opportunities to upskill into new and different career paths or to focus on more creative and complex tasks requiring critical thinking and subject matter expertise within their existing roles.

Earlier this year, the IBM AI Ethics Board highlighted that a human-centric approach to AI needs to advance AI’s capabilities while adopting ethical practices and addressing sustainability needs. AI creation requires vast amounts of energy and data. In 2023, IBM reported that 70.6% of its total electricity consumption came from renewable sources, including 74% of the electricity consumed by IBM data centers, which are integral to training and deploying AI models.

IBM is also focused on developing energy-efficient methods to train, tune and run AI models. IBM® Granite™ models are smaller and more efficient than larger models and therefore can have a smaller impact on the environment. As IBM infuses AI across applications, we are committed to meeting shareholders’, investors’ and other stakeholders’ growing expectations for the responsible use of AI, including considering the potential environmental impacts of AI.

AI presents an exciting opportunity to address some of society’s most pressing challenges. On this AI Appreciation Day, join the IBM AI Ethics Board in our commitment to the responsible development of this transformative technology.

Source: ibm.com

Tuesday, 16 July 2024

Global effort produces first-ever decline in harmful HCFC levels


As many of the world’s nations struggle to make sufficient progress on reducing carbon emissions, new research has emerged showing that global collaboration can in fact reverse some of the harmful effects of human activity. A study published in the journal Nature Climate Change documented the first significant drop in atmospheric levels of hydrochlorofluorocarbons (HCFCs), harmful gases known to deplete the planet’s ozone layer.


The study from researchers at the University of Bristol found a 1% drop in HCFC emissions between 2021 and 2023. While the drop-off might seem small, it marks the first time a decline in these compounds’ presence has ever been detected. Even better, the findings suggest that HCFC usage peaked in 2021, nearly five years ahead of schedule.

A brief history on HCFCs


HCFCs are human-made compounds containing hydrogen, chlorine, fluorine and carbon, and are commonly used in refrigerants, aerosol sprays, and packaging foam. They were used as a replacement for chlorofluorocarbons (CFCs), more commonly known as Freon.

CFCs were widely believed to be harmless—they are nontoxic, nonflammable and don’t have any unstable reactions with other common chemicals. But, in the 1970s, scientists Mario Molina and F. Sherwood Rowland managed to link the depletion of the ozone layer to the use of these chemical compounds.

That discovery was foundational to the Montreal Protocol, an international treaty signed by 198 countries seeking to phase out the use of substances that harm the ozone layer, the planet’s shield against ultraviolet radiation from the Sun. The agreement set forth a number of goals that would lead to the reduction and eventual elimination of ozone-depleting substances.

The first stage of the Montreal Protocol was the elimination of CFCs, and it proved to be wildly successful. A 2022 report from the United Nations found that nearly 99% of all CFCs had been phased out. The report estimates that ditching CFCs, which are also greenhouse gases that trap heat in the Earth’s atmosphere, avoided an increase of 0.5 to 1 degrees Fahrenheit in the planet’s temperature by 2100.

Promising results in the fight against ozone depletion


The success of the treaty now appears to be extending to HCFCs. The Freon replacement took off as a sort of harm-reduction strategy because it provided similar functionality to CFCs while doing less damage to the ozone layer. But, like CFCs before them, HCFCs are greenhouse gases and contribute to planetary warming. The Montreal Protocol mandated a ban on these compounds by 2020 for developed nations, and the latest study suggests the restrictions are working.

“The results are very encouraging. They underscore the great importance of establishing and sticking to international protocols,”  Dr. Luke Western, Marie Curie Research Fellow at the University of Bristol School of Chemistry and lead author on the paper, said in a statement. “Without the Montreal Protocol, this success would not have been possible, so it’s a resounding endorsement of multilateral commitments to combat stratospheric ozone depletion, with additional benefits in tackling human-induced climate change.”

The success of the Montreal Protocol isn’t just seen in the dwindling levels of harmful chemicals in the atmosphere, but can also be seen in the slow but steady decrease in the hole in the ozone layer. According to the UN, the ozone layer is expected to recover to 1980 levels by 2040 for most of the world. That would mark a return to health for the protective part of the stratosphere that would match levels before holes in the shield were first discovered.

As nations continue debating the best way to reduce carbon emissions and combat climate change, the Montreal Protocol offers a proof of concept for global cooperation. A concerted effort toward a common goal can make a difference.

Source: ibm.com

Saturday, 13 July 2024

IBM continues to support OpenSource AsyncAPI in breaking the boundaries of event driven architectures


IBM Event Automation’s event endpoint management capability makes it easy to describe and document your Kafka topics (event sources) according to the open source AsyncAPI Specification.

Why is this important? AsyncAPI already fuels clarity, standardization, interoperability, real-time responsiveness and beyond. Event endpoint management brings this to your ecosystem and helps you seamlessly manage the complexities of modern applications and systems.


The immense utility of Application Programming Interfaces (APIs) and API management is already widely recognized: they enable developers to collaborate effectively by helping them discover, access and build on existing solutions. As events are used for communication between applications, these same benefits can be delivered by formalizing event-based interfaces:

  • Describing events in a standardized way: Developers can quickly understand what they are and how to consume them
  • Event discovery: Interfaces can be added to catalogs, so they are advertised and searchable
  • Decentralized access: Self-service access with trackable use for interface owners
  • Lifecycle management: Interface versioning to ensure teams are not unexpectedly broken by changes

Becoming event driven has never been more important as customers demand immediate responsiveness and businesses need ways to quickly adapt to changing market dynamics. Thus, events need to be fully leveraged and spread across the organization in order for businesses to truly move with agility. This is where the value of event endpoint management becomes evident: event sources can be managed easily and consistently like APIs to securely reuse them across the enterprise; and then they can be discovered and utilized by any user across your teams.

One of the key benefits of event endpoint management is that it allows you to describe events in a standardized way according to the AsyncAPI specification. Its intuitive UI makes it easy to produce a valid AsyncAPI document for any Kafka cluster or system that adheres to the Apache Kafka protocol.
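
For orientation, the sketch below builds an illustrative AsyncAPI 3.0 document for a hypothetical Kafka order-events topic; the titles, host and payload fields are invented, and this is not output generated by event endpoint management.

```python
import json

# Illustrative fragment of an AsyncAPI 3.0 document describing a Kafka topic as an
# event source. All names, hosts and fields are hypothetical placeholders.
order_events_api = {
    "asyncapi": "3.0.0",
    "info": {"title": "Order events", "version": "1.0.0"},
    "servers": {
        "production": {"host": "kafka.example.internal:9092", "protocol": "kafka"}
    },
    "channels": {
        "orders": {
            "address": "orders.v1",
            "messages": {
                "orderCreated": {
                    "payload": {
                        "type": "object",
                        "properties": {
                            "orderId": {"type": "string"},
                            "total": {"type": "number"},
                        },
                    }
                }
            },
        }
    },
    "operations": {
        "receiveOrderCreated": {
            "action": "receive",
            "channel": {"$ref": "#/channels/orders"},
        }
    },
}

print(json.dumps(order_events_api, indent=2))
```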

Our implementation of AsyncAPI continues to broaden. Our latest event endpoint management release introduces the ability for client applications to write to an event source through the event gateway. This means an application developer can now produce to an event source that is published to the catalog, rather than just consume events. On top of this, we have provided controls such as schema enforcement to manage the kind of data a client can write to your topic.

Alongside providing self-service access to these event sources found in the catalog, we provide those finer grained approval controls. Access to the event sources is managed by the event gateway functionality: the event gateway handles the incoming requests from applications to access a topic, routing traffic securely between the Kafka cluster and the application.

Open innovation has rapidly become an engine of revenue growth and business performance. Organizations that embrace open innovation had a 59% higher rate of revenue growth compared to those that don’t, according to the IBM Institute for Business Value.

Since its inception, Event Endpoint Management has adopted and promoted AsyncAPI, the industry standard for defining asynchronous APIs. AsyncAPI version 3 was released in December 2023, and within a couple of weeks IBM supported generating v3 AsyncAPI documents in event endpoint management. Additionally, as part of giving back to the open-source community, IBM updated the open source AsyncAPI generator templates to support the latest version 3 updates.

Source: ibm.com

Thursday, 11 July 2024

IBM Rhapsody AUTOSAR extension streamlines complexity for accelerated innovation in the automotive industry


In the automotive industry, software development has become a critical component of innovation and competitiveness. As vehicles become more advanced and complex, the demand for sophisticated software continues to rise. To address this challenge, industry leaders IBM®, Siemens, SodiusWillert and BTC joined forces to develop a groundbreaking solution that simplifies and accelerates the automotive software development process. 

The AUTOSAR extension for IBM® Rhapsody® represents a collaborative effort to seamlessly integrate the AUTOSAR standard with the IBM Rhapsody model-driven development (MDD) tool. This collaboration enables a smooth transition from system architecture to E/E systems and software. Developers can focus on creating advanced and complex applications without compromising robustness and efficiency.  

Bridging the gap to advanced automotive solutions


As the automotive industry evolves, the complexity of software development grows. Developers must manage intricate standards such as AUTOSAR while also managing the demands of model-driven development. The AUTOSAR Extension for Rhapsody addresses these challenges and provides a unified environment that bridges the gap between modeling and implementation. 

The AUTOSAR Extension for Rhapsody not only addresses the complexities of automotive software development but also provides several key benefits that streamline the development process and foster innovation. Here are the main advantages of this integrated solution: 

◉ Simplifying complexity. Automotive software development is inherently complex, requiring adherence to industry standards such as AUTOSAR while managing the intricacies of model-driven development. By extending the capabilities of Rhapsody to support AUTOSAR, developers can streamline their workflow and reduce the complexities associated with integrating disparate tools and standards. 

The AUTOSAR Extension for Rhapsody provides a unified environment where developers can seamlessly shift between modeling and implementation phases. This integration eliminates the need for manual translation of models into code, reducing the risk of errors and accelerating time-to-market for new automotive solutions. 

◉ Accelerating innovation. Innovation is key to staying ahead of the competition in today’s fast-paced automotive industry. By using the combined expertise of IBM, Siemens, SodiusWillert and BTC, the AUTOSAR Extension for Rhapsody empowers developers to unleash their creativity and develop groundbreaking ideas. 

With a simplified and accelerated development process, developers can focus their efforts on designing advanced features and functionalities that meet the demands of modern vehicles. Whether it’s autonomous driving systems, connected car technologies or advanced driver assistance systems, the AUTOSAR Extension for Rhapsody provides the foundation for building innovative solutions that redefine the driving experience. 

◉ Enabling robustness and efficiency. With the AUTOSAR Extension for Rhapsody, developers can build software applications that meet the highest standards of reliability and efficiency, without sacrificing performance or scalability. 

By using the AUTOSAR standard, developers can design modular and reusable software components that promote code consistency and maintainability. Integration with the MDD capabilities of Rhapsody also enables thorough validation and verification throughout the development lifecycle, helping software meet the stringent requirements of the automotive industry. 

The joint effort between IBM, Siemens, SodiusWillert and BTC has yielded a transformative solution for automotive software development. The AUTOSAR Extension for Rhapsody simplifies the development process, speeds up innovation and supports the creation of reliable and efficient software applications.  

As the automotive industry continues to evolve, the need for advanced software solutions will grow. By embracing the AUTOSAR Extension for Rhapsody, developers can manage the complexities of automotive software development with confidence, unleashing their creativity and driving the future of mobility forward.

Source: ibm.com

Tuesday, 9 July 2024

Why an SBOM should be at the center of your application management strategy


The concept of a Software Bill of Materials (SBOM) was originally focused on supply chain security or supply chain risk management. The idea was that if you know how all the different tools and components of your application are connected, you can minimize the risk associated with any component if it becomes compromised. SBOMs have become a staple of most security teams because they offer a quick way to trace the “blast radius” of a compromised piece of an application.
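
The sketch below shows that idea in miniature: given CycloneDX-style dependency entries, it walks the inverted dependency graph to list every component affected when one piece is compromised. The component names are hypothetical.

```python
# Sketch of the "blast radius" idea: from a CycloneDX-style dependency list,
# find every component that directly or transitively depends on a compromised one.

from collections import defaultdict, deque

dependencies = [  # CycloneDX "dependencies": each ref dependsOn other refs
    {"ref": "web-frontend", "dependsOn": ["auth-service", "catalog-service"]},
    {"ref": "catalog-service", "dependsOn": ["search-lib", "postgres-driver"]},
    {"ref": "auth-service", "dependsOn": ["jwt-lib"]},
    {"ref": "search-lib", "dependsOn": []},
]

# Invert the graph: for each component, which components depend on it?
dependents = defaultdict(set)
for entry in dependencies:
    for dep in entry["dependsOn"]:
        dependents[dep].add(entry["ref"])

def blast_radius(compromised: str) -> set[str]:
    """All components reachable by walking 'is depended on by' edges upward."""
    seen, queue = set(), deque([compromised])
    while queue:
        current = queue.popleft()
        for parent in dependents[current] - seen:
            seen.add(parent)
            queue.append(parent)
    return seen

print(blast_radius("search-lib"))  # -> {'catalog-service', 'web-frontend'}
```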

Yet the value of an SBOM goes well beyond application security. If you know how an application is put together (all the connections and dependencies that exist between components), then you can also use that perspective to improve how an application operates. Think of it as the reverse of the security use case. Instead of cutting off a compromised application component to avoid downstream impacts, you’re optimizing a component so downstream systems will benefit.

The role of SBOMs in Application Management


In this sense, SBOMs fill a critical gap in the discipline of application management. Most application teams use many different single-use tools to manage specific aspects of application operations and performance. Yet it’s easy to lose the broader strategic perspective of an application in the silos that those toolsets create. 

That loss of perspective is particularly concerning given the proliferation of application tools and the huge amount of data they create every day. All the widgets that optimize, monitor and report on applications can become so noisy that an application owner can simply drown in all that data.  All that data exists for a reason: someone thought it needed to be measured. But it’s only useful if it contributes to a broader application strategy.

An SBOM provides a more strategic view that can help application owners prioritize and analyze all the information they’re seeing from scattered toolsets and operating environments. It gives you a sense of the whole application, in all its glorious complexity and interconnectedness. That strategic view is a critical foundation for any application owner, because it places the data and dashboards created by siloed toolsets in context. It gives you a sense of what application tooling does and, more importantly, does not know.

SBOM maps of application dependencies and data flows can also point out observability gaps. Those gaps might be in operational components, which aren’t collecting the data that you need to gauge their performance. They could also be gaps between siloed data sources that require some way to provide context on how they interact.

SBOMs in action with IBM Concert


SBOMs play a key role in IBM Concert, a new application management tool which uses AI to contextualize and prioritize the information that flows through siloed application toolsets and operating environments. Uploading an SBOM is the easiest way to get started with IBM Concert, opening the door to a 360° view of your application.

IBM Concert uses SBOMs first to define the contours of an application. Associating data flows and operational elements with a particular application can be tricky, especially when you’re dealing with an application that spans on-prem and cloud environments with interconnected data flows. An SBOM draws a definitive barrier around an application, so IBM Concert can focus on the data sets that matter.

SBOMs also give IBM Concert a handy overview of how different data elements within an application are related to one another. By defining those connections and dependencies in advance, IBM Concert can then focus on analyzing data flows across that architecture instead of trying to generate a theory of how an application operates from scratch.

SBOMs also assist IBM Concert by providing a standardized data format which identifies relevant data sources. While the “language” of every application may be different, SBOMs serve as a type of translation layer, which helps to differentiate risk data from network data, cost information from security information. With these guardrails in place, IBM Concert has a reference point to start its analysis.

Your next step: SBOMs as a source of truth


Since SBOMs are a staple of security and compliance teams, it’s likely that your application already has this information ready for use. It’s simply a matter of making sure your SBOM is up to date and then repurposing that information by uploading it into IBM Concert. Even this simple step will pave the way for valuable strategic insights into your application.

Source: ibm.com

Monday, 8 July 2024

Re-evaluating data management in the generative AI age


Generative AI has altered the tech industry by introducing new data risks, such as sensitive data leakage through large language models (LLMs), and by driving an increase in requirements from regulatory bodies and governments. To navigate this environment successfully, it is important for organizations to revisit the core principles of data management and to ensure that they are using a sound approach to augmenting large language models with enterprise (non-public) data.

A good place to start is refreshing the way organizations govern data, particularly as it pertains to its usage in generative AI solutions. For example:

◉ Validating and creating data protection capabilities: Data platforms must be prepped for higher levels of protection and monitoring. This requires traditional capabilities like encryption, anonymization and tokenization, but also creating capabilities to automatically classify data (sensitivity, taxonomy alignment) by using machine learning. Data discovery and cataloging tools can assist but should be augmented to make the classification specific to the organization’s understanding of its own data. This allows organizations to effectively apply new policies and bridge the gap between conceptual understandings of data and the reality of how data solutions have been implemented. (A toy sketch of such ML-assisted classification appears after this list.)

◉ Improving controls, auditability and oversight: Data access, usage and third-party engagement with enterprise data require new designs built on existing solutions. Existing access controls, for example, capture only a portion of the requirements needed to ensure authorized usage of the data; firms also need complete audit trails and monitoring systems to track how data is used, when data is modified, and whether data is shared through third-party interactions for both gen AI and non-gen AI solutions. It is no longer sufficient to control data by restricting access to it; we should also track the use cases for which data is accessed and applied within analytical and operational solutions. Automated alerts and reporting of improper access and usage (measured by query analysis, data exfiltration and network movement) should be developed by infrastructure and data governance teams and reviewed regularly to proactively ensure compliance.

◉ Preparing data for gen AI: This is a departure from traditional data management patterns and skills, one that requires new discipline to ensure the quality, accuracy and relevance of data used to train and augment language models. With vector databases becoming commonplace in the gen AI domain, data governance must be extended to these non-traditional data management platforms so that the same governance practices apply to the new architectural components. Data lineage becomes even more important as regulatory bodies increasingly require explainability in models.
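
As a simple illustration of the last point, the sketch below chunks a document, attaches an embedding to each chunk, and records lineage metadata (source, offset, timestamp) next to every vector so that any retrieved chunk can later be traced back to its origin. The in-memory list and the deterministic stand-in embedding are assumptions for the example; a real pipeline would call an embedding model and write to a governed vector database.

    import hashlib
    from datetime import datetime, timezone

    def fake_embedding(text):
        # Stand-in for a real embedding model call; returns a small deterministic vector.
        digest = hashlib.sha256(text.encode()).digest()
        return [b / 255 for b in digest[:8]]

    def ingest_with_lineage(doc_id, source_uri, text, store, chunk_size=500):
        """Chunk a document and record lineage metadata alongside each vector."""
        for i in range(0, len(text), chunk_size):
            chunk = text[i:i + chunk_size]
            store.append({
                "vector": fake_embedding(chunk),
                "text": chunk,
                # Lineage fields make every retrieved chunk traceable to its source.
                "lineage": {
                    "document_id": doc_id,
                    "source_uri": source_uri,
                    "chunk_offset": i,
                    "ingested_at": datetime.now(timezone.utc).isoformat(),
                },
            })

    store = []
    ingest_with_lineage("policy-001", "s3://corp-docs/policy-001.txt",
                        "Example policy text ..." * 50, store)
    print(len(store), "chunks stored with lineage metadata")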

Enterprise data is often complex, diverse and scattered across various repositories, making it difficult to integrate into gen AI solutions. This complexity is compounded by the need to ensure regulatory compliance, mitigate risk, and address skill gaps in data integration and retrieval-augmented generation (RAG) patterns. Moreover, data is often an afterthought in the design and deployment of gen AI solutions, leading to inefficiencies and inconsistencies.

Unlocking the full potential of enterprise data for generative AI


At IBM, we have developed an approach to solving these data challenges: the IBM gen AI data ingestion factory, a managed service designed to address AI’s “data problem” and unlock the full potential of enterprise data for gen AI. Its predefined architecture and code blueprints, deployable as a managed service, simplify and accelerate the integration of enterprise data into gen AI solutions. We approach this problem with data management in mind, preparing data for governance, risk and compliance from the outset.

Our core capabilities include:

◉ Scalable data ingestion: Re-usable services to scale data ingestion and RAG across gen AI use cases and solutions, with optimized chunking and embedding patterns.
◉ Regulatory compliance: Data is prepared for gen AI usage in line with current and emerging regulations, helping companies meet compliance requirements for market regulations focused on generative AI.
◉ Data privacy management: Long-form text can be anonymized as it is discovered, reducing risk and ensuring data privacy.
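
As a minimal sketch of the anonymization idea in the last bullet, the example below redacts a few common identifier patterns from long-form text with typed placeholders. The regular expressions are illustrative assumptions, not the service’s actual mechanism; a production capability would combine rules like these with ML-based entity detection and organization-specific classifications.

    import re

    # Simple, illustrative patterns only; real anonymization would also use
    # ML-based entity detection and the organization's own data classifications.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def anonymize(text):
        """Replace detected identifiers with typed placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(anonymize("Contact Jane at jane.doe@example.com or 555-867-5309."))
    # prints: Contact Jane at [EMAIL] or [PHONE].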

The service is AI and data platform agnostic, allowing for deployment anywhere, and it offers customization to client environments and use cases. By using the IBM gen AI data ingestion factory, enterprises can achieve several key outcomes, including:

◉ Reducing time spent on data integration: A managed service that reduces the time and effort required to solve for AI’s “data problem”. For example, using a repeatable process for “chunking” and “embedding” data so that it does not require development efforts for each new gen AI use case.
◉ Compliant data usage: Helping to comply with data usage regulations focused on gen AI applications deployed by the enterprise. For example, ensuring data that is sourced in RAG patterns is approved for enterprise usage in gen AI solutions.
◉ Mitigating risk: Reducing risk associated with data used in gen AI solutions. For example, providing transparent results into what data was sourced to produce an output from a model reduces model risk and time spent proving to regulators how information was sourced.
◉ Consistent and reproducible results: Delivering consistent and reproducible results from LLMs and gen AI solutions. For example, capturing lineage and comparing outputs (that is, data generated) over time to report on consistency through standard metrics such as ROUGE and BLEU.
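
To illustrate the consistency point above, the sketch below compares a baseline model output with a later rerun using a simplified unigram-overlap score in the spirit of ROUGE-1. Dedicated libraries implement the full ROUGE and BLEU metrics; this hand-rolled version only shows how outputs captured over time could be scored and reported.

    from collections import Counter

    def rouge1_f1(reference, candidate):
        """Simplified unigram-overlap (ROUGE-1 style) F1 between two texts."""
        ref_tokens = Counter(reference.lower().split())
        cand_tokens = Counter(candidate.lower().split())
        overlap = sum((ref_tokens & cand_tokens).values())
        if overlap == 0:
            return 0.0
        precision = overlap / sum(cand_tokens.values())
        recall = overlap / sum(ref_tokens.values())
        return 2 * precision * recall / (precision + recall)

    baseline = "The invoice was approved and payment is scheduled for Friday."
    rerun = "The invoice was approved; payment is scheduled Friday."
    print(f"Consistency score between runs: {rouge1_f1(baseline, rerun):.2f}")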

Navigating the complexities of data risk requires cross-functional expertise. Our team of former regulators, industry leaders and technology experts at IBM Consulting is uniquely positioned to address this with our consulting services and solutions.

Source: ibm.com

Saturday, 6 July 2024

Putting AI to work in finance: Using generative AI for transformational change

Putting AI to work in finance: Using generative AI for transformational change

Finance leaders are no strangers to the complexities and challenges that come with driving business growth. From navigating the intricacies of enterprise-wide digitization to adapting to shifting customer spending habits, the responsibilities of a CFO have never been more multifaceted.

Amidst this complexity lies an opportunity. CFOs can harness the transformative power of generative AI (gen AI) to revolutionize finance operations and unlock new levels of efficiency, accuracy and insights.

Generative AI is a game-changing technology that promises to reshape the finance industry as we know it. By using advanced language models and machine learning algorithms, gen AI can automate and streamline a wide range of finance processes, from financial analysis and reporting to procurement and accounts payable.

Realizing the staggering benefits of adopting gen AI in finance


According to research by IBM, organizations that have effectively implemented AI in finance operations have experienced the following benefits:

  • 33% faster budget cycle time
  • 43% reduction in uncollectible balances
  • 25% lower cost per invoice paid

However, to successfully integrate gen AI into finance operations, it’s essential to take a strategic and well-planned approach. AI and gen AI initiatives can only be as successful as the underlying data permits. Enterprises often undertake various data initiatives to support their AI strategy, ranging from process mining to data governance.

After the right data initiatives are in place, you’ll want to build the right structure to successfully integrate gen AI into finance operations. This can be achieved by defining a clear business case articulating benefits and risks, securing necessary funding, and establishing measurable metrics to track ROI.

Next, automate labor-intensive tasks by identifying and targeting tasks that are ripe for gen AI automation, starting with risk-mitigation use cases and encouraging employee adoption aligned with real-world responsibilities.

You’ll also want to use gen AI to fine-tune FinOps by implementing cost estimation and tracking frameworks, simulating financial data and scenarios, and improving the accuracy of financial models, risk management, and strategic decision-making.
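
As one hedged example of the scenario-simulation idea, the sketch below runs a simple Monte Carlo projection of quarterly cost growth and reports the median and 90th-percentile outcomes. The growth parameters and cost figure are illustrative assumptions, not IBM benchmarks or a prescribed FinOps model.

    import random

    def simulate_quarterly_cost(base_cost, growth_mean=0.02, growth_sd=0.05,
                                quarters=4, runs=10_000):
        """Monte Carlo estimate of cost after several quarters of uncertain growth."""
        outcomes = []
        for _ in range(runs):
            cost = base_cost
            for _ in range(quarters):
                cost *= 1 + random.gauss(growth_mean, growth_sd)
            outcomes.append(cost)
        outcomes.sort()
        return {
            "median": outcomes[runs // 2],
            "p90": outcomes[int(runs * 0.9)],
        }

    result = simulate_quarterly_cost(base_cost=1_000_000)  # illustrative figure
    print(f"Median projected cost: {result['median']:,.0f}")
    print(f"90th percentile cost:  {result['p90']:,.0f}")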

Prioritizing responsibility with trusted partners


As finance leaders navigate the gen AI landscape, it’s crucial to prioritize responsible and ethical AI practices. Data lineage, security and privacy are paramount concerns that CFOs must address proactively.

By partnering with trusted organizations like IBM, which adheres to stringent Principles for Trust and Transparency and Pillars of Trust, finance teams can ensure that their gen AI initiatives are built on a foundation of integrity, transparency, and accountability.


Source: ibm.com

Friday, 5 July 2024

Experience unmatched data resilience with IBM Storage Defender and IBM Storage FlashSystem

Experience unmatched data resilience with IBM Storage Defender and IBM Storage FlashSystem

IBM Storage Defender is a purpose-built, end-to-end data resilience solution designed to help businesses rapidly restart essential operations after a cyberattack or other unforeseen event. It simplifies and orchestrates business recovery processes by providing a comprehensive view of data resilience and recoverability across primary and auxiliary storage in a single interface.

IBM Storage Defender deploys AI-powered sensors to quickly detect threats and anomalies. Signals from all available sensors are aggregated by IBM Storage Defender, whether they come from hardware (IBM FlashSystem FlashCore Modules) or software (file system or backup-based detection).

IBM Storage FlashSystem with FlashCore Module 4 (FCM4) can identify threats in real time because detection is built into the hardware, which collects and analyzes statistics for every read and write operation without any performance impact. IBM Storage Defender and IBM Storage FlashSystem work together seamlessly in a multilayered strategy that can drastically reduce the time needed to detect a ransomware attack.

The FlashCore Module reports potential threat activity to IBM Storage Insights Pro, which analyzes the data and alerts IBM Storage Defender about suspicious behavior on the managed IBM Storage FlashSystem arrays. With the information received, IBM Storage Defender proactively opens a case. All open cases are presented in a comprehensive “Open case” screen, which provides detailed information about the type of anomaly, the time and date of the event, the affected virtual machines and the impacted storage resources. To streamline data recovery, IBM Storage Defender provides recommended actions and built-in automation to further accelerate the return of vital operations to their normal state.


IBM Storage FlashSystem also offers protection through immutable copies of data known as Safeguarded Copies, which are isolated from production environments and cannot be modified or deleted. IBM Storage Defender can recover workloads directly from the most recent trusted Safeguarded Copy to significantly reduce the time needed to resume critical business operations, because data transfer is performed through the SAN (FC or iSCSI) rather than over the network. In addition, workloads can be restored in an isolated “Clean Room” environment to be analyzed and validated before being recovered to production systems. This verification lets you know with certainty that the data is clean and that business operations can be safely reestablished.


When a potential threat is detected, IBM Storage Defender correlates the alert to the specific IBM Storage FlashSystem volume associated with the virtual machine under attack and proactively takes a Safeguarded Copy, creating a protected backup of the affected volume for offline investigation and follow-up recovery operations. When time is crucial, this rapid, automatic action can significantly reduce the gap between receiving the alert, containing the attack and completing recovery.
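
A rough sketch of that automated sequence is shown below: correlate the alert to the backing volume, take a protective copy, and open a case for operators. Every class, function and field name here is an illustrative stand-in, not the IBM Storage Defender or IBM Storage FlashSystem API.

    # Illustrative stand-ins only; the point is the sequence of automated steps.

    class Inventory:
        def __init__(self, vm_to_volume):
            self.vm_to_volume = vm_to_volume
        def volume_for_vm(self, vm_id):
            return self.vm_to_volume[vm_id]

    class Storage:
        def create_safeguarded_copy(self, volume):
            # Stand-in for taking an immutable point-in-time copy of the volume.
            return f"sgc-{volume}-0001"

    def handle_threat_alert(alert, inventory, storage):
        """Correlate an alert to its volume, protect it, then open a case."""
        volume = inventory.volume_for_vm(alert["vm_id"])    # correlate VM -> volume
        copy_id = storage.create_safeguarded_copy(volume)   # protect affected data
        case = {                                             # record for operators
            "anomaly_type": alert["anomaly_type"],
            "detected_at": alert["timestamp"],
            "affected_vm": alert["vm_id"],
            "affected_volume": volume,
            "safeguarded_copy": copy_id,
        }
        print("Case opened:", case)

    alert = {"vm_id": "vm-42", "anomaly_type": "ransomware-pattern",
             "timestamp": "2024-07-05T10:15:00Z"}
    handle_threat_alert(alert, Inventory({"vm-42": "vol-007"}), Storage())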


Ensuring business continuity is essential to building operational resilience and trust. IBM Storage Defender and IBM Storage FlashSystem can be seamlessly integrated to achieve this goal, combining complementary capabilities into a robust data resilience strategy across primary and auxiliary storage. Working together, IBM Storage Defender and IBM Storage FlashSystem effectively combat cyberattacks and other unforeseen threats.

Source: ibm.com

Thursday, 4 July 2024

Blueprint for 15%+ higher wrench time: Superior integration of EAM, mobility and procurement in operations and maintenance

Blueprint for 15%+ higher wrench time: Superior integration of EAM, mobility and procurement in operations and maintenance

In conversations, senior leaders who have worked for organizations ranging from global aerospace companies to Fortune 100 manufacturers shared firsthand how inadequate asset management and cumbersome procurement processes led to significant inefficiencies and hindered organizational performance.

One story, from peak production, involved a computer numerical control (CNC) machine failure that disrupted manufacturing (not dissimilar in impact from a telecom base station or utility transformer failure). Diagnosing the problem on site and digging through paper maintenance logs took far too long. Obtaining a replacement part involved manual entries and approvals in a separate procurement system, further dragging out the process.

Recalling situations like that, these leaders wish they had had today’s mobile enterprise asset management (EAM) solutions and best practices for procurement integration.

Here’s the blueprint that smart manufacturing, telecom, utility, and government contracting leaders use to increase wrench time by 15%+ for operations, maintenance, and facilities techs:

Embrace real-time technician mobility and asset data accessibility


  • Digital field processes with mobility: Eliminate inefficient steps, travel, dual entry and wait time associated with paper-based processes.
  • Immediate information: Technicians can access maintenance histories, manuals and schematics directly on mobile devices, slashing downtime and expediting responses.
  • Seamless integration: Real-time procurement data and status updates are integrated into EAM systems, ensuring that crucial information is available at all times.

Integrate EAM with procurement processes


  • Automated triggers: When a machine fails, an integrated EAM automatically initiates the procurement process for needed parts, eliminating delays caused by manual processes.
  • Efficiency in procurement: The procurement process is fully automated, from requisition to order placement, reducing time and administrative burdens.
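
The sketch below illustrates the requisition-to-order idea: a failure work order automatically raises purchase requisitions for the parts it needs, based on predefined reorder rules. All function and field names are hypothetical; an actual integration would use the EAM and procurement platforms’ own interfaces (for example, IBM Maximo and Varis).

    # Illustrative sketch only; names are hypothetical, not actual product APIs.

    REORDER_PARTS = {"CNC-SPINDLE-BEARING": {"supplier": "ACME Industrial", "qty": 1}}

    def on_asset_failure(work_order):
        """When a failure work order is created, raise requisitions for its parts."""
        requisitions = []
        for part in work_order["required_parts"]:
            rule = REORDER_PARTS.get(part)
            if rule is None:
                continue  # Unknown part: leave it for a buyer to source manually.
            requisitions.append({
                "work_order": work_order["id"],
                "part": part,
                "quantity": rule["qty"],
                "supplier": rule["supplier"],
                "status": "submitted",
            })
        return requisitions

    wo = {"id": "WO-1001", "asset": "CNC-07", "required_parts": ["CNC-SPINDLE-BEARING"]}
    print(on_asset_failure(wo))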

Implement advanced workflow automation


  • Streamlined issue logging: Automating the logging of issues, alerting technicians and tracking resolutions within EAM ensures that no steps are missed and progress is continuous.
  • Faster procurement: Automated workflows reduce manual entry errors and speed up the parts ordering process, crucial during unexpected downtimes.

Leverage decision support through data


  • Informed decisions: EAM systems provide analytics and historical data on machine performance, aiding technicians in making quick, informed decisions.
  • Simplified ordering: Detailed parts diagrams and itemized pick lists help technicians quickly identify and order necessary parts.

Enhance support with remote diagnostics


  • Remote assistance: Modern EAM includes remote diagnostics and augmented reality (AR) support to guide technicians through complex repairs remotely.
  • Efficient delivery: Procurement practices must ensure quick and efficient delivery of required parts by aligning closely with suppliers.

Blueprint in action


With 1,400 employees across 25 service locations, Skookum Contract Services faced inefficiencies in its work, asset and parts procurement processes, resulting in over 3,000 hours of lost productivity annually at one location alone. By adopting best practices and processes for seamless integration between its asset management and procurement functions, Skookum transformed its operations.

  • Implemented a digital workflow through EAM and mobility that streamlined and digitized asset, work and logistics processes, improving tech wrench time by 15%.
  • Created an integrated process and technology solution that led to a paperless work order environment.
  • Streamlined part searches and procurement workflows, reducing the number of steps from 15 to a single seamless digital workflow.
  • Enabled technicians to quickly and easily access parts using mobile devices.

The result: Skookum maintained on-time, on-budget project delivery, enhancing overall efficiency and productivity. It achieved these results by integrating advanced procurement and mobile EAM technologies.

Technology enablers: IBM Maximo Mobile and Varis purchasing platform


Skookum transformed by integrating Varis and IBM Maximo. Varis streamlined procurement with a user-friendly digital marketplace. IBM Maximo’s mobile features offered real-time access to asset data, maintenance histories and remote diagnostics. This integration allowed technicians to manage and procure parts efficiently, reducing downtime and boosting productivity.

Integrating modern EAM solutions with advanced asset, work and procurement processes significantly boosts wrench time and reduces technician downtime. Embracing real-time asset data, automating procurement, streamlining workflows, leveraging data-driven support and using remote diagnostics lead to notable efficiency gains. Skookum Contract Services exemplifies these benefits, showing how tools like Varis and IBM Maximo drive operational improvements, ensuring projects finish on time and within budget.

Source: ibm.com