Thursday 30 November 2023

How blockchain enables trust in water trading


Australia pioneered water rights trading in the early 1900s, becoming a world leader in water sharing between valleys. The initiative extended across the Australian states that share the Murray-Darling Basin (MDB). However, findings from the inquiry into the MDB’s water markets, completed by the Australian Competition and Consumer Commission (ACCC) and the Department of Climate Change, Energy, the Environment and Water (DCCEEW), highlighted many challenges with the system.

These challenges include a mix of paper-based and digital processing, slow transaction times, ambiguity and a loss of trust in the market. They have manifested in various ways, including reduced water quality due to shortages and reduced environmental flows. The impact was most vividly reported in the Australian media through images of fish kills.

Forecasts indicate that water scarcity will present a greater challenge in meeting long-term sustainability goals as the climate continues to change. While there is no silver bullet to solve the challenges of the water market, blockchain technology has the capacity to partially solve these challenges by increasing trust, transparency of trading and validation of market participants.

A path forward with a capable partnership


The ACCC recommended many changes to improve the water trading market, including using ‘distributed ledger technology’ and a ‘backbone platform’ for water trading. Arup and IBM saw these recommendations as an opportunity to take a leadership role and propose the implementation of a blockchain-enabled water trading platform.

Blockchain is a shared, immutable ledger that facilitates the process of recording transactions and tracking tangible or intangible assets in a business network. Virtually anything of value can be tracked and traded on a blockchain network, reducing risk and cutting costs for all involved. As each transaction occurs, it’s put into a block. Each block is connected to the one before and after it. Blocks of transactions are linked together, creating an irreversible chain.
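That chained structure can be illustrated with a minimal, hypothetical Java sketch. This is not the trading platform proposed here; the class, fields and single-string payload are only meant to show how one block references the previous block’s hash, and real blockchain platforms add consensus, signatures and much more.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// A minimal, hypothetical sketch of how blocks link through hashes.
public class Block {
    final String previousHash; // hash of the preceding block
    final String payload;      // e.g. a recorded water trade
    final String hash;         // hash of this block's contents

    Block(String previousHash, String payload) throws Exception {
        this.previousHash = previousHash;
        this.payload = payload;
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest((previousHash + payload).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        this.hash = hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Block genesis = new Block("0", "allocation registered");
        Block trade = new Block(genesis.hash, "100 ML transferred from seller A to buyer B");
        // Tampering with the first block would change its hash and break the link,
        // which is why the chain is effectively irreversible.
        System.out.println(trade.previousHash.equals(genesis.hash)); // prints true
    }
}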

Equigy, IBM Food Trust and Plastic Bank are instances of how blockchain can be successfully applied to tackle global societal problems across different industries, while ensuring trust and accountability of transactions.

Application of blockchain to water trading markets


A blockchain-enabled trading platform solves issues of transparency, incomplete information and lost opportunities. Where there is a lack of transparency of information, a distributed ledger shows all trades, allowing market participants to audit the blockchain at their convenience. A blockchain contains all the information (for all to access), which eliminates the issue of incomplete information. And blockchain only works with the buy-in and unanimous agreement of all participants, which creates trust.

Participants in the market enquiries conducted during the ACCC’s investigation acknowledged that the complexity of the market and the prevalence of incomplete (and often inaccessible) information led to inefficiency, lack of transparency and lost opportunities. The findings centered on four areas of focus:

1. Integrity and transparency
2. Data and systems
3. Market architecture
4. Governance.

These areas aim to ensure water markets have integrity safeguards and participants have the information they need to make informed trading decisions.

Although the methodology that Arup and IBM proposed does not address all the recommendations made in the ACCC’s and DCCEEW’s reports, it does provide a way to action several of them and push the water trading market in the right direction. In response to these reports, Arup and IBM have focused on what is achievable without the need for new policy to be written and agreed upon. Instead, the team focused on recommendations from the reports, including improving market architecture, trade processes and information.

Next steps


The Arup and IBM team sees this proposed methodology as a viable solution that should be tested in the real water trading market. In the same way that different states and territories across Australia trialled different COVID tracing measures (ultimately adopting the most successful approach), we see this as one methodology to trial.

The complexities of inter-valley trading (where a trade occurs across state borders) would provide the most robust test. Navigating different state departments, differing nomenclature and varying approaches makes it a key sandbox for establishing a viable trading platform. The team has proposed inter-valley trading locations where borders are shared between New South Wales and Victoria, or Victoria and South Australia.

Achieving more from less is where digital technologies play a pivotal role, and we believe that a blockchain-enabled water trading platform for the Australian water markets will allow market participants to access sufficient supply, even as climate change forecasts point to less of it. A discussion paper on how blockchain technology can improve accountability and transparency within the Murray-Darling Basin can be downloaded below.

Source: ibm.com

Thursday 23 November 2023

Level up your Kafka applications with schemas


Apache Kafka is a well-known open-source event store and stream processing platform and has grown to become the de facto standard for data streaming. In this article, developer Michael Burgess provides an insight into the concept of schemas and schema management as a way to add value to your event-driven applications on the fully managed Kafka service, IBM Event Streams on IBM Cloud.

What is a schema?


A schema describes the structure of data.

For example:

A simple Java class modelling an order of some product from an online store might start with fields like:

public class Order {
    private String productName;
    private String productCode;
    private int quantity;
    […]
}

If order objects were being created using this class, and sent to a topic in Kafka, we could describe the structure of those records using a schema such as this Avro schema:

{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "productName", "type": "string"},
    {"name": "productCode", "type": "string"},
    {"name": "quantity", "type": "int"}
  ]
}

Why should you use a schema?


Apache Kafka transfers data without validating the information in the messages. It does not have any visibility of what kind of data is being sent and received, or what data types the messages might contain. Kafka does not examine the metadata of your messages.

One of the functions of Kafka is to decouple consuming and producing applications, so that they communicate via a Kafka topic rather than directly. This allows them to each work at their own speed, but they still need to agree upon the same data structure; otherwise, the consuming applications have no way to deserialize the data they receive back into something with meaning. The applications all need to share the same assumptions about the structure of the data.

In the scope of Kafka, a schema describes the structure of the data in a message. It defines the fields that need to be present in each message and the types of each field.

This means a schema forms a well-defined contract between a producing application and a consuming application, allowing consuming applications to parse and interpret the data in the messages they receive correctly.
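As a rough sketch of what that contract looks like in code, the Order schema above can be used with Apache Avro’s generic API to serialize a record before it is sent to a topic. The Kafka produce call itself is omitted, and the Apache Avro library is assumed to be on the classpath.

import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

// Sketch only: serializes an Order record using the schema shown above.
public class OrderSerializationSketch {
    public static void main(String[] args) throws Exception {
        String schemaJson = "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
                + "{\"name\":\"productName\",\"type\":\"string\"},"
                + "{\"name\":\"productCode\",\"type\":\"string\"},"
                + "{\"name\":\"quantity\",\"type\":\"int\"}]}";
        Schema schema = new Schema.Parser().parse(schemaJson);

        // The producer side of the contract: build a record that conforms to the schema.
        GenericRecord order = new GenericData.Record(schema);
        order.put("productName", "Widget");
        order.put("productCode", "W-100");
        order.put("quantity", 3);

        // Serialize to Avro binary; a consumer needs the same (or a compatible)
        // schema to turn these bytes back into something meaningful.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(order, encoder);
        encoder.flush();
        System.out.println("Serialized order to " + out.size() + " bytes");
    }
}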

What is a schema registry?


A schema registry supports your Kafka cluster by providing a repository for managing and validating schemas within that cluster. It acts as a database for storing your schemas and provides an interface for managing the schema lifecycle and retrieving schemas. A schema registry also validates evolution of schemas.
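In practice, producing and consuming applications are typically pointed at a registry through configuration rather than code. The sketch below shows the general shape of such producer configuration; the value serializer class and the registry property names are placeholders that differ between registry implementations (including Event Streams), so treat every value here as illustrative rather than as the real settings.

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

// Illustrative only: the value serializer class and registry properties below
// are placeholders; consult your schema registry's documentation for the real names.
public class RegistryConfigSketch {
    public static Properties producerProperties() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-0.example.com:9093");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // A registry-aware serializer looks up (or registers) the schema when a record is sent.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "com.example.registry.AvroSerializer"); // placeholder class name
        props.put("schema.registry.url", "https://registry.example.com"); // placeholder property
        return props;
    }
}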

Optimize your Kafka environment by using a schema registry.


A schema registry is essentially an agreement on the structure of your data within your Kafka environment. By having a consistent store of the data formats in your applications, you avoid common mistakes that can occur when building applications, such as poor data quality and inconsistencies between your producing and consuming applications that may eventually lead to data corruption. Having a well-managed schema registry is not just a technical necessity; it also contributes to the strategic goal of treating data as a valuable product and helps tremendously on your data-as-a-product journey.

Using a schema registry increases the quality of your data and ensures data remains consistent by enforcing rules for schema evolution. So as well as ensuring data consistency between produced and consumed messages, a schema registry ensures that your messages will remain compatible as schema versions change over time. Over the lifetime of a business, it is very likely that the format of the messages exchanged by the applications supporting the business will need to change. For example, the Order class in the example schema we used earlier might gain a new status field, the product code field might be replaced by a combination of department number and product number, or other similar changes might be made. The result is that the schema of the objects in our business domain is continually evolving, so you need to be able to ensure agreement on the schema of messages in any particular topic at any given time.

There are various patterns for schema evolution:

  • Forward Compatibility: where the producing applications can be updated to a new version of the schema, and all consuming applications will be able to continue to consume messages while waiting to be migrated to the new version.
  • Backward Compatibility: where consuming applications can be migrated to a new version of the schema first, and are able to continue to consume messages produced in the old format while producing applications are migrated.
  • Full Compatibility: when schemas are both forward and backward compatible.

A schema registry is able to enforce rules for schema evolution, allowing you to guarantee either forward, backward or full compatibility of new schema versions, preventing incompatible schema versions being introduced.
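For instance, a backward-compatible evolution of the earlier Order schema could add the status field mentioned above with a default value, so that consumers using the new schema can still read messages produced with the old one. The field name and default below are illustrative:

{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "productName", "type": "string"},
    {"name": "productCode", "type": "string"},
    {"name": "quantity", "type": "int"},
    {"name": "status", "type": "string", "default": "CREATED"}
  ]
}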

By providing a repository of versions of schemas used within a Kafka cluster, past and present, a schema registry simplifies adherence to data governance and data quality policies, since it provides a convenient way to track and audit changes to your topic data formats.

What’s next?


In summary, a schema registry plays a crucial role in managing schema evolution, versioning and the consistency of data in distributed systems, ultimately supporting interoperability between different components. Event Streams on IBM Cloud provides a Schema Registry as part of its Enterprise plan. Ensure your environment is optimized by utilizing this feature on the fully managed Kafka offering on IBM Cloud to build intelligent and responsive applications that react to events in real time.

Source: ibm.com

Tuesday 21 November 2023

Asset lifecycle management strategy: What’s the best approach for your business?


Assets are the lifeblood of any successful business—from software programs tailored to meet an enterprise’s unique needs to a pipeline that stretches across oceans. One of the most important strategic decisions a business leader can make is how these assets are cared for over the course of their lifespans.

Whether you’re a small enterprise with only a few assets or a large-scale corporation with offices spanning the globe, asset lifecycle management, or ALM, is a fundamental part of your operations. Here’s what you need to know in order to build a successful strategy.

What is an asset?


First, let’s talk about what an asset is and why assets are so important. Companies use assets to create value. There are many different types of assets, both physical and non-physical. Examples of physical assets include equipment, office buildings and vehicles. Non-physical assets include intellectual property, trademarks and patents.

What is asset lifecycle management?


Each asset a company acquires goes through six main stages over the course of its life, requiring careful maintenance planning and management tactics to provide its owners with a strong return on investment.

The following are the six stages of asset lifecycle management:

  1. Planning: In the first stage of the asset lifecycle, stakeholders assess the need for the asset, its projected value to the organization and its overall cost.
  2. Valuation: A critical part of the planning stage is assessing the overall value of an asset. Decision-makers must take into account many different pieces of information in order to gauge this, including the asset’s likely useful life, its projected performance over time and the cost of disposing of it. One technique that is becoming increasingly valuable during this stage is the creation of a digital twin.
  3. Digital twin technology: A digital twin is a virtual representation of an asset a company intends to acquire that assists organizations in their decision-making process. Digital twins allow companies to run tests and predict performance based on simulations. With a good digital twin, it’s possible to predict how well an asset will perform under the conditions it will be subjected to.
  4. Procurement and installation: The next stage of the asset lifecycle concerns the purchase, transportation and installation of the asset. During this stage, operators will need to consider a number of factors, including how well the new asset is expected to perform within the overall ecosystem of the business, how its data will be shared and incorporated into business decisions, and how it will be put into operation and integrated with other assets the company owns.
  5. Utilization: This phase is critical to maximizing asset performance over time and extending its lifespan. Recently, enterprise asset management systems (EAMs) have become an indispensable tool in helping businesses perform predictive and preventive maintenance so they can keep assets running longer and generating more value. We’ll go deeper into EAMs, the technologies underpinning them and their implications for asset lifecycle management strategy in another section.
  6. Decommissioning: The final stage of the asset lifecycle is the decommissioning of the asset. Valuable assets can be complex and markets are always shifting, so during this phase, it’s important to weigh the depreciation of the current asset against the rising cost of maintaining it in order to calculate its overall ROI. Decision-makers will want to take into consideration a variety of factors when attempting to measure this, including asset uptime, projected lifespan and the shifting costs of fuel and spare parts.

The benefits of asset lifecycle management strategy


When you’ve invested your hard-earned capital in the acquisition of assets, it’s important to keep them running at peak levels for as long as possible. Systematizing and executing an effective asset management strategy can produce a wide range of benefits for your organization, including the following:

  • Scalability of best practices: Today’s asset lifecycle management strategies use cutting-edge technologies coupled with rigorous, systematic approaches to forecast, schedule and optimize all daily maintenance tasks and long-term repair needs.
  • Streamlined operations and maintenance: Minimize the likelihood of equipment failure, anticipate breakdowns and perform preventive maintenance when possible. Today’s top EAM systems dramatically improve the decision-making capabilities of managers, operators and maintenance technicians by giving them real-time visibility into equipment status and workflows.
  • Reduced maintenance costs and downtime: Monitor assets in real time, regardless of complexity. By coupling asset information (thanks to the Internet of Things (IoT)) with powerful analytics capabilities, businesses can now perform cost-effective preventive maintenance, intervening before a critical asset fails and preventing costly downtime.
  • Greater alignment across business units: Optimize management processes according to a variety of factors beyond just the condition of a piece of equipment. These factors can include available resources (e.g., capital and manpower), projected downtime and its implications for the business, worker safety, and any potential security risks associated with the repair.
  • Improved compliance: Comply with laws surrounding the management and operation of assets, regardless of where they are located. Data management and storage requirements vary widely from country to country and are constantly evolving. Avoid costly penalties by monitoring assets in a strategic, systematized manner that ensures compliance—no matter where data is being stored.   

How to build an effective asset lifecycle management strategy

Because of the increased complexity of asset maintenance and the technologies required to build an effective maintenance strategy, many businesses utilize enterprise asset management, coupled with a strong computerized maintenance management system (CMMS) and advanced asset tracking capabilities, to manage their most valuable assets.

Enterprise asset management systems (EAMs)


Enterprise asset management systems (EAMs) are a component of asset lifecycle management strategy that combines asset management software, systems and services to lengthen asset lifespan and increase productivity. Many rely on a CMMS to monitor assets in real time and recommend maintenance when necessary. Top-performing EAM systems monitor asset performance and maintain a historical record of critical activity, such as when an asset was purchased, when it was last repaired and how much it has cost the organization over time.

Computerized maintenance management systems (CMMS)


Computerized maintenance management systems (CMMS) are software systems that maintain a database of an organization’s maintenance operations and help extend the lifespan of assets. Many industries rely on CMMS as a component of EAM, including manufacturing, oil and gas production, power generation, construction and transportation. One of the most popular features of CMMS is its ability to spot opportunities for companies to perform regular preventive maintenance on their most valuable assets.

Preventive maintenance


Preventive maintenance helps prevent the unexpected failure of an asset by recommending maintenance activities according to a historical record and current performance metrics. Put simply, it’s about fixing things before they break. Through machine learning, operational data analytics and predictive asset health monitoring, today’s top-performing asset lifecycle management strategies optimize maintenance and reduce reliability risks to plant or business operations. EAM systems and a CMMS designed to support preventive maintenance can help produce stable operations, ensure compliance and resolve issues impacting production—before they happen.
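As a purely illustrative sketch (not how Maximo or any particular EAM/CMMS product implements it), the underlying decision can be thought of as combining service history with a live condition reading; the thresholds and field names below are hypothetical:

import java.time.Duration;
import java.time.Instant;

// Toy preventive-maintenance rule: real products apply far richer analytics
// and machine learning models than this single threshold check.
public class PreventiveMaintenanceSketch {
    static boolean maintenanceDue(Instant lastService, double vibrationMmPerSec) {
        boolean overdueByTime = Duration.between(lastService, Instant.now()).toDays() > 180;
        boolean abnormalReading = vibrationMmPerSec > 7.1; // hypothetical alarm threshold
        return overdueByTime || abnormalReading;
    }

    public static void main(String[] args) {
        Instant lastService = Instant.parse("2023-02-01T00:00:00Z");
        if (maintenanceDue(lastService, 4.2)) {
            System.out.println("Create a preventive work order before the asset fails.");
        }
    }
}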

Asset tracking


Asset tracking is another important component of asset lifecycle management strategy. Like EAM and CMMS, asset-tracking capabilities have also improved in recent years due to technological breakthroughs. Here are some of the most effective technologies available today for tracking assets.

  • Radio frequency identification (RFID) tags: RFID tags broadcast information about the asset they’re attached to using radio-frequency signals and Bluetooth technology. They can transmit a wide range of important information, including asset location, temperature and even the humidity of the environment the asset is located in.
  • WiFi-enabled tracking: Like RFIDs, WiFi-enabled tracking devices monitor a range of useful information about an asset, but they only work if the asset is within range of a WiFi network.
  • QR codes: Like their predecessor, the universal barcode, QR codes provide asset information quickly and accurately. But unlike barcodes, they are two-dimensional and easily readable with a smartphone from any angle.
  • Global positioning satellites (GPS): With a GPS system, a tracker is placed on an asset that then communicates information to the Global Navigation Satellite System (GNSS) network. By transmitting a signal to a satellite, the system enables managers to track an asset anywhere on the globe, in real time.

Asset lifecycle management strategy solutions


Many of today’s asset lifecycle management (ALM) solutions leverage cutting-edge technology like real-time data delivered via IoT, AI-enhanced analytics and monitoring, cloud-based capabilities and powerful automation that can help streamline workflows. Enterprise asset management (EAM) with the IBM Maximo Application Suite helps companies optimize asset performance, extend asset lifespans and reduce downtime and cost. It’s a fully integrated platform that uses advanced analytics tools and IoT data to improve operational availability and spot opportunities to perform preventive maintenance.

Source: ibm.com

Saturday 18 November 2023

Creating a sustainable future with the experts of today and tomorrow


When extreme weather strikes, it hits vulnerable populations the hardest. In the current global climate of stronger and more frequent storms, heat waves, droughts and floods, how do we build more positive environmental and social impact? We have a responsibility to apply our technological expertise, resources and ecosystem to help the world become more resilient to these environmental challenges.

We need a three-pronged approach to long-term sustainability: preparing the workforce with skills for a greener future; forging strategic cross-sector partnerships; and empowering purpose-driven individuals and organizations with the right tools and technology to accelerate action.

Equipping the current and future workforce with green skills


According to new Morning Consult research commissioned by IBM, 71% of business leaders surveyed anticipate their business will emphasize sustainability skills criteria in their hiring in the next two years, with 92% expecting to invest in sustainability training in the next year. There is already a skills gap in technology and sustainability, and these results show that it continues to grow.

But when it comes to training and credentials in green and technology skills, there just aren’t that many options. IBM already has a strong track record of providing free skilling resources to communities that are underrepresented in tech, most recently with a commitment to skill 2 million learners in AI. So, to help prepare the experts of tomorrow with the green and technology skills they need, we are providing free training on IBM SkillsBuild.

Our initial curriculum offerings will include three courses: Sustainability and Technology Fundamentals, Data Analytics for Sustainability and Enterprise Thinking for Sustainability. Through these foundational courses, learners will explore topics like ecology, biodiversity and social impact to help them develop a comprehensive understanding of sustainability. 

Lessons will include real-life case studies and opportunities to learn about how AI can assist businesses in achieving sustainability goals and mitigating climate risks. The courses also provide instruction in data analytics contextualized around sustainability use cases. We will also add more advanced courses that take a deeper look at how data analysis and visualization skills can be applied to practical sustainability use cases, such as examining energy consumption in a community. 

These courses are available to high school students, university students and faculty, and adult learners worldwide. Learners are free to take as many courses as they want and to study at their own pace. Upon successful completion of some of these courses, learners receive a credential that is recognized by employers.

IBM SkillsBuild has a global reach, and it has already benefited many learners with the inspiration and resources they need to pursue careers in technology. For instance, in Nigeria, Clinton Chidubem Amam found employment as a graphics designer after completing IBM SkillsBuild courses, and his work was displayed at the World Economic Forum in Davos earlier this year. Meanwhile, Oscar Ramirez, who arrived in the US as a child from Mexico, was able to investigate everything from AI to cybersecurity and project management while finishing his studies in Applied Mathematics and Computational Mathematics at San Jose State University.

Uniting sustainability experts in strategic partnerships


Whether it’s closing the green skills gap or tackling environmental challenges, you can’t go it alone. Addressing big challenges requires collaboration and strategic partnership with experts who intimately understand the nuances of different domains.

That’s why IBM’s award-winning pro-bono social impact program, the IBM Sustainability Accelerator, selects innovative organizations focused on solutions worth scaling. In this program, diverse cross-sector experts in topics such as sustainable agriculture and renewable energy come together from both inside and outside IBM. Using a human-centered approach along with IBM Garage, artificial intelligence, advances in data, cloud and other technologies, these teams collaborate on projects to help vulnerable populations become more resilient to climate change.

Five organizations are now joining this initiative on the path toward clean water and sanitation for all (UN SDG6):

  • The University of Sharjah will build a model and application to monitor and forecast water access conditions in the Middle East and North Africa to support communities in arid and semi-arid regions with limited renewable internal freshwater resources.
  • The University of Chicago Trust in Delhi will aggregate water quality information in India, build and deploy tools designed to democratize access to water quality information, and help improve water resource management for key government and nonprofit organizations.
  • The University of Illinois will develop an AI geospatial foundation model to help with rainfall prediction and flood forecasting in mountain headwaters across the Appalachian Mountains in the US.
  • Instituto IGUÁ will create a cloud-based platform for sanitation infrastructure planning in Brazil alongside local utility providers and governments.
  • Water Corporation will design a self-administered water quality testing system for Aboriginal communities in Western Australia.

We’re excited to partner with organizations that deeply understand the water and sanitation challenges that communities face. IBM has committed to support our sustainability accelerator projects, including our sustainable agriculture and clean energy cohorts, with USD 30 million worth of services by 2025.

Supporting a just transition for all


To build a more sustainable world, we must empower communities with the skills, tools and support they need to adapt to environmental hazards with resilience. By providing access to IBM technology and know-how, we can empower the communities most vulnerable to the effects of extreme weather and climate change. And by democratizing access to sustainability education through IBM SkillsBuild, we help the next generation of experts realize their passion for applying advanced technology to preserve and protect the environment. These efforts, along with our strategic partnerships, will lead us all into a more sustainable future.

Source: ibm.com

Friday 17 November 2023

An introduction to Wazi as a Service


In today’s hyper-competitive digital landscape, the rapid development of new digital services is essential for staying ahead of the curve. However, many organizations face significant challenges when it comes to integrating their core systems, including mainframe applications, with modern technologies. This integration is crucial for modernizing core enterprise applications on hybrid cloud platforms. A staggering 33% of developers lack the necessary skills or resources, hindering their productivity in delivering products and services. Moreover, 36% of developers struggle with collaboration between development and IT operations, leading to inefficiencies in the development pipeline. To compound these issues, repeated surveys highlight “testing” as the primary area causing delays in project timelines. Companies like State Farm and BNP Paribas are taking steps to standardize development tools and approaches across their platforms to overcome these challenges and drive transformation in their business processes.

How does Wazi as a Service help drive modernization?


One solution that is making waves in this landscape is “Wazi as a Service.” This cloud-native development and testing environment for z/OS applications is revolutionizing the modernization process by enabling secure DevSecOps practices. With flexible consumption-based pricing, it provides on-demand access to z/OS systems, dramatically improving developer productivity by accelerating release cycles on secure, regulated hybrid cloud environments like IBM Cloud Framework for Financial Services (FS Cloud). Shift-left coding practices allow testing to begin as early as the code-writing stage, enhancing software quality. The platform can be automated through a standardized framework validated for Financial Services, leveraging the IBM Cloud Security and Compliance Center service (SCC). Innovating at scale is made possible with IBM Z modernization tools like Wazi Image Builder, Wazi Dev Spaces on OpenShift, CI/CD pipelines, z/OS Connect for APIs, zDIH for data integrations, and IBM Watson for generative AI.

What are the benefits of Wazi as a service on IBM Cloud?


Wazi as a Service operates on IBM LinuxONE, an enterprise-grade Linux server, providing a substantial speed advantage over emulated x86 machine environments. This unique feature makes it 15 times faster, ensuring swift and efficient application development. Furthermore, Wazi bridges the gap between developer experiences on distributed and mainframe platforms, facilitating the development of hybrid applications containing z/OS components. It combines the power of the z-Mod stack with secure DevOps practices, creating a seamless and efficient development process. The service also allows for easy scalability through automation, reducing support and maintenance overhead, and can be securely deployed on IBM FS Cloud, which comes with integrated security and compliance features. This means developers can build and deploy their environments and code with industry-grade regulations in mind, ensuring data security and regulatory compliance.

Additionally, Wazi VSI on VPC infrastructure within IBM FS Cloud establishes an isolated network, fortifying the cloud infrastructure’s perimeter against security threats. Furthermore, IBM Cloud services and ISVs validated for financial services come with robust security and compliance controls, enabling secure integration of on-prem core Mainframe applications with cloud services like API Connect, Event Streams, Code Engine, and HPCS encryptions. This transformation paves the way for centralized core systems to evolve into modernized, distributed solutions, keeping businesses agile and competitive in today’s digital landscape. Overall, Wazi as a Service is a game-changer in accelerating digital transformation while ensuring security, compliance, and seamless integration between legacy and modern technologies.

How does the IBM Cloud Framework for Financial Services help with industry solutions?


The IBM Cloud Framework for Financial Services (also known as IBM FS Cloud) is a robust solution designed specifically to cater to the unique needs of financial institutions, ensuring regulatory compliance, top-notch security and resiliency both during the initial deployment phase and in ongoing operations. This framework simplifies interactions between financial institutions and ecosystem partners that provide software or SaaS applications by establishing a set of requirements that all parties must meet. The key components of this framework include a comprehensive set of control requirements, which encompass security and regulatory compliance obligations, as well as cloud best practices. These best practices involve a shared responsibility model that applies to financial institutions, application providers, and IBM Cloud, ensuring that everyone plays a part in maintaining a secure and compliant environment.

Additionally, the IBM Cloud Framework for Financial Services provides detailed control-by-control guidance for implementation and offers supporting evidence to help financial institutions meet the rigorous security and regulatory requirements of the financial industry. To further facilitate compliance, reference architectures are provided to assist in the implementation of control requirements. These architectures can be deployed as infrastructure as code, streamlining the deployment and configuration process. IBM also offers a range of tools and services, such as the IBM Cloud Security and Compliance Center, to empower stakeholders to monitor compliance, address issues, and generate evidence of compliance efficiently. Furthermore, the framework is subject to ongoing governance, ensuring that it remains up-to-date and aligned with new and evolving regulations, as well as the changing needs of banks and public cloud environments. In essence, the IBM Cloud Framework for Financial Services is a comprehensive solution that empowers financial institutions to operate securely and in compliance with industry regulations, while also streamlining their interactions with ecosystem partners.

Get to know Wazi as a Service


Operating on the robust IBM LinuxONE infrastructure, Wazi as a Service bridges the gap between distributed and mainframe platforms, enabling seamless hybrid application development. The platform’s scalability, automation, and compliance features empower developers to navigate the intricate web of regulations and security, paving the way for businesses to thrive in the digital era. With Wazi, businesses can securely integrate on-premises core systems with cutting-edge cloud services, propelling them into the future of modernized, distributed solutions. In summary, Wazi as a Service exemplifies the transformative potential of technology in accelerating digital transformation, underlining its importance in achieving security, compliance, and the harmonious coexistence of legacy and modern technologies.

Source: ibm.com

Thursday 16 November 2023

Leveraging IBM Cloud for electronic design automation (EDA) workloads


Electronic design automation (EDA) is a market segment consisting of software, hardware and services with the goal of assisting in the definition, planning, design, implementation, verification and subsequent manufacturing of semiconductor devices (or chips). The primary providers of this service are semiconductor foundries or fabs.

While EDA solutions are not directly involved in the manufacture of chips, they play a critical role in three ways:

1. EDA tools are used to design and validate the semiconductor manufacturing process to ensure it delivers the required performance and density.

2. EDA tools are used to verify that a design will meet all the manufacturing process requirements. This area of focus is known as design for manufacturability (DFM).

3. After the chip is manufactured, there is a growing requirement to monitor the device’s performance from post-manufacturing test to deployment in the field. This third application is referred to as silicon lifecycle management (SLM).

The increasing compute demands of higher-fidelity simulation and modeling workloads, greater competition, and the need to bring products to market faster mean that EDA HPC environments are continually growing in scale. Organizations are looking at how best to leverage technologies—such as accelerators, containerization and hybrid cloud—to gain a competitive computing edge.

EDA software and integrated circuit design

Electronic design automation (EDA) software plays a pivotal role in shaping and validating cutting-edge semiconductor chips, optimizing their manufacturing processes, and ensuring that advancements in performance and density are consistently achieved with unwavering reliability.

The expenses associated with acquiring and maintaining the necessary computing environments, tools and IT expertise to operate EDA tools present a significant barrier for startups and small businesses seeking entry into this market. Simultaneously, these costs remain a crucial concern for established firms implementing EDA designs. Chip designers and manufacturers find themselves under immense pressure to usher in new chip generations that exhibit increased density, reliability and efficiency while adhering to strict timelines—a pivotal factor for achieving success.

This challenge in integrated circuit (IC) design and manufacturing can be visualized as a triangular opportunity space, as depicted in the figure below:

[Figure: The triangular IC design opportunity space, bounded by compute infrastructure, designers and EDA licenses]

EDA industry challenges


In the electronic design automation (EDA) space, design opportunities revolve around three key resources:

1. Compute infrastructure
2. Designers
3. EDA licenses

These resources delineate the designer’s available opportunity space.

For design businesses, the key challenge is selecting projects that promise the highest potential for business growth and profitability. To expand these opportunities, an increase in the pool of designers, licenses or compute infrastructure is essential.

Compute infrastructure

To expand computing infrastructure on-premises, extensive planning and time are required for the purchase, installation, configuration and utilization of compute resources. Delays may occur due to compute market bottlenecks, the authorization of new data center resources, and the construction of electrical, cooling and power infrastructure. Even for large companies with substantial on-premises data centers, quickly meeting the demand for expanded data centers necessitates external assistance.

Designers

The second factor limiting realizable opportunities is the pool of designers. Designers are highly skilled engineers, and hiring them quickly is a challenge. The educational foundation required for design takes years to establish, and it often takes a year or more to effectively integrate new designers into existing design teams. This makes designers the most inelastic component on the left side of the figure, constraining business opportunities.

EDA licenses

Lastly, EDA licenses are usually governed by contracts specifying the permissible quantities and types of tools a firm can use. While large enterprises may explore enterprise licensing contracts that are virtually unlimited, they are prohibitively expensive for startups and small to medium-sized design firms.

Leveraging cloud computing to speed time to market


Firms (irrespective of size) aiming to expand their business horizons and gain a competitive edge in terms of time-to-market can strategically leverage two key elements to enhance opportunities: cloud computing and new EDA licensing.

The advent of cloud computing enables the rapid expansion of compute infrastructure by provisioning or creating new infrastructure in public clouds within minutes, in contrast to the months required for internal infrastructure development. EDA software companies have also started offering peak-licensing models, enabling design houses to utilize EDA software in the cloud under shorter terms than traditional licensing contracts.

Leveraging cloud computing and new EDA licensing models, most design houses can significantly expand their business opportunity horizons. The availability of designers remains an inelastic resource; however, firms can enhance their design productivity by harnessing the automation advantages offered by EDA software and cloud computing infrastructure provisioning.

How IBM is leading EDA


In conjunction with IBM’s deep expertise in semiconductor technology, data, and artificial intelligence (AI), our broad EDA and HPC product portfolio encompasses systems, storage, AI, grid, and scalable job management. Our award-winning storage and data transfer products—such as IBM Storage Scale, IBM Spectrum LSF and IBM Aspera—have been tightly integrated to deliver high-performance parallel storage solutions and large-scale job management across multiple clusters and computing resources.

IBM Cloud EDA infrastructure offers foundry-secure patterns and environments, supported by a single point of ownership. EDA firms can quickly derive value from secure, high-performance, user-friendly cloud solutions built on top of IBM’s industry-leading cloud storage and job management infrastructure.

In the coming months, IBM technical leaders will publish a white paper highlighting our unique capability to offer optimized IBM public cloud infrastructure for EDA workloads, serving both large and small enterprise customers.

Source: ibm.com

Thursday 9 November 2023


AI assistants optimize automation with API-based agents


Generative AI-powered assistants are transforming businesses through intelligent conversational interfaces. Capable of understanding and generating human-like responses and content, these assistants are revolutionizing the way humans and machines collaborate. Large Language Models (LLMs) are at the heart of this new disruption. LLMs are trained on vast amounts of data and can be used across endless applications. They can be easily tuned for specific enterprise use cases with a few training examples.

We are witnessing a new phase of evolution as AI assistants go beyond conversations and learn how to harness tools through agents that could invoke Application Programming Interfaces (APIs) to achieve specific business goals. Tasks that used to take hours can now be completed in minutes by orchestrating a large catalog of reusable agents. Moreover, these agents can be composed together to automate complex workflows.

AI assistants can use API-based agents to help knowledge workers with mundane tasks such as creating job descriptions, pulling reports in HR systems, sourcing candidates and more. For instance, an HR manager can ask an AI assistant to create a job description for a new role, and the assistant can generate a detailed job description that meets the company’s requirements. Similarly, a recruiter can ask an AI assistant to source candidates for a job opening, and the assistant can provide a list of qualified candidates from various sources. With AI assistants, knowledge workers can save time and focus on more complex and creative problems.

Automation builders can also harness the power of AI assistants to create automations quickly and easily. While it may sound like a riddle, AI assistants employ generative AI to automate the very process of automation. This makes building agents easier and faster. There are two essential steps in building agents for business automation: training and enriching agents for target use cases and orchestrating a catalog of multiple agents.

Training and enriching API-based agents for target use cases


APIs are the backbone of AI agents. Building API-based agents is a complex task that involves interacting with a user in a conversational manner, identifying the APIs that are needed to achieve a user goal, asking questions to gather the required arguments for the API, detecting the information provided by the user that is needed when invoking the API, enriching the APIs with sample utterances and generating responses based on API return values. This process can take hours for an experienced developer. However, LLMs can automate these steps. This enables builders to train and enrich APIs more quickly for specific tasks.

Assume Bob, an automation builder, wants to create API-based agents to help company sellers retrieve a list of target customers. The first step is to import the “Retrieve My Customers” API into the AI assistant. However, to make this automation available as an agent, Bob needs to take several manual and tedious steps which include training the natural language classifier with sample utterances. With the help of LLMs, AI assistants can automatically generate sample training utterances from OpenAPI specifications. This capability can significantly reduce the required manual effort. Once the foundation model is fine-tuned for semantic understanding, it can better understand business users’ prompts and intents. Bob can still review and manipulate the generated questions using a human-in-the-loop approach.

Soon, the process of building agents will be fully automated by identifying APIs, filling slots and enriching APIs. This will reduce the time it takes to create automation, reduce technical barriers and improve reusable agent catalogs.

Orchestrating multiple agents to automate complex workflows


Building automation flows that use multiple APIs can be technically complex and time consuming. To connect multiple APIs, it’s important to identify, sequence and invoke the right set of APIs to achieve a specific business goal. AI assistants use LLMs and planning techniques to simplify this process and reduce technical barriers. LLMs can work as a powerful recommendation system, suggesting the most suitable APIs based on usage, similarities and descriptions.

Builders must align the inputs and outputs of multiple APIs to compose multi-agent automations, which is a tedious and error-prone process. LLM-driven API mapping automates this alignment process based on API attributes and documentation. This makes it easier for automation builders to reuse existing APIs from large catalogs without manual intervention.

Now, suppose our automation builder, Bob, wants to create a more complex multi-API automation that allows sellers to retrieve a list of customers and subsequently generate a list of personalized product recommendations. After importing and enriching the “Retrieve My Customers” API agent, the LLM-infused sequencing feature can automatically recommend the “Generate Product Recommendations” API. This means Bob does not have to sift through each API individually to discover the most suitable one from the extensive catalog of agents.

In addition, each API contains fields of varying data types. The source API provides output fields that represent information about a set of customers. The target API presents input fields that also represent customer information. Typically, Bob would have to spend time manually mapping each field in the target APIs to a corresponding field in the source API. This tedious effort would be exacerbated as the number of source APIs and target fields increase. The API mapping service can generate a set of alignment suggestions which Bob can quickly review, edit and save.
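The sketch below illustrates the general idea of such alignment suggestions. The two APIs, the field names and the simple exact-name match are hypothetical stand-ins for the LLM-driven mapping described above, not the watsonx Orchestrate implementation:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: suggests a mapping from source-API output fields to
// target-API input fields. The exact-name match stands in for LLM-driven matching.
public class ApiMappingSketch {
    static Map<String, String> suggestMappings(List<String> sourceOutputs, List<String> targetInputs) {
        Map<String, String> suggestions = new LinkedHashMap<>();
        for (String target : targetInputs) {
            sourceOutputs.stream()
                    .filter(source -> source.equalsIgnoreCase(target))
                    .findFirst()
                    .ifPresent(source -> suggestions.put(target, source));
        }
        return suggestions; // Bob reviews, edits and saves these suggestions
    }

    public static void main(String[] args) {
        List<String> retrieveMyCustomersOutputs = List.of("customerId", "customerName", "segment");
        List<String> generateRecommendationsInputs = List.of("customerId", "segment", "budget");
        System.out.println(suggestMappings(retrieveMyCustomersOutputs, generateRecommendationsInputs));
        // => {customerId=customerId, segment=segment}; "budget" is left for Bob to map manually
    }
}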

IBM watsonx Orchestrate uses a combination of AI models (including LLMs) to simplify the process of building AI agents through API enrichment, sequencing and mapping recommendations. In the new phase of evolution, AI assistants will be able to sequence multiple APIs at runtime to achieve business goals defined by non-technical knowledge workers, which further democratizes automation. By leveraging AI assistants, enterprises can accelerate their automation initiatives and redeploy significant resources toward more value-generating areas.

Source: ibm.com

Wednesday 8 November 2023

IBM and Microsoft work together to bring Maximo Application Suite onto Azure


IBM and Microsoft believe in providing you with the power of choice so you can leverage the industry-leading asset management capabilities of Maximo Application Suite (MAS) deployed and operating on Azure. MAS is available from IBM or through the Microsoft Azure Marketplace. When you choose to invest in MAS, you’re not just purchasing a license; you’re embracing an opportunity to tailor your asset management journey precisely to your unique needs and aspirations.

The next step in this exciting journey? Choosing IBM Consulting Maximo ManagePlus and leveraging IBM experts to manage your MAS application and Azure cloud infrastructure.

Maximo ManagePlus highlights 


IBM Consulting Maximo ManagePlus provides:

◉ Network planning, build, and operation

◉ IBM Consulting provisions, manages, and operates your MAS environment on Azure

Key Benefits of running Maximo Application Suite on the Microsoft Azure Platform


Flexibility

◉ Customize the MAS application per the needs of your enterprise.

◉ Execute a MAS upgrade schedule that continuously aligns with your enterprise’s needs.

Reduce Costs

◉ Eliminate the need to hire, train, and retain MAS application and management expertise. Why incur the cost and risk of developing in-house MAS expertise when you can access expertise that supports numerous MAS clients?

◉ Eliminate the need to hire, train, and retain underlying Azure expertise. Why incur the cost and risk of developing in-house cloud management expertise when you can leverage the expertise of a proven Azure managed services team that supports hundreds of clients operating thousands of hybrid cloud workloads?

◉ Eliminate the need to deploy and evolve the tooling and standard operating procedures (SOPs) required to exceed your enterprise MAS availability SLOs (service level objectives).

Improve agility and productivity

  • Provide your enterprise MAS users with 24×7 access to MAS application experts.
  • Accelerate resolution of L2/L3 service requests.
  • Increase your enterprise application support without expanding your application support team.
  • Run your enterprise MAS application on underlying Azure infrastructure that is continuously monitored and patched, so your hybrid cloud operations team can focus on other Azure workloads.
  • Increase your Azure workload without expanding your cloud operations team.
  • Streamline time-consuming work.

Improve stability

  • Manage people, processes and tools with a proven track record of exceeding MAS application and infrastructure availability SLOs. Can your enterprise afford unplanned asset management system downtime?
  • Improve the stability needed to meet SLOs and service level agreements (SLAs).

Improve cyber security

  • Subscribe to Maximo ManagePlus to improve application and infrastructure patching cadence and automate endpoint security configuration.

Improve margin

  • Subscribe to Maximo ManagePlus to yield savings that you can invest in new projects that reduce bottom-line expenses or create and evolve brand-differentiating, revenue-generating products and services.

Act now with Maximo Application Suite on Azure


IBM and Microsoft have come together to give you the option of deploying Maximo Application Suite on Azure. Subscribing to Maximo ManagePlus allows you to leapfrog tedious MAS installation and management. Contact us today to embark on a secure and future-proof path for your business.

Source: ibm.com

Tuesday 7 November 2023

Child support systems modernization: The time is now


The majority of today’s child support systems are dated, first-generation systems that are now more than 25 years old. These systems need modernization to meet the evolving needs of children and families in the 21st century. With more than 20% of families and children supported by these systems, the impact is significant.

Today’s constituents are interested in engaging with services using modern, consumer-friendly technologies, platforms and devices. Families also expect interactive experiences that drive outcomes tailored to their needs.


The existing systems were simply not designed to provide a family-centric approach to service delivery and do not have the capabilities or features needed to realize that type of approach. Markedly, most states are at least in the planning stages of modernizing these systems.

To respond to the new requirements and expectations of state-provided child support services, these systems must:

  • Empower families to get help when, where and how they need it, including via virtual and real-time communication mechanisms.
  • Provide quick and transparent services to ease family stress and frustration in times of need.
  • Be intuitive and user-friendly in order to reduce inefficiencies and manual effort for both families and caseworkers.
  • Automate routine tasks to allow caseworkers to provide more personalized services and build relationships with families.
  • Empower caseworkers with online tools to collaborate with colleagues and access knowledge repositories.

Though there are many challenges when it comes to child support system modernization, with a proven approach to mainframe application modernization, it is possible to facilitate technical application migrations while maintaining consistency in business functions.

State-level challenges and requirements


In addition to the shift in the way families engage with technology, there are several other factors driving states to pursue modernization. States are:

  • Envisioning a holistic, family-focused model for service delivery that is personal, customized, and collaborative (rather than the “one-size-fits-all” process that ends up not fitting anyone very well).
  • Providing more time for caseworkers to engage and collaborate with families by freeing them from inefficient system interfaces and processes.
  • Utilizing the capabilities of modern technology stacks rather than continuing to use outdated and limited applications developed on existing systems.
  • Upgrading their systems to leverage widely available and competitively priced technology skillsets rather than paying for the scarce, expensive skills required for existing system support.

Each state’s child support system is different, but core system requirements make them similar in many ways. These requirements include initiating new cases, providing location and establishment services, enforcing orders and handling financial transactions mandated by the Office of Child Support Services (OCSS). Despite these commonalities, every child support organization is distinct, and each needs a tailored modernization approach that supports its vision, addresses its specific system challenges and understands the reality of its issues “on the ground.”

The core systems technology landscape for each state could be an existing mainframe system with varying degrees of maturity, portability, reliability and scalability. States’ existing investments in modernizing and enhancing ancillary supportive technologies (such as document management, web portals, mobile applications, data warehouses and location services) could negate the need for certain system requirements as part of the child support system modernization initiative. This, along with the overall maturity of state systems, the need for uninterrupted service, state budget, timing, staffing and other factors, mandates a holistic modernization effort. The “one-size-fits-all” approach to child support system modernization doesn’t work any better than it does for the child support process itself.

Accelerated Incremental Mainframe Modernization (AIMM)


Existing applications are generally complex and expensive to maintain, which limits business agility and makes any attempt to rebuild, refactor or integrate the system a risk. IBM Consulting™ has experienced these challenges across the breadth of industry applications and has formed a generalized approach to modernizing existing applications (particularly those running on traditional mainframe systems) that addresses the challenges in an automated fashion.

IBM’s Accelerated Incremental Mainframe Modernization (AIMM) approach focuses on modernization with a lens toward incremental application transformation rather than code translation. Instead of a single, risky, “big-bang” application and infrastructure update, AIMM focuses on incremental, business-data-domain-centric initiatives that deliver immediate value while enabling a development approach and an ecosystem of processes and tools for continued incremental optimization. AIMM facilitates an end-to-end approach to mainframe modernization that places particular focus on a journey of coexistence. It begins with mapping business and technology processes alongside their IT ecosystems. This approach is distinct from the common code-conversion approach and ensures that both existing and new systems are in lockstep, seamlessly delivering the needed business functions to caseworkers and families while incrementally migrating to the new digital core system. Eventually, new platforms replace existing systems entirely and final cutover is accomplished with minimal to no disruption to the users. The diagram below illustrates an end-to-end flow of processes using AIMM to accomplish mainframe modernization.

[Diagram: end-to-end flow of processes using AIMM to accomplish mainframe modernization]

The AIMM approach is bolstered by IBM’s industry-leading methodology, tools and assets, including:

  • IBM Garage Methodology: IBM’s engagement and operating methodology (which brings together industry best practices, technical expertise, systems knowledge, client collaboration and partnership, cloud service providers and the consulting teams) uses a design strategy with iterative creation and launch processes to deliver accelerated outcomes and time to value.
  • IBM Consulting Cloud Accelerator (ICCA): This approach accelerates cloud adoption by creating a “wave plan” for migrating and modernizing workloads to cloud platforms. ICCA integrates and orchestrates a wide range of migration tools across IBM’s assets and products, open source components and third-party tools to take a workload from its original platform to a cloud destination.
  • Asset Analysis Renovation Catalyst (AARC): This tool automatically extracts business rules from application source code and generates a knowledge model that can be deployed into a rules engine such as Operational Decision Manager (ODM).
  • Operational Decision Manager: This business rule management system applies automated decisions to real-time data based on business rules. ODM enables business users to analyze, automate and govern rules-based business decisions by developing, deploying and maintaining operational decision logic (an illustrative sketch follows this list).
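
To make this concrete, here is a minimal Python sketch, not ODM syntax or an actual extracted rule, of the style of decision logic that a tool such as AARC surfaces from application code so that it can be governed in a rules engine instead of remaining buried in the codebase. The rule names, rates and thresholds are hypothetical.

```python
from dataclasses import dataclass

# Illustrative business-rule sketch: the kind of decision logic a rules
# engine externalizes so that policy changes do not require code changes.
# All names, rates and thresholds here are hypothetical.

@dataclass
class Case:
    monthly_income: float
    children: int
    months_in_arrears: int

def monthly_obligation(case: Case) -> float:
    """Percentage-of-income rule per child (illustrative rate only)."""
    rate_per_child = 0.17
    return round(case.monthly_income * rate_per_child * case.children, 2)

def needs_enforcement_review(case: Case) -> bool:
    """Flag a case for review once arrears pass a threshold."""
    return case.months_in_arrears >= 3

if __name__ == "__main__":
    case = Case(monthly_income=3200.0, children=2, months_in_arrears=4)
    print(monthly_obligation(case))        # 1088.0
    print(needs_enforcement_review(case))  # True
```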

The AIMM approach meets states where they are in their modernization journeys by analyzing existing code, extracting business rules, converting code to modern languages and deploying applications to any cloud platform. With its proven tools and processes, IBM Consulting applies and manages technology to help states realize their business goals and deliver value faster. 

Hybrid cloud architectures for balanced transformation


Technology decisions should be based on a thoughtful balance of cost, capability and sustainability to enable successful outcomes and attain program or project goals. Cloud adoption has gained significant traction among child support agencies with mainframe systems because it supports operational efficiencies and facilitates on-demand innovation. Cloud computing offers many benefits, including cost savings, scalability, security, accessibility, software maintenance, data backup, disaster recovery, and advanced data and AI capabilities.

A technology architecture based on hybrid cloud (a blend of on-premises and cloud service provider capabilities) enables agencies to advance their missions with the advantages of cloud while still benefiting from their mainframe investments. Considering a hybrid cloud architecture allows states to prioritize transformation of problematic applications while retaining the portions of their existing child support systems that meet constituent needs.

Technical patterns for cloud migration


For states modernizing their existing systems, IBM recommends the combination of one or more of the following technical patterns to achieve cloud migration and realize the benefits of the modern cloud platform:

  • Pattern 1: Migration to cloud with middleware emulator. With this approach, an agency’s systems are migrated to a cloud platform with minimal to no code alterations. The integration of middleware emulators minimizes the need for code changes and ensures smooth functionality during the migration process.
  • Pattern 2: Migration to cloud with code refactoring. This approach couples the migration of systems to a cloud environment with necessary code modifications for optimal performance and alignment with cloud architectures. IBM has a broad ecosystem of partners who specialize in using automated tools to make most code modifications.
  • Pattern 3: Re-architect and modernize with microservices. This strategy encompasses re-architecting systems with the adoption of microservices-based information delivery channels. This approach modernizes systems in cloud-based architectures, which enable efficient communication between the microservices (a brief sketch follows this list).
  • Pattern 4: Cloud data migration for analytics and insights. This strategy focuses on transferring existing data to the cloud and facilitating generation of advanced data analytics and insights, a key feature of modernized systems.
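
As a rough illustration of Pattern 3, the sketch below shows a small case-lookup microservice that exposes case data through a modern API while the underlying system of record is migrated incrementally. The framework choice (FastAPI), endpoint and field names are assumptions for illustration only, not a prescribed design.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

# Hypothetical case-lookup microservice (Pattern 3): a small, independently
# deployable service exposing case data through a modern API.
# Endpoint and field names are illustrative only.

app = FastAPI(title="Case Lookup Service")

class CaseSummary(BaseModel):
    case_id: str
    status: str
    monthly_obligation: float

# Stand-in for a call to the system of record (mainframe or cloud database).
_CASES = {
    "C-1001": CaseSummary(case_id="C-1001", status="OPEN", monthly_obligation=450.0),
}

@app.get("/cases/{case_id}", response_model=CaseSummary)
def get_case(case_id: str) -> CaseSummary:
    case = _CASES.get(case_id)
    if case is None:
        raise HTTPException(status_code=404, detail="Case not found")
    return case

# Run locally with: uvicorn case_service:app --reload
```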

Maintaining business functions across technical migrations


For all of the previously mentioned technical patterns, maintaining business functions while modernizing the technical components supporting them is essential to migration and modernization success. The following graphic shows an example of maintaining business functions while changing the underlying technology using Pattern 2 (migration to cloud with code refactoring).

[Graphic: business functions retained (blue) while the underlying technology changes (yellow) under Pattern 2]

In this example, IBM retains business functionality (shown in blue) while changing the underlying technology (shown in yellow). This incremental approach is critical to maintaining an agency’s ability to continue providing effective services to families and children while technology transformation takes place. 

IBM Consulting has consolidated these accelerators to help clients migrate to the cloud, and IBM continues to work with all major hyperscalers to accelerate application modernization. Our goals are to automate wherever feasible, reduce the risk of application changes, create and deploy secure APIs, and reduce the need for specialized skills to accomplish state application migrations.

In the ever-evolving landscape of child support services, transformation is key to efficiently and effectively supporting caseworkers, children and their families.

By prioritizing family-focused models, embracing modern technology, respecting state uniqueness and harnessing the power of hybrid cloud architectures, modernized child support systems can pave the way for a brighter future for children and families in need of support.

IBM is proud to be a trusted partner for many states as they modernize child support systems to look after the welfare of our nation’s families.

Source: ibm.com

Saturday 4 November 2023

Apache Kafka and Apache Flink: An open-source match made in heaven

In the age of constant digital transformation, organizations should strategize ways to increase their pace of business to keep up with — and ideally surpass — their competition. Customers are moving quickly, and it is becoming difficult to keep up with their dynamic demands. As a result, I see access to real-time data as a necessary foundation for building business agility and enhancing decision making.

Stream processing is at the core of real-time data. It allows your business to ingest continuous data streams as they happen and bring them to the forefront for analysis, enabling you to keep up with constant changes.

Apache Kafka and Apache Flink working together


Anyone who is familiar with the stream processing ecosystem is familiar with Apache Kafka: the de-facto enterprise standard for open-source event streaming. Apache Kafka boasts many strong capabilities, such as delivering a high throughput and maintaining a high fault tolerance in the case of application failure.

Apache Kafka streams get data to where it needs to go, but these capabilities are not maximized when Apache Kafka is deployed in isolation. If you are using Apache Kafka today, Apache Flink should be a crucial piece of your technology stack to ensure you’re extracting what you need from your real-time data.

With the combination of Apache Flink and Apache Kafka, the open-source event streaming possibilities become exponential. Apache Flink delivers low-latency processing, allowing you to respond quickly and accurately to the increasing business need for timely action. Coupled together, the two put the ability to generate real-time automation and insights at your fingertips.
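
As a minimal sketch of the Kafka side of this pairing, the snippet below publishes raw business events to a topic that a downstream Flink job can process. The topic name, broker address, event fields and the choice of the kafka-python client are assumptions for illustration.

```python
import json

from kafka import KafkaProducer

# Minimal sketch: publish raw business events to a Kafka topic.
# Topic name, broker address and event fields are illustrative only.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

event = {
    "flight_id": "QF-409",
    "status": "DELAYED",
    "delay_minutes": 45,
    "event_time": "2023-11-04T09:15:00Z",
}

producer.send("flight-events", value=event)
producer.flush()  # make sure the event is delivered before the script exits
```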

With Apache Kafka, you get a raw stream of events from everything that is happening within your business. However, not all of it is necessarily actionable and some get stuck in queues or big data batch processing. This is where Apache Flink comes into play: you go from raw events to working with relevant events. Additionally, Apache Flink contextualizes your data by detecting patterns, enabling you to understand how things happen alongside each other. This is key because events have a shelf-life, and processing historical data might negate their value. Consider working with events that represent flight delays: they require immediate action, and processing these events too late will surely result in some very unhappy customers.
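
To illustrate, a PyFlink Table API job along the following lines could read that raw Kafka stream and keep only the events that demand immediate action. This is a sketch under assumptions: the topic, schema and delay threshold mirror the hypothetical flight-delay events above, and the job expects the Flink Kafka SQL connector to be available on the classpath.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Sketch: filter "relevant" events (long flight delays) out of a raw Kafka
# stream with Flink SQL. Topic, schema and threshold are illustrative only.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE flight_events (
        flight_id STRING,
        status STRING,
        delay_minutes INT,
        event_time TIMESTAMP(3),
        WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'flight-events',
        'properties.bootstrap.servers' = 'localhost:9092',
        'scan.startup.mode' = 'latest-offset',
        'format' = 'json'
    )
""")

# Keep only the events that need immediate action: delays over 30 minutes.
delayed = t_env.sql_query("""
    SELECT flight_id, delay_minutes, event_time
    FROM flight_events
    WHERE status = 'DELAYED' AND delay_minutes > 30
""")

delayed.execute().print()
```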

Apache Kafka acts as a sort of firehose of events, communicating what is always going on within your business. The combination of this event firehose with pattern detection — powered by Apache Flink — hits the sweet spot: once you detect the relevant pattern, your next response can be just as quick. Captivate your customers by making the right offer at the right time, reinforce their positive behavior, or even make better decisions in your supply chain — just to name a few examples of the extensive functionality you get when you use Apache Flink alongside Apache Kafka.

Innovating on Apache Flink: Apache Flink for all


Now that we’ve established the relevance of Apache Kafka and Apache Flink working together, you might be wondering: who can leverage this technology and work with events? Today, it’s usually developers. However, progress can be slow when work has to wait on skilled developers who already carry heavy workloads. Moreover, cost is always an important consideration: businesses can’t afford to invest in every possible opportunity without evidence of added value. To add to the complexity, people with the right skills to take on development or data science projects are in short supply.

This is why it’s important to empower more business professionals to benefit from events. When you make it easier to work with events, other users like analysts and data engineers can start gaining real-time insights and work with datasets when it matters most. As a result, you reduce the skills barrier and increase your speed of data processing by preventing important information from getting stuck in a data warehouse.  

IBM’s approach to event streaming and stream processing applications innovates on Apache Flink’s capabilities and creates an open and composable solution to address these large-scale industry concerns. Apache Flink works with any Apache Kafka deployment, and IBM’s technology builds on what customers already have, avoiding vendor lock-in. With Apache Kafka as the industry standard for event distribution, IBM adopted Apache Flink as its go-to engine for event processing, making the most of this match made in heaven.

Imagine if you could have a continuous view of your events with the freedom to experiment on automations. In this spirit, IBM introduced IBM Event Automation with an intuitive, easy-to-use, no-code format that enables users with little to no training in SQL, Java or Python to leverage events, no matter their role. Eileen Lowry, VP of Product Management for IBM Automation, Integration Software, touches on the innovation that IBM is doing with Apache Flink:

“We realize investing in event-driven architecture projects can be a considerable commitment, but we also know how necessary they are for businesses to be competitive. We’ve seen them get stuck altogether due to cost and skills constraints. Knowing this, we designed IBM Event Automation to make event processing easy with a no-code approach to Apache Flink. It gives you the ability to quickly test new ideas, reuse events to expand into new use cases, and help accelerate your time to value.”

This user interface not only brings Apache Flink to anyone who can add business value, but it also allows for experimentation that has the potential to drive innovation and speed up your data analytics and data pipelines. A user can configure events from streaming data and get feedback directly from the tool: pause, change, aggregate, press play and test your solutions against data immediately. Imagine the innovation that can come from this, such as improving your e-commerce models or maintaining real-time quality control in your products.

Source: ibm.com

Thursday 2 November 2023

How IBM and AWS are partnering to deliver the promise of AI for business

In today’s digital age where data stands as a prized asset, generative AI serves as the transformative tool to mine its potential. According to a survey by the MIT Sloan Management Review, nearly 85% of executives believe generative AI will enable their companies to obtain or sustain a competitive advantage. The global AI market is projected to grow to USD 190 billion by 2025, increasing at a compound annual growth rate (CAGR) of 36.62% from 2022, according to Markets and Markets. Businesses globally recognize the power of generative AI and are eager to harness data and AI for unmatched growth, sustainable operations, streamlining and pioneering innovation. In this quest, IBM and AWS have forged a strategic alliance, aiming to transition AI’s business potential from mere talk to tangible action.

Adopting AI in business at scale is not without its challenges, including data privacy concerns, integration complexities and the need for skilled personnel:

1. Data accessibility: Fragmented and siloed data stifle advancement. Gartner highlights that businesses lose an estimated USD 15 million annually due to inadequate data access.

2. Integration and financial constraints: Merging AI with current systems is intricate. Forrester indicates that 40% of companies face this obstacle. Concurrently, McKinsey points out that high expenses limit AI integration in 23% of organizations.

3. Ethical and regulatory barriers: Upholding AI ethics is pivotal. A significant 34% of companies express concerns over fairness, and regulatory hurdles further complicate the landscape.

The AWS-IBM partnership is a symphony of strengths


The collaboration between IBM and AWS is more than just a tactical alliance; it’s a symphony of strengths. IBM, a pioneer in data analytics and AI, offers watsonx.data, among other technologies, which makes it possible to seamlessly access and ingest massive sets of structured and unstructured data. AWS, on the other hand, provides robust, scalable cloud infrastructure. By combining IBM’s advanced data and AI capabilities, powered by the watsonx platform, with AWS’s unparalleled cloud services, the partnership aims to create an ecosystem where businesses can seamlessly integrate AI into their operations.

Real-world business solutions


The real value of any technology is measured by its impact on real-world problems. The IBM and AWS partnership focuses on delivering solutions in areas such as:

Supply chain optimization with AI-infused Planning Analytics

IBM Planning Analytics on AWS offers a powerful platform for supply chain optimization, blending IBM’s analytics expertise with AWS’s cloud capabilities. One of the largest children’s clothing retailers in the US uses this solution to streamline its complex supply chain. Real-time data analytics helps in quick decision-making, while advanced forecasting algorithms predict product demand across diverse locations. The retailer uses these insights to optimize inventory levels, reduce costs and enhance efficiency. AWS’s scalable infrastructure allows for rapid, large-scale implementation, ensuring agility and data security. Overall, this partnership enables the retailer to make data-driven decisions, improve supply chain efficiency and ultimately boost customer satisfaction, all in a secure and scalable cloud environment.

Infusing AI to transform business operations

DB2 PureScale on AWS provides a scalable and resilient database solution that’s well-suited for AI-driven applications. By taking advantage of AWS’s robust cloud infrastructure, PureScale ensures high availability and fault tolerance, critical for businesses operating around the clock. A leading insurance player in Japan leverages this technology to infuse AI into their operations. Real-time analytics on customer data — made possible by DB2’s high-speed processing on AWS — allows the company to offer personalized insurance packages. AI algorithms sift through large datasets to identify fraud risks and streamline claims processing, improving both efficiency and customer satisfaction. AWS’s secure and scalable environment ensures data integrity while providing the computational power needed for advanced analytics. Thus, DB2 PureScale on AWS equips this insurance company to innovate and make data-driven decisions rapidly, maintaining a competitive edge in a saturated market.

Modernizing the data warehouse with IBM watsonx.data

Modernizing a data warehouse with IBM watsonx.data on AWS offers businesses a transformative approach to managing data across various sources and formats. The platform provides an intelligent, self-service data ecosystem that enhances data governance, quality and usability. By migrating to watsonx.data on AWS, companies can break down data silos and enable real-time analytics, which is crucial for timely decision-making. One of the largest asset management companies has executed a pilot using machine learning capabilities to enable predictive analytics, uncovering trends and patterns that traditional methods might miss. One of the standout features for this company is the platform’s seamless integration with existing IT infrastructure, which reduces both the cost and the complexity of migrating from legacy systems. Whether you’re looking to streamline operations, improve customer experiences or unlock new revenue streams, IBM watsonx.data on AWS lays the foundation for a smarter, more agile approach to data management and analytics.

As AI continues to evolve, this partnership is committed to staying ahead of the curve by continuously updating its offerings, investing in joint development and providing businesses with tools that are both cutting-edge and practical.

The IBM-AWS partnership is not just a win-win for the companies involved; it’s a win for businesses across sectors. By combining IBM’s prowess in data analytics and AI with AWS’s robust cloud infrastructure, the alliance is breaking down barriers to AI adoption, offering scalable solutions, and enabling businesses to leverage AI for tangible results.

Get ready to harness the power of AI for your business


Explore how the IBM-AWS partnership can offer you tailored solutions that drive results. Join us at AWS re:Invent 2023 from November 27 to December 1 in Las Vegas, Nevada. At booth #930, IBM will spotlight its advancements in AI, demonstrating how we help clients scale AI workloads swiftly and responsibly using our comprehensive generative AI stack. This event offers a firsthand look at IBM’s transformative solutions that are reshaping industries. Engage with our experts, take part in live demos and explore tailor-made solutions for your business needs.

Source: ibm.com