Wednesday 31 January 2024

The blueprint for a modern data center

Part one of this series examined the dynamic forces behind data center re-transformation. Now, we’ll look at designing the modern data center, exploring the role of advanced technologies, such as AI and containerization, in the quest for resiliency and sustainability. 

Strategize and plan for differentiation 


As a leader, you need to know where you want to take the business; understanding the trajectory of your organization is nonnegotiable. However, your perspective must be grounded in reality, meaning you must understand the limitations of your current environment: 

  • Where does it currently fall short? 
  • Where have you already identified room for improvement? 
  • Where can you make meaningful changes? 

The answers to these questions can help guide your transformation plan with achievable goals. The most meaningful changes will likely come from replacing outdated, disjointed technology systems and inefficient, costly processes. 

It’s also important to clearly define the role of advanced technologies in your future environment. AI and containerization are not just buzzwords. They are powerful tools and methods that can help you drive efficiency, agility and resiliency throughout the data center and into everything the business does through digital means. 

  • AI offers insights and the ability to automate capabilities intelligently. 
  • More than half of surveyed organizations seek to increase value and revenue through the adoption of generative AI.
  • Containerization gives you greater flexibility and growth potential in deploying applications in any hybrid cloud environment that you can envision (and need). 
  • Gartner predicts that 15% of on-premises production workloads will run in containers by 2026.

Don’t just keep pace with these advancements in technology. Use them and build advanced data-driven processes around them to enable your entire organization to create a distinct, competitive edge. 

When you have your objectives and mission clearly defined, you can develop a strategic plan that incorporates advanced technologies, processes and even partner services to achieve the outcomes you’ve outlined. This plan should not solely address immediate needs; it should also be flexible and adaptable enough to overcome future challenges and seize future opportunities. It should include resiliency and sustainability as core tenets to make sure you can keep growing and transforming for years to come. 

Use data and automated precision to produce results


What is automated precision? When you integrate data, tools and processes to manage and optimize the various aspects of a data center, automated precision means harnessing that technology to run operations with: 

  • High accuracy 
  • Minimal intervention 
  • Consistent performance 
  • Predictable outcomes 

The global data center automation market was valued at $7.6 billion in 2022. It is expected to reach $20.9 billion by 2030.

Automation will play a pivotal role in transforming the data center, where scale and complexity will outpace the ability of humans to keep everything running smoothly. For you as a business leader, this means pivoting from manual methods to a more streamlined, technology-driven and data-enabled approach. 

AI is a critical component in this advancement toward automated precision. Able to analyze large data sets, predict trends and make informed decisions, AI’s role will be to transform mere automation into intelligent operation. 34% of surveyed organizations plan to invest the most in AI and machine learning (ML) over the year. 

When you apply AI-enabled automated precision to your data center, you can: 

  • Handle repetitive, time-consuming tasks with unmatched speed and accuracy 
  • Free up human resources for more strategic initiatives that can’t be automated 
  • Identify anomalies swiftly and predict failures before they occur (see the sketch after this list) 
  • Intelligently distribute resources based on real-time demand 
  • Detect and mitigate threats more effectively compared to traditional methods 
  • Optimize power usage and reduce waste in alignment with your sustainability goals 
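
As a simple illustration of the anomaly-detection point above, here is a minimal sketch, assuming nothing more than Python’s standard library: it flags telemetry readings (for example, server-inlet temperature) that deviate sharply from a rolling baseline. The thresholds and sample data are illustrative assumptions, not an IBM or VMware implementation.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag telemetry readings that deviate strongly from the recent trend.

    readings  - iterable of (timestamp, value) pairs, e.g. inlet temperature in C
    window    - number of recent samples used as the baseline
    threshold - z-score above which a reading is reported as anomalous
    """
    history = deque(maxlen=window)
    anomalies = []
    for ts, value in readings:
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((ts, value))
        history.append(value)
    return anomalies

# Illustrative telemetry: steady ~22 C readings with one sudden spike.
telemetry = [(t, 22.0 + 0.1 * (t % 3)) for t in range(40)] + [(40, 31.5)]
print(detect_anomalies(telemetry))  # -> [(40, 31.5)]
```

In a real data center, the same pattern is typically applied by a platform across thousands of sensors, with learned seasonal baselines rather than a fixed window.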

Chart a course for resilience and sustainability 


The evolution of the data center helps position your organization at the forefront of technological advancement and at the heart of sustainable business practices. Adopting a modern data center that embraces AI, edge computing and containerization can help ensure that your organization will emerge as a dynamic, efficient and environmentally conscious business. 

IBM® and VMware can help you design the data center of the future—one that uses automation, hyperconvergence and cloud technologies to support high performance, reliability and sustainability. With offerings for security, compliance, analytics and containerization, IBM and VMware can ensure that your modern data center meets your business goals.

Source: ibm.com

Tuesday 30 January 2024

MRO spare parts optimization

Many managers in asset-intensive industries like energy, utilities or process manufacturing perform a delicate high-wire act when managing inventory. Finding the right balance becomes crucial for helping ensure the success of maintenance, repair and operations (MRO) initiatives, specifically the spare parts that support them.

What’s at stake?


Whether MRO processes address preventive maintenance, service failures or shutdown overhauls, the desired results are the same: deliver increased service levels, function safely and sustainably, operate efficiently and reduce unplanned and costly downtime.


A recent report shows a significant increase in the cost of manufacturing downtime from 2021 to 2022, with Fortune Global 500 companies now losing 11% of their yearly turnover, which amounts to nearly USD 1.5 trillion, up from USD 864 billion in 2019–2020.

The swinging pendulum


The MRO spare parts inventory varies depending on the industry and equipment, ranging from highly specific items to more basic supplies. These supplies include everything from large infrastructure items such as turbines, generators, transformers and heating, ventilation and air conditioning systems to smaller items like gears, grease and mops. Many asset-intensive businesses are prioritizing inventory optimization due to the pressures of complying with growing Industry 4.0 regulations, undergoing digital transformation and cutting costs.

Over time, inventory managers have tested different approaches to determine the best fit for their organizations.

For many years, businesses favored just-in-time operations as the most logical approach for managing inventory and minimizing holding costs. However, recent disruptions in the global supply chain, due to the pandemic and geo-specific issues, have caught many off guard.

If operations needed a spare part that wasn’t readily available, it often resulted in equipment downtime or costly stockouts. Even before these disruptions, this method frequently led to additional expenses for expediting or shipping, along with concerns about the quality of parts.

Considering that an IDC survey found 37% of companies manage spare parts inventory by using spreadsheets, email, shared folders or an uncertain approach, it becomes evident that this practice carries more risk than it might seem. Unless your demand forecasting is accurate, a reactive approach like this can prove inefficient.

Now, consider the just-in-case approach. Some managers choose to stock excess spare parts due to past encounters with delays and other negative consequences. Maintaining safety stock is beneficial but excessive inventory incurs costs and demands significant time for management. When assets lack criticality and priority assignments, there is a risk of accumulating unnecessary parts that might become obsolete on the shelves. This, in turn, initiates a continuous cycle of spending on inventory reduction efforts.

The benefits of finding the right balance


So, when considering the drawbacks of both just-in-time and just-in-case approaches, the goal becomes finding the ideal balance that helps ensure you have the right materials to sustain business operations while providing your teams with what they need at the right time.

This isn’t purely theoretical. There are quantifiable benefits to balancing the dynamics of MRO spares and material demand. Many organizations lack the in-house resources or knowledge to run these necessary procedures, but those capable of doing so report:

  • A 50% reduction in unplanned downtime associated with parts.
  • A 40% reduction in inventory costs.
  • A 35% decrease in maintenance budgets.
  • A 25% increase in service levels.

How to achieve the right balance


The short answer: collect, analyze and act on data in real time to unlock immediate value across your operations. Is it easier said than done? It can be if you rely on a spreadsheet, physical asset counts or solely on condition monitoring.

Consider these questions:

  1. Do you have a platform that combines statistical analyses, prescriptive analytics and optimization algorithms?
  2. Can you segment data from all your systems (enterprise asset management, enterprise resource planning (ERP), customer relationship management and sensor technology) by using key parameters like cost, criticality, usage, actual lead times and more?
  3. If you rely on transactional ERP systems, are you missing the critical analytics and reporting capabilities you need, given the recognized gaps in SAP for asset-intensive industries?
  4. Can you review historical data modules?
  5. Do you perform baseline analyses that look at inventory value based on average price, inherited items and other criteria?
  6. Can you conduct what-if scenarios to visualize your options?
  7. Do you have purpose-built algorithms to improve intermittent and variable demand forecasting (see the sketch after this list)?
  8. Can you group and prioritize work by using work queues and monitor progress by organizational areas and data sets?
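
To make question 7 concrete, below is a minimal sketch of Croston’s method, a classic algorithm for intermittent demand forecasting. The smoothing constant and demand history are illustrative assumptions, and commercial MRO platforms layer far more sophisticated analytics on top of this idea.

```python
def croston_forecast(demand, alpha=0.1):
    """Croston's method: forecast intermittent demand as the ratio of the
    smoothed demand size to the smoothed interval between non-zero demands.

    demand - list of per-period demand quantities (mostly zeros for spare parts)
    alpha  - exponential smoothing constant
    Returns the expected demand per period.
    """
    nonzero = [(i, d) for i, d in enumerate(demand) if d > 0]
    if not nonzero:
        return 0.0
    size = float(nonzero[0][1])           # smoothed demand size
    interval = float(nonzero[0][0] + 1)   # smoothed interval between demands
    periods_since = 0
    for d in demand[nonzero[0][0] + 1:]:
        periods_since += 1
        if d > 0:
            size = alpha * d + (1 - alpha) * size
            interval = alpha * periods_since + (1 - alpha) * interval
            periods_since = 0
    return size / interval

# Illustrative usage: a part consumed sporadically over 12 periods.
history = [0, 0, 3, 0, 0, 0, 2, 0, 0, 4, 0, 0]
print(round(croston_forecast(history), 3))  # expected demand per period
```

Croston-style methods forecast demand size and demand interval separately, which is why they handle the long runs of zeros typical of spare parts better than ordinary exponential smoothing.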

While artificial intelligence (AI) already factors into many inventory managers’ plans, it’s worth keeping an eye on the latest iteration of the technology. Generative AI has the potential to deliver powerful support in key data areas:

  • Master data cleansing to reduce duplications and flag outliers (a simplified sketch follows this list).
  • Master data enrichment to enhance categorization and materials attributes.
  • Master data quality to improve scoring, prioritization and automated validation of data.
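
The sketch below is deliberately not generative AI; it is a simple rule-based stand-in (standard-library Python) that illustrates the cleansing tasks named above: flagging near-duplicate part descriptions and obvious price outliers in master data. The part records, similarity cutoff and outlier factor are illustrative assumptions.

```python
from difflib import SequenceMatcher
from statistics import median

parts = [
    {"id": "P-001", "desc": "Bearing, ball, 6204-2RS", "price": 12.40},
    {"id": "P-002", "desc": "Ball bearing 6204 2RS",   "price": 11.95},
    {"id": "P-003", "desc": "Gasket, flange, 4in",     "price": 6.10},
    {"id": "P-004", "desc": "Hydraulic pump seal kit", "price": 480.00},
]

def normalize(text):
    """Lowercase and strip punctuation so descriptions can be compared fairly."""
    return " ".join(text.lower().replace(",", " ").replace("-", " ").split())

def near_duplicates(items, cutoff=0.7):
    """Pairs of records whose normalized descriptions look alike."""
    pairs = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            ratio = SequenceMatcher(
                None, normalize(items[i]["desc"]), normalize(items[j]["desc"])
            ).ratio()
            if ratio >= cutoff:
                pairs.append((items[i]["id"], items[j]["id"], round(ratio, 2)))
    return pairs

def price_outliers(items, factor=5.0):
    """Records whose price is far above the catalog median (a crude outlier flag)."""
    med = median([p["price"] for p in items])
    return [p["id"] for p in items if p["price"] > factor * med]

print(near_duplicates(parts))  # flags the P-001 / P-002 pair
print(price_outliers(parts))   # flags P-004
```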

Explore optimization


IBM MRO Inventory Optimization can help optimize your MRO inventory by providing an accurate, detailed picture of performance. The flexible, scalable solution is a fully managed cloud inventory platform designed to collect, store and analyze vast amounts of MRO inventory stock data by using an array of advanced algorithms and analytics to intelligently optimize MRO inventories.

IBM supply chain consulting services can also strengthen supply chain management, helping clients build resilient, agile and sustainable end-to-end supply chains for the future.

Source: ibm.com

Saturday 27 January 2024

Decoding the future: unravelling the intricacies of Hybrid Cloud Mesh versus service mesh

Hybrid Cloud Mesh, which is generally available now, is revolutionizing application connectivity across hybrid multicloud environments. Let’s draw a comparison between Hybrid Cloud Mesh and a typical service mesh to better understand the nuances of these essential components in the realm of modern enterprise connectivity. The comparison has merit because both solutions focus on application-centric connectivity, albeit in different ways.

Before we delve into the comparison, let’s briefly revisit the concept of Hybrid Cloud Mesh and a typical service mesh.

Hybrid Cloud Mesh


Hybrid Cloud Mesh is a modern application-centric connectivity solution that is simple, secure, scalable and seamless. It creates a secure network overlay for applications distributed across cloud, edge and on-prem and holistically tackles the challenges posed by distribution of services across hybrid multicloud. 

Service mesh


A service mesh is a configurable infrastructure layer that manages all connectivity requirements between microservices. It manages service-to-service communication, providing essential functionalities such as service discovery, load balancing, encryption and authentication. 

Language libraries for connectivity have partial and inconsistent implementations of traffic management features and are difficult to maintain and upgrade. A service mesh eliminates such libraries and allows services to focus on their business logic and communicate with other services without embedding any connectivity logic of their own. 

Hybrid Cloud Mesh versus service mesh: a comparative analysis 


1. Scope of connectivity

  • Hybrid Cloud Mesh: Goes beyond microservices within a containerized application, extending connectivity to applications regardless of form factor, deployed across on-premises, public cloud and private cloud infrastructure. Its scope encompasses a broader range of deployment scenarios. 
  • Service mesh: Primarily focuses on managing communication between microservices within a containerized environment, although many service meshes have started looking outward, enabling multi-cluster any-to-any connectivity. 

2. Multicloud connectivity

  • Hybrid Cloud Mesh: Seamlessly connects applications across hybrid multicloud environments, offering a unified solution for organizations with diverse cloud infrastructures. 
  • Service mesh: Typically designed for applications deployed within a specific cloud or on-premises environment. Many service meshes have expanded scope to multicloud connectivity, but they are not fully optimized for it.  

3. Traffic engineering capabilities

  • Hybrid Cloud Mesh: Utilizes waypoints to support path optimization for cost, latency, bandwidth and other factors, enhancing application performance and security. 
  • Service mesh: No traffic engineering capabilities. Primarily focuses on internal traffic management within the microservices architecture. 

4. Connectivity intent expression

  • Hybrid Cloud Mesh: Allows users to express connectivity intent through the UI or CLI, providing an intuitive, user-friendly experience with minimal learning curve.  
  • Service mesh: Requires users to implement complex communication patterns in the sidecar proxy using configuration files. Service mesh operations entail complexity and demand a substantial learning curve, and the expert team responsible for managing the service mesh must consistently invest time and effort to use and maintain it effectively. Because of the steep learning curve and the tooling required (such as integration with CI/CD pipelines or day 0 to day 2 automation), service meshes are often adopted only after customers reach a scale that makes the investment worthwhile. 

5. Management and control plane

  • Hybrid Cloud Mesh: Employs a centralized SaaS-based management and control plane, enhancing ease of use and providing observability. Users interact with the mesh manager through a user-friendly UI or CLI. 
  • Service mesh: Often utilizes decentralized management, with control planes distributed across the microservices, requiring coordination for effective administration. 

6. Integration with gateways

  • Hybrid Cloud Mesh: Integrates with various gateways, promoting adaptability to diverse use cases and future-ready for upcoming gateway technologies. 
  • Service mesh: Primarily relies on sidecar proxies for communication between microservices within the same cluster. Typically, features on the proxy are extended to meet new requirements. 

7. Application discovery

  • Hybrid Cloud Mesh: Mesh manager continuously discovers and updates multicloud deployment infrastructure, automating the discovery of deployed applications and services. 
  • Service mesh: Typically relies on service registration and discovery mechanisms within the containerized environment. 

8. Dynamic network maintenance

  • Hybrid Cloud Mesh: Automatically adapts to dynamic changes in workload placement or environment, enabling resilient and reliable connectivity at scale without manual intervention. 
  • Service mesh: The day 2 burden of managing a service mesh that connects applications across multicloud is usually heavy because of the complexity of handling dynamic infrastructure changes. It requires manual adjustments to accommodate changes in microservices deployed across clouds, and significant ongoing effort for upgrades, security fixes and other maintenance on top of infrastructure changes, leaving little time for implementing new features. 

9. Infrastructure overhead

  • Hybrid Cloud Mesh: Data plane is composed of a limited number of edge-gateways and waypoints.
  • Service mesh: Significant overhead due to the sidecar proxy architecture, which requires one sidecar proxy for every workload. 

10. Multitenancy

  • Hybrid Cloud Mesh: Offers robust multitenancy; moreover, subtenants can be created to maintain separation between different departments or verticals within an organization. 
  • Service mesh: May lack the capability to accommodate multitenancy or a subtenant architecture. Some customers create a separate service mesh per cluster to keep tenants separate; hence, they must deploy and manage their own gateways to connect the various service meshes. 

Take the next step with Hybrid Cloud Mesh


We are excited to showcase a tech preview of Hybrid Cloud Mesh supporting the use of Red Hat® Service Interconnect gateways, simplifying application connectivity and security across platforms, clusters and clouds. Red Hat Service Interconnect, announced 23 May 2023 at Red Hat Summit, creates connections between services, applications and workloads across hybrid cloud environments. 

We’re just getting started on our journey building comprehensive hybrid multicloud automation solutions for the enterprise. Hybrid Cloud Mesh is not just a network solution; it’s engineered to be a transformative force that empowers businesses to derive maximum value from modern application architecture, enabling hybrid cloud adoption and revolutionizing how multicloud environments are utilized. We hope you join us on the journey. 

Source: ibm.com

Thursday 25 January 2024

Procurement transformation: Why excellence matters

Procurement departments tend to be less visible to many stakeholders than sales, operations or even finance departments, but the impact they have on everything from the bottom line to product quality and service delivery shouldn’t be overlooked, which is why “procurement excellence” is a worthy pursuit.

Optimizing the procurement function can help deliver successful business outcomes, such as:

  • 12–20% savings in sourcing/demand management
  • 95% improvement in compliance
  • 30% incremental spend under management
  • 35% reduction in contract value leakage

Transforming procurement


If your organization isn’t seeing these kinds of numbers, you might be a great candidate for transformation. The first step in the journey is to understand where you are, then use that information to determine where you want to be. It’s sometimes difficult to carve out the time and headspace required to establish the path to excellence, especially when you are focused, every single day, on supporting the demands that the rest of a complex organization places on procurement.

That’s when partnering with procurement advisory services can help guide your team to procurement excellence and help you deliver contributions to your enterprise’s goals: increase profitability, enhance service outcomes that can enable revenue growth, build customer satisfaction and ensure suppliers deliver high-quality goods and services.

Assessing the current environment


One of the most important steps is to review the procurement department’s mission and current role within the organization. A solid assessment delves into the overall procurement lifecycle.

How is procurement linking stakeholders’ needs with the suppliers who can deliver the right capabilities? How are teams organized to align with key objectives?

How are they delivering in these areas:

  • Business planning, stakeholder liaison
  • Sourcing operations and analytics
  • Supplier performance management and compliance
  • Purchasing operations, including requisition processing, and other critical activities

High-performing organizations keep senior management informed on a regular basis to demonstrate the value of procurement to the business. Are you providing reporting, such as:

  • Preferred supplier use percentage, by category and through an organizational view
  • Sourcing effectiveness, planned events on schedule, savings achieved and a highlight of new suppliers from new major deals
  • Supplier performance metrics, including an executive summary, top 10 performers (and bottom performers, too)

Designing the future state


Attainable goals that build on the outcome of a procurement functional assessment can provide a roadmap to the future. A procurement advisor helps map across the gaps, typically resulting in a plan to advance category management, develop a target operating model and surface other key opportunities where there is a need to mobilize action.

At IBM, with operations in more than 170 countries involving over 13,000 suppliers, this wasn’t an easy task. Using Design Thinking, among other methodologies, the procurement team was able to define the vision for its future state and scale a solution that would work. Transforming procurement with intelligent workflows has enabled its procurement professionals to onboard suppliers 10 times faster and conduct pricing analysis in 10 minutes as compared with 2 days. AI, automation, blockchain and more enabled the transformation.

In fact, more procurement organizations are thinking about incorporating generative AI as part of their future plans to drive faster, more accurate decision-making, lower operating costs and improve resiliency.

The importance of the supplier ecosystem


Suppliers are one of the enterprise’s most valuable elements, so it’s important to partner well in this arena. Preferred supplier programs with well-negotiated contracts and prices, in sync with key business strategies, can enhance the delivery of goods and services—and, as important, customer satisfaction.

A thorough review of your entire supplier ecosystem, from vendor selection and source-to-pay to benchmarking pricing, can provide critical insights into the maturity of your ecosystem. Measuring compliance is a critical KPI for accountability. Are internal stakeholders following policies or going outside the system? Are suppliers meeting contract requirements, service levels and sustainability goals?

Driving stakeholder satisfaction


Many companies closely track their net promoter scores (NPS) for both positive and negative trends. Even in the B2B space, customers demand that transactions are intuitive, easily fulfilled and within corporate policy. In many ways, an optimized procurement function sets the stage for the high-quality, on-time delivery of goods and services that exceed expectations and can generate a 30–50% improvement in NPS.

In addition to external-facing stakeholders, it’s important for the procurement team to build and maintain internal relationships. This not only helps in requirements gathering, but also cultivates trust across the organization. A model that encourages close interaction with “category experts” (who often want to handle their own procurement) can help manage sourcing, contracting and measuring success, while maintaining visibility, accountability and spending discipline.

Procurement excellence works


In a recent Expert Insights report, “Smart procurement made smarter,” IBM Institute for Business Value found that an integrated operating model fosters procurement decisions based on real-time data through advanced analytics and predictive algorithms. Top-performing organizations have achieved 52% lower costs to order materials and services, as well as 60% lower costs to process accounts payable—and more than half significantly outperformed competitors in revenue growth and effectiveness over three years.

Source: ibm.com

Tuesday 23 January 2024

The dynamic forces behind data center re-transformation

Data centers are undergoing significant evolution. Initially, they were massive, centralized facilities that were complex, costly and difficult to replicate or restore. Now, advancements in hardware and software as well as increased focus on sustainability are driving rapid transformation.

Catalysts and conundrums


A dramatic shift in development and operations is making data centers more agile and cost-effective. These changes are driven by the following:

  • market changes and customer requirements prompting organizations to decentralize and diversify their data storage and processing functions; 
  • policy and regulatory requirements such as data sovereignty, affecting data center operations and locations; 
  • the push to reduce complexity, risk and cost with the widespread adoption of cloud and hybrid infrastructure; 
  • the pressure for improved sustainability with greener, more energy-efficient practices; and 
  • AI adoption, both to improve operations and increase performance requirements. 

IDC predicts a surge in AI-enabled automation, reducing the need for human operations intervention by 70% by 2027​​. 

However, AI is also a disruptor, necessitating advanced infrastructure to meet data-intensive computational demands. This isn’t to suggest that disruption is a negative attribute. It’s quite the opposite. If embraced, disruption can push the organization to new heights and lead to tremendous outcomes. 

Embrace change and innovation 


The data center of the future is ripe for further growth and transformation. As-a-service models are expected to become more prevalent, with IDC forecasting that 65% of tech buyers will prioritize these models by 2026​​. This shift echoes the response to economic pressures and the need to fill talent gaps in IT operations. 

The growing importance of edge computing, driven by the need for faster data processing and reduced latency, also reshapes data center architecture. Gartner predicts data center teams will adopt cloud principles even for on-premises infrastructure to help optimize performance, management and cost. 

Sustainability will remain a key focus, with Gartner noting that 87% of business leaders plan to invest more in sustainability in the coming years​. This commitment is critical in reducing the environmental impact data centers will have, aligning their transformation with broader global efforts to combat climate change. This will allow organizations to demonstrate their commitment to ESG efforts as consumers look to differentiate between those that take real action and those that are simply greenwashing for marketing purposes. 

Envision the data center of tomorrow


Data centers will continue transitioning from the monolithic configurations from yesteryear to become agile, high-powered, AI-driven, sustainable ecosystems distributed globally. They will mirror the broader evolution of technology, business and society, sometimes even leading the charge to a new frontier. The data center of the future will be at the center of innovation, efficiency and environmental responsibility, playing a critical role in shaping a sustainable digital world.

Source: ibm.com

Saturday 20 January 2024

The advantages of holistic facilities management

Beyond the traditional challenges of today’s markets, many organizations must also address the challenges of real estate and facilities management. These issues include managing rising real estate costs, increasing lease rates, new sustainability goals and under-utilized hybrid work environments.

Successfully managing your facilities can directly impact employee productivity and customer satisfaction. Facilities management plays a role in the ongoing cost of the operation of your facility, its lifespan and its energy consumption, as well as how you optimize the use of your facility.

Too many businesses look at facilities from individual silos. Operations is concerned with facility maintenance, finance evaluates buildings from a cost perspective and business management fixates on employee productivity. But who in your organization is responsible for breaking down silos across teams and looking at your facilities from a more holistic viewpoint?

Individual solutions have advantages. They’re smaller to implement, don’t require consensus, are generally lower in acquisition cost and are usually easier to understand because they address only one problem. But point solutions don’t leverage common data or connect across your entire business operation, let alone produce a company-wide, easily digestible report on the state of your facility.

Building your own integrated environment with custom linkages between individual solutions presents new challenges. Developing and maintaining custom linkages often comes with a high price tag and usually doesn’t provide competitive benefits. In some cases, it even limits your ability to leverage any new capabilities that may be offered by your solution providers.

Successful organizations want the best of both: an integrated solution they can buy (not build) that can be implemented in logical phases to address their most pressing challenges and then grow over time. They want to treat their facilities as a strategic investment rather than just a necessary expense. So, there has been a growing adoption of more holistic real estate and facility management solutions. This approach not only enables successful facilities management processes but also builds an extensive single source of truth for your real estate and facilities portfolio. This data repository is invaluable for audits, acquisitions and divestitures, capital planning, lease management and evaluations.

IBM TRIRIGA Essential offerings provide a best-of-breed, focused solution that is part of the holistic architecture of TRIRIGA. This solution includes space management and reservations, capital planning and Facility Condition Assessment (FCA), or service and work order management. These Essential packages offer the entry price of a focused solution with the capability to expand into seamless management across the entire facility lifecycle.

These TRIRIGA Essential offerings enable your company to start with a ‘point-like’ solution based on a holistic foundation. These solutions can easily be extended to other areas of facilities management while leveraging and expanding the shared facilities management data repository.

Source: ibm.com

Tuesday 16 January 2024

How IBM process mining unleashed new efficiencies in BoB-Cardif Life

Enterprises now recognize the importance of leveraging innovative technologies to drive digital transformation and achieve cost efficiency. However, a lack of precise top-level planning and a narrow focus on technology without integration with business needs led to significant investments with suboptimal results for many companies.

The path of digital transformation is fraught with challenges. How do organizations avoid the digital risks of ‘technology misuse’ and achieve efficient innovation in which technology genuinely promotes production? As an insurance company integrating technology into its new development landscape, BoB-Cardif Life Insurance Co., Ltd (BoB-Cardif Life) partnered with IBM, using IBM Client Engineering methods and introducing the AI-powered process mining product IBM Process Mining. This partnership establishes a benchmark for digital transformation in the insurance industry, promoting innovation and achieving cost efficiency through AI-powered business automation.

“Co-create with IBM”


Aligned with guidance from the National Financial Regulatory Administration of China on digital transformation and driven by strategic development needs, BoB-Cardif Life formulated a five-year plan to address digital transformation, along with a digital construction plan for 2023. A Digital Transformation Office was established to oversee the company’s digital transformation efforts. The 2023 plan emphasizes ‘Lean Processes’ as a foundational initiative, with the objective of enhancing overall process management capabilities by constructing specific process-related methodologies, systems and tools. This initiative aims to avoid unnecessary costs, prevent detours and lay a solid foundation for continuous digitalization.

The Digital Transformation Office of BoB-Cardif Life analyzed the current processes using IBM Client Engineering’s innovative approach. It identified the lack of an end-to-end perspective and the need to enrich process management methods, and it sought a practical solution for process optimization and automation to address these issues. The goal is to quickly identify and resolve process issues, foster continuous resolution of cross-departmental and cross-system collaboration challenges, and optimize management mechanisms for cost reduction, efficiency enhancement and risk mitigation.

As a result, BoB-Cardif Life launched the “Super Automation” project, built on the long-term development of a series of process-related platforms, technologies and products, with the goal of reshaping and optimizing business processes through cross-departmental and cross-system collaboration. The objective is to reduce labor costs effectively, improve operational efficiency, optimize management mechanisms and mitigate business risks. 

BoB-Cardif Life regards digitalization as a technological innovation and, with foresight and responsibility, elevates it to the level of strategy and values. Centering on customer value, the company prioritizes long-term development and innovation. Decisions are made based on objective data to uphold the rationality of technology applications and prevent the misuse of digital technology. On this premise, and following a rigorous vendor selection process, BoB-Cardif Life chose IBM as its partner to explore digital innovation and transformation in processes.

A representative from BoB-Cardif Life said: “In the field of process development, IBM and BoB-Cardif Life can be described as like-minded. In our early discussions with IBM, we found that IBM’s proposal of a super-automated process solution aimed at creating value for customers aligns closely with ours. IBM provides us with a technology platform that meets our business needs and offers an innovative methodology to translate technology into business value efficiently. This capability is precisely what we need for the long-term development of our digital transformation journey.”

BoB-Cardif Life Process Mining Platform is the first of its kind in the insurance industry in China


In 2023, BoB-Cardif Life’s Super Automation Project (Phase 1) established a process mining platform that uses IBM Process Mining. It introduced Robotic Process Automation (RPA) in pilot scenarios to swiftly enhance process efficiency and quality, integrating system resources cost-effectively and breaking data silos. The initial focus was optimizing three key processes—claims, underwriting, and a complex approval process—building on the outcomes of the preceding two projects and fully implementing and refining the fundamental process methodology.

Process mining has been called the “nuclear magnetic resonance (NMR) of the enterprise” because it can quickly find the “root cause” of process problems and identify an effective “prescription.” IBM Process Mining can use data from enterprise resource planning (ERP), customer relationship management (CRM) and other business systems, applying artificial intelligence and data-driven insights to build a comprehensive view of enterprise processes while accurately identifying the problems caused by inefficiencies. IBM Process Mining can also prioritize automation improvements based on the severity of the issue and the expected ROI, driving continuous improvement of processes by triggering corrective actions or generating RPA bots. With this technology, businesses can make faster, more informed process improvement decisions, which can be expected to reduce processing time by 70% and achieve ROI of up to 176%.
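
IBM Process Mining is a full product, but the underlying idea can be sketched in a few lines: start from an event log (case ID and activity, ordered by timestamp) such as one exported from an ERP or CRM system, then derive the directly-follows relationships and the distinct process variants. The claims-handling log below is an illustrative assumption, not BoB-Cardif Life data.

```python
from collections import Counter

# Minimal claims-handling event log: (case_id, activity), already ordered by timestamp.
event_log = [
    ("C1", "Register claim"), ("C1", "Assess claim"), ("C1", "Approve"), ("C1", "Pay"),
    ("C2", "Register claim"), ("C2", "Assess claim"), ("C2", "Investigate"),
    ("C2", "Assess claim"), ("C2", "Approve"), ("C2", "Pay"),
    ("C3", "Register claim"), ("C3", "Assess claim"), ("C3", "Reject"),
]

# Group activities per case to recover each case's end-to-end path (its "variant").
traces = {}
for case_id, activity in event_log:
    traces.setdefault(case_id, []).append(activity)

# Directly-follows counts: how often activity A is immediately followed by activity B.
directly_follows = Counter(
    (trace[i], trace[i + 1])
    for trace in traces.values()
    for i in range(len(trace) - 1)
)

variant_counts = Counter(tuple(trace) for trace in traces.values())

for (a, b), n in directly_follows.most_common():
    print(f"{a} -> {b}: {n}")
print("Distinct process variants:", len(variant_counts))
```

Rare variants and unexpected directly-follows edges (such as repeated investigation loops) are exactly the kind of anomalies a process mining tool surfaces for deeper analysis.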

At present, the BoB-Cardif Life Super Automation (Phase 1) project is progressing well, with the following achievements in this phase:

◉ Methodology: BoB-Cardif Life established methods for assessing current business status, optimizing processes and implementing digital technology. The follow-up working model is determined by applying and verifying these methods, enabling iterative and continuous process optimization, a critical element in the digital transformation journey.

◉ Business Application: BoB-Cardif Life enhanced transparency in the claims process for key stakeholders, offering visibility into the complete processes and their frequency for the first time. Through this work, the company identified business anomalies, a low proportion of approvals completed without investigation, excessive duplicate work and an uneven distribution of employees handling complex cases. To address specific process breakpoints, the company proposed targeted digital technology and process management measures, with the aim of focusing all investments on customer value, eliminating waste and significantly enhancing the automation of claims. The company also recognized data issues and introduced measures to ensure continuous and effective data quality oversight.

Through this project, BoB-Cardif Life has identified targeted measures to optimize specific business processes. Along with using RPA support, implementing these measures also entails harnessing the capabilities of digital technologies, including an AI-based knowledge repository and AI-based classification of claim cases.

IBM believes that the digital transformation of an enterprise is a long-term and iterative process that requires business values, methodologies, technology platforms and tools that align with the business strategy. IBM has a wealth of experience and technical expertise in enterprise digital transformation, focusing on customer value. It offers an agile and efficient approach through methodologies such as IBM Client Engineering, along with leading, practical technology platforms and products. This enables customers to make technology, platform and path choices that meet business needs, overcome obstacles and improve the effectiveness of digital transformation. “We are very honored and proud to be able to help BoB-Cardif Life promote innovation, reduce costs and increase efficiency with AI-enabled business automation, and set a benchmark for the digital transformation in the insurance industry,” said the IBM project leader.

Source: ibm.com

Saturday 13 January 2024

IBM Cloud patterns: Private wireless network on IBM Cloud Satellite

Communication service providers (CSPs) are teaming up with hyperscalers to offer private wireless networks that are owned and fully managed by whoever builds them. A private wireless network (PWN) provides the same kind of connectivity as public wireless networks, and enterprises must weigh the pros and cons of private wireless networks using 5G technology. It is important to understand some of the common patterns, as well as the management aspects of such networks, including the components needed to create PWNs and their architecture. 

Components of a private wireless network 


There are many components that constitute a private wireless network, but these are the key required elements: 

  • Spectrum refers to the radio frequencies that are used for communications (and are allocated by the state). Choosing a licensed or unlicensed radio spectrum depends on coverage requirements, interference conditions and regulatory compliance. 
  • Network core is the control center that provides packet switching, policy control, authentication, session management, access and mobility function, routing and network management. 
  • Radio Access Network (RAN) includes the Open RAN-based virtual centralized unit (vCU), virtual distributed unit (vDU), radio unit (RU), gateway and other equipment that enables reliable, efficient and seamless wireless communication between end-user devices and the network core. 

When building out a private wireless network, there is a need for supplementary elements such as orchestration, service assurance, management, monitoring and security. These components play a pivotal role in ensuring the seamless operation, optimization and security of the private wireless network, contributing to its resilience and high-performance capabilities. 

There are essentially three types of companies that are involved in building these solutions: 

  • Telecommunications (telco) vendors like Nokia, Ericsson, Samsung and Mavenir 
  • Hyperscalers like IBM, AWS, Azure and GCP 
  • Communications service providers like AT&T, Verizon and TELUS 

Telecommunication vendors partner with cloud providers to deliver private wireless networks for enterprises either directly or through communications service providers or systems integrator (SI) partners.

Figure 1. Network-related components of private wireless network 

Figure 1 shows the networking components that a CSP would require so they can assist customers in configuring a private wireless network. These are standard network-related components that CSPs are used to deploying. Historically, many of these elements were constructed using dedicated hardware. However, a significant shift has occurred, with an increasing number of these components transitioning to a cloud-native, software-based paradigm: the virtualized (in most cases containerized) radio access network that includes related components like a vCU, a vDU and the virtualized network core. 

A representative container-based vDU architecture is shown as an example in Figure 2 to give the reader an idea of how software has replaced dedicated purpose-built hardware in network components. Figure 2 also shows the components in 5G core service-based architecture. All components are either virtualized or containerized. This is important because it has provided hyperscalers a huge opportunity in a space dominated by telcos. 

Figure 2. vDU architecture and 5G core components 

The other half of the solution is related to software components, which cloud providers bring to augment and complete the solution. They can range from automation scripts, to orchestration, to service assurance and even monitoring and logging. The most important aspect is that the hyperscaler provides the cloud platform to host the solution and the corresponding cloud services. These are shown in Figure 3 as beige-colored boxes.

Figure 3. Software-related support functions of private wireless network 

Benefits of private wireless networks 


These standalone networks can be deployed in industrial settings such as manufacturing shop floors, logistical warehouses, large hospitals, sports stadiums and enterprise campuses. Enterprises do not have to cater to the constraints of a public network. Instead, they can deploy and have control of a private network that meets their exact needs. 

Figure 4 depicts an exemplar architecture where the private wireless network comprising the 5G RAN and 5G core, along with the edge applications, are deployed on a hyperscaler platform. One of the main requirements is that the PWN be deployed on-premises. That topology fits the IBM Cloud Satellite® paradigm wherein the on-premises location can be an IBM Cloud Satellite location that is connected to an IBM Cloud® region via a secure IBM Cloud Satellite link. This design could serve enterprise customers who are looking at proximity to required 5G network components, which offer low-latency and high-throughput capability.

Figure 4. Block diagram of a private wireless network

This architecture pattern fulfills the requirement of serving the end users, devices and applications closer to where they are. To support real-time, mission-critical use cases, user plane applications are placed in the IBM Cloud Satellite location. These satellite locations could be an on-prem edge datacenter or any public cloud location.

Architecting private wireless network in IBM Cloud 


By implementing a private 5G network, large enterprises can bring a customized 5G network to their facility and keep it secure while using its high-speed, high-bandwidth and low-latency features. Like most networking solutions, there are two parts to this: the “managed from” components and the “managed to” components. The “managed from” components are hosted in the partnering hyperscaler’s cloud, and the “managed to” components are typically on the enterprise’s premises, with secure high-speed connectivity between those two locations. In our example, IBM Cloud hosts the “managed from” components while the satellite location runs the “managed to” components.

Figure 5 shows a pattern where the private wireless network is deployed on-premises on the left (at a “remote” IBM Cloud Satellite location). The workloads running in that satellite location can access supporting services hosted in the IBM Cloud on the right. The network components provided by a telco are shown in blue. Most of those are deployed in the satellite location, but some telco management systems can run in the cloud and could potentially offer multitenancy capability to support multiple enterprises. 

Figure 5. Private wireless network architecture in IBM Cloud Satellite on-premises location 

Imagine a manufacturing plant that has different kinds of movable and stationary robots and other programmable devices operating within the plant. The company could choose to employ a private wireless network because that will speed up the inter-communications needed to operate the devices while keeping things secure.

In such a scenario, the manufacturing plant could be configured as a remote IBM Cloud Satellite location running the required workloads and cloud-related components on-premises. More importantly, the network connectivity required at the location would be provided by the PWN. This setup could be duplicated in the company’s other manufacturing units or their partner suppliers, across the state or country. Each unit would have its own PWN and be configured as an IBM Cloud Satellite location. All these satellite locations would be managed from an IBM Cloud region. 

There is a master control plane running in IBM Cloud that monitors all the Satellite locations and provides centralized logging and security services as part of the managed services. IBM Cloud Site Reliability Engineers take care of all system upgrades and patching. We mentioned that the satellite link between the IBM Cloud Satellite location and IBM Cloud is a security-rich TLS 1.3 tunnel. Enterprises could also make use of IBM’s Direct Link service to connect. You will notice all the network connections described in this topology are secure.

IBM’s Cloud Pak for Network Automation (CP4NA), together with an Element Management System from a telco, would provide service orchestration and service assurance functions. IBM Cloud would provide monitoring and logging services along with identity access management for accessing the cloud environment. Additional network monitoring services could be provided by the CSP. This underscores the need for the cloud provider to work closely with the telco vendor. From an enterprise perspective, the enterprise user interface serves to mask the complexity, offering a unified interface for streamlined management, provisioning of services and comprehensive monitoring and logging. This user interface acts as a singular control hub, simplifying operations and enhancing overall efficiency. 

Enterprises that want to set up a private wireless network can do so on their own or outsource it to a hyperscaler like IBM. Hyperscalers end up partnering with a CSP to build and manage these networks. It is very important to make sure that the network is built on a flexible platform and can be scaled in the future. Though enterprises should be cognizant of costs, more enterprises are choosing PWNs because they provide a secure and reliable alternative to a public network. 

Source: ibm.com

Thursday 11 January 2024

Breaking down the advantages and disadvantages of artificial intelligence

Artificial intelligence (AI) refers to the convergent fields of computer and data science focused on building machines with human intelligence to perform tasks that would previously have required a human being, such as learning, reasoning, problem-solving, perception, language understanding and more. Instead of relying on explicit instructions from a programmer, AI systems can learn from data, allowing them to handle complex problems (as well as simple-but-repetitive tasks) and improve over time.

Today’s AI technology has a range of use cases across various industries; businesses use AI to minimize human error, reduce high costs of operations, provide real-time data insights and improve the customer experience, among many other applications. As such, it represents a significant shift in the way we approach computing, creating systems that can improve workflows and enhance elements of everyday life.

But even with the myriad benefits of AI, it does have noteworthy disadvantages when compared to traditional programming methods. AI development and deployment can come with data privacy concerns, job displacements and cybersecurity risks, not to mention the massive technical undertaking of ensuring AI systems behave as intended.

In this article, we’ll discuss how AI technology functions and lay out the advantages and disadvantages of artificial intelligence as they compare to traditional computing methods.

What is artificial intelligence and how does it work?


AI operates on three fundamental components: data, algorithms and computing power. 

  • Data: AI systems learn and make decisions based on data, and they require large quantities of data to train effectively, especially in the case of machine learning (ML) models. Data is often divided into three categories: training data (helps the model learn), validation data (tunes the model) and test data (assesses the model’s performance); a minimal split example follows this list. For optimal performance, AI models should receive data from diverse datasets (e.g., text, images, audio and more), which enables the system to generalize its learning to new, unseen data.
  • Algorithms: Algorithms are the sets of rules AI systems use to process data and make decisions. The category of AI algorithms includes ML algorithms, which learn and make predictions and decisions without explicit programming. AI can also work from deep learning algorithms, a subset of ML that uses multi-layered artificial neural networks (ANNs)—hence the “deep” descriptor—to model high-level abstractions within big data infrastructures. And reinforcement learning algorithms enable an agent to learn behavior by performing actions and receiving punishments and rewards based on their correctness, iteratively adjusting the model until it’s fully trained.
  • Computing power: AI algorithms often necessitate significant computing resources to process such large quantities of data and run complex algorithms, especially in the case of deep learning. Many organizations rely on specialized hardware, like graphic processing units (GPUs), to streamline these processes.
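
As a minimal sketch of the train/validation/test split described in the first bullet, the example below uses scikit-learn (an assumed, commonly available toolkit); the 70/15/15 proportions and the iris dataset are illustrative choices.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# First carve off 15% as the held-out test set (assesses final performance).
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.15, random_state=42)

# Then split the remainder into training (learns parameters) and validation (tunes the model).
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.15 / 0.85, random_state=42
)

print(len(X_train), len(X_val), len(X_test))  # roughly 70% / 15% / 15%
```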

AI systems also tend to fall into two broad categories:

  • Artificial Narrow Intelligence, also called narrow AI or weak AI, performs specific tasks like image or voice recognition. Virtual assistants like Apple’s Siri, Amazon’s Alexa, IBM watsonx and even OpenAI’s ChatGPT are examples of narrow AI systems.
  • Artificial General Intelligence (AGI), or Strong AI, can perform any intellectual task a human can perform; it can understand, learn, adapt and work from knowledge across domains. AGI, however, is still just a theoretical concept.

How does traditional programming work?


Unlike AI programming, traditional programming requires the programmer to write explicit instructions for the computer to follow in every possible scenario; the computer then executes the instructions to solve a problem or perform a task. It’s a deterministic approach, akin to a recipe, where the computer executes step-by-step instructions to achieve the desired result.

The traditional approach is well-suited for clearly defined problems with a limited number of possible outcomes, but it’s often impossible to write rules for every single scenario when tasks are complex or demand human-like perception (as in image recognition, natural language processing, etc.). This is where AI programming offers a clear edge over rules-based programming methods.
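
A minimal sketch of that contrast, using a toy spam-filtering task: the rules-based version must enumerate every pattern explicitly, while the learned version (a scikit-learn pipeline, an assumed toolkit) infers which words matter from labeled examples. The tiny dataset is illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Traditional approach: the programmer enumerates every rule explicitly.
def rules_based_is_spam(message: str) -> bool:
    keywords = ("free", "winner", "click here")  # misses anything not listed
    return any(k in message.lower() for k in keywords)

# AI approach: the model learns which words matter from labeled examples.
messages = [
    "Free prize winner, click here now", "Limited offer, claim your free gift",
    "Lunch at noon tomorrow?", "Please review the attached quarterly report",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

new_message = "You are our lucky prize winner, claim your gift"
print(rules_based_is_spam(new_message))  # True only because "winner" is on the keyword list
print(model.predict([new_message])[0])   # prediction learned from examples, no explicit rule
```

The rules-based filter breaks the moment spammers change their wording; the learned model generalizes from data, which is the trade-off the comparison below explores.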

What are the pros and cons of AI (compared to traditional computing)?


The real-world potential of AI is immense. Applications of AI include diagnosing diseases, personalizing social media feeds, executing sophisticated data analyses for weather modeling and powering the chatbots that handle our customer support requests. AI-powered robots can even assemble cars and minimize radiation from wildfires.

As with any technology, there are advantages and disadvantages of AI, when compared to traditional programing technologies. Aside from foundational differences in how they function, AI and traditional programming also differ significantly in terms of programmer control, data handling, scalability and availability.

  • Control and transparency: Traditional programming offers developers full control over the logic and behavior of software, allowing for precise customization and predictable, consistent outcomes. And if a program doesn’t behave as expected, developers can trace back through the codebase to identify and correct the issue. AI systems, particularly complex models like deep neural networks, can be hard to control and interpret. They often work like “black boxes,” where the input and output are known, but the process the model uses to get from one to the other is unclear. This lack of transparency can be problematic in industries that prioritize process and decision-making explainability (like healthcare and finance).
  • Learning and data handling: Traditional programming is rigid; it relies on structured data to execute programs and typically struggles to process unstructured data. In order to “teach” a program new information, the programmer must manually add new data or adjust processes. Traditionally coded programs also struggle with independent iteration. In other words, they may not be able to accommodate unforeseen scenarios without explicit programming for those cases. Because AI systems learn from vast amounts of data, they’re better suited for processing unstructured data like images, videos and natural language text. AI systems can also learn continually from new data and experiences (as in machine learning), allowing them to improve their performance over time and making them especially useful in dynamic environments where the best possible solution can evolve over time.
  • Stability and scalability: Traditional programming is stable. Once a program is written and debugged, it will perform operations the exact same way, every single time. However, the stability of rules-based programs comes at the expense of scalability. Because traditional programs can only learn through explicit programming interventions, they require programmers to write code at scale in order to scale up operations. This process can prove unmanageable, if not impossible, for many organizations. AI programs offer more scalability than traditional programs but with less stability. The automation and continuous learning features of AI-based programs enable developers to scale processes quickly and with relative ease, representing one of the key advantages of AI. However, the improvisational nature of AI systems means that programs may not always provide consistent, appropriate responses.
  • Efficiency and availability: Rules-based computer programs can provide 24/7 availability, though often only if human workers are available to operate and maintain them around the clock.

AI technologies can run 24/7 without human intervention so that business operations can run continuously. Another of the benefits of artificial intelligence is that AI systems can automate tedious or repetitive jobs (like data entry), freeing up employees’ bandwidth for higher-value work and lowering the company’s payroll costs. It’s worth mentioning, however, that automation can have significant job loss implications for the workforce. For instance, some companies have transitioned to using digital assistants to triage employee reports, instead of delegating such tasks to a human resources department. Organizations will need to find ways to fold their existing workforce into the new workflows that AI-driven productivity gains make possible.

Maximize the advantages of artificial intelligence with IBM watsonx


Omdia projects that the global AI market will be worth USD 200 billion by 2028.¹ That means businesses should expect dependency on AI technologies to increase, with the complexity of enterprise IT systems increasing in kind. But with the IBM watsonx™ AI and data platform, organizations have a powerful tool in their toolbox for scaling AI.

IBM watsonx enables teams to manage data sources, accelerate responsible AI workflows, and easily deploy and embed AI across the business—all in one place. watsonx offers a range of advanced features, including comprehensive workload management and real-time data monitoring, designed to help you scale and accelerate AI-powered IT infrastructures with trusted data across the enterprise.

Though not without its complications, the use of AI represents an opportunity for businesses to keep pace with an increasingly complex and dynamic world by meeting it with sophisticated technologies that can handle that complexity.

Source: ibm.com

Saturday 6 January 2024

A brief history of cryptography: Sending secret messages throughout time

A brief history of cryptography: Sending secret messages throughout time

Derived from the Greek words for “hidden writing,” cryptography is the science of obscuring transmitted information so that only the intended recipient can interpret it. Since the days of antiquity, the practice of sending secret messages has been common across almost all major civilizations. In modern times, cryptography has become a critical lynchpin of cybersecurity. From securing everyday personal messages and the authentication of digital signatures to protecting payment information for online shopping and even guarding top-secret government data and communications—cryptography makes digital privacy possible.

While the practice dates back thousands of years, the use of cryptography and the broader field of cryptanalysis are still considered relatively young, having made tremendous advancements in only the last 100 years. Coinciding with the invention of modern computing in the 20th century, the dawn of the digital age also heralded the birth of modern cryptography. To establish digital trust, mathematicians, computer scientists and cryptographers began developing modern cryptographic techniques and cryptosystems to protect critical user data from hackers, cybercriminals and prying eyes.

Most cryptosystems begin with an unencrypted message known as plaintext, which is then encrypted into an indecipherable code known as ciphertext using one or more encryption keys. This ciphertext is then transmitted to a recipient. If the ciphertext is intercepted and the encryption algorithm is strong, the ciphertext will be useless to any unauthorized eavesdroppers because they won’t be able to break the code. The intended recipient, however, will easily be able to decipher the text, assuming they have the correct decryption key.  
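
As an illustration of that flow, the sketch below encrypts and decrypts a short message in Python with the widely used third-party cryptography package (an assumption here; any comparable library would do). Fernet is a symmetric recipe, so the same key both encrypts and decrypts:

  # Assumes the third-party "cryptography" package is installed (pip install cryptography).
  from cryptography.fernet import Fernet

  key = Fernet.generate_key()        # the secret the recipient must already hold
  cipher = Fernet(key)

  ciphertext = cipher.encrypt(b"Meet at dawn")   # plaintext -> ciphertext
  print(ciphertext)                              # useless to an eavesdropper without the key
  print(cipher.decrypt(ciphertext))              # b'Meet at dawn' for the intended recipient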

In this article, we’ll look back at the history and evolution of cryptography.

Ancient cryptography


1900 BC: One of the first implementations of cryptography was found in the use of non-standard hieroglyphs carved into the wall of a tomb from the Old Kingdom of Egypt. 

1500 BC: Clay tablets found in Mesopotamia contained enciphered writing believed to be secret recipes for ceramic glazes—what might be considered trade secrets in today’s parlance. 

650 BC: Ancient Spartans used an early transposition cipher to scramble the order of the letters in their military communications. The process works by writing a message on a strip of leather wrapped around a hexagonal wooden staff known as a scytale. When the strip is wound around a correctly sized scytale, the letters line up to form a coherent message; however, when the strip is unwound, the message is reduced to ciphertext. In the scytale system, the specific size of the scytale can be thought of as a private key. 
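
In modern terms, a scytale is a simple columnar transposition. A rough Python sketch of the idea might look like this, assuming the message is padded so its length is a multiple of the number of faces on the staff:

  def scytale_encrypt(plaintext: str, faces: int) -> str:
      # Writing across the wound strip row by row, then unwinding the strip,
      # is equivalent to reading every "faces"-th character of the message.
      return "".join(plaintext[i::faces] for i in range(faces))

  def scytale_decrypt(ciphertext: str, faces: int) -> str:
      rows = len(ciphertext) // faces   # assumes the message was padded to fit evenly
      return "".join(ciphertext[i::rows] for i in range(rows))

  secret = scytale_encrypt("HELPMEIAMUNDERATTACK", faces=4)   # "HMMETEEURALINACPADTK"
  assert scytale_decrypt(secret, faces=4) == "HELPMEIAMUNDERATTACK"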

100-44 BC: To share secure communications within the Roman army, Julius Caesar is credited with using what has come to be called the Caesar Cipher, a substitution cipher wherein each letter of the plaintext is replaced by a different letter determined by moving a set number of letters either forward or backward within the Latin alphabet. In this symmetric key cryptosystem, the specific number of positions and the direction of the letter shift serve as the private key.
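
The scheme is easy to reproduce today; a minimal Python version, using the modern 26-letter alphabet rather than the classical Latin one, could look like this:

  def caesar(text: str, shift: int) -> str:
      out = []
      for ch in text.upper():
          if ch.isalpha():
              out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
          else:
              out.append(ch)   # leave spaces and punctuation untouched
      return "".join(out)

  ciphertext = caesar("VENI VIDI VICI", 3)   # "YHQL YLGL YLFL"
  plaintext  = caesar(ciphertext, -3)        # shifting back recovers the message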

Medieval cryptography


800: Arab mathematician Al-Kindi invented the frequency analysis technique for cipher breaking, representing one of the most monumental breakthroughs in cryptanalysis. Frequency analysis uses linguistic data—such as the frequency of certain letters or letter pairings, parts of speech and sentence construction—to reverse engineer private decryption keys. Frequency analysis techniques can be used to expedite brute-force attacks in which codebreakers attempt to methodically decrypt encoded messages by systematically applying potential keys in hopes of eventually finding the correct one. Monoalphabetic substitution ciphers that use only one alphabet are particularly susceptible to frequency analysis, especially if the private key is short and weak. Al-Kindi’s writings also covered cryptanalysis techniques for polyalphabetic ciphers, which replace plaintext with ciphertext from multiple alphabets for an added layer of security that is far less vulnerable to frequency analysis. 
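
As a toy illustration of the idea, the sketch below guesses the shift of a Caesar-style cipher by assuming the most frequent letter in the ciphertext stands for "E", the most common letter in English. This is a simplification that works best on longer messages:

  from collections import Counter

  def guess_caesar_shift(ciphertext: str) -> int:
      # Assumes the plaintext is ordinary English, so its most common letter is likely "E".
      letters = [ch for ch in ciphertext.upper() if ch.isalpha()]
      most_common = Counter(letters).most_common(1)[0][0]
      return (ord(most_common) - ord("E")) % 26   # likely shift used to encrypt

  # Applying the inverse shift (see the Caesar sketch above) then reveals the plaintext.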

1467: Leon Battista Alberti, considered the father of modern cryptography, most clearly explored the use of ciphers incorporating multiple alphabets, known as polyalphabetic cryptosystems, the Middle Ages’ strongest form of encryption. 

1500s: Although actually published by Giovan Battista Bellaso, the Vigenère Cipher was misattributed to French cryptologist Blaise de Vigenère and is considered the landmark polyalphabetic cipher of the 16th century. While Vigenère did not invent the Vigenère Cipher, he did create a stronger autokey cipher in 1586. 

Modern cryptography 


1914: The outbreak of World War I at the beginning of the 20th century saw a steep increase in both cryptology for military communications and cryptanalysis for codebreaking. The success of English cryptologists in deciphering German telegram codes led to pivotal victories for the Royal Navy.

1917: American Edward Hebern created the first cryptography rotor machine by combining electrical circuitry with mechanical typewriter parts to automatically scramble messages. Users could type a plaintext message into a standard typewriter keyboard and the machine would automatically create a substitution cipher, replacing each letter with a randomized new letter to output ciphertext. The ciphertext could in turn be decoded by manually reversing the circuit rotor and then typing the ciphertext back into the Hebern Rotor Machine, producing the original plaintext message.

1918: In the aftermath of the war, German cryptologist Arthur Scherbius developed the Enigma Machine, an advanced version of Hebern’s rotor machine, which also used rotor circuits to both encode plaintext and decode ciphertext. Used heavily by the Germans before and during WWII, the Enigma Machine was considered suitable for the highest level of top-secret cryptography. However, like Hebern’s Rotor Machine, decoding a message encrypted with the Enigma Machine required the advance sharing of machine calibration settings and private keys that were susceptible to espionage and eventually led to the Enigma’s downfall.

1939-45: At the outbreak of World War II, Polish codebreakers fled Poland and joined forces with notable British mathematicians—including the father of modern computing, Alan Turing—to crack the German Enigma cryptosystem, a critical breakthrough for the Allied Forces. Turing’s work also established much of the foundational theory of algorithmic computation. 

1975: Researchers working on block ciphers at IBM developed the Data Encryption Standard (DES)—the first cryptosystem certified by the National Institute of Standards and Technology (then known as the National Bureau of Standards) for use by the US government. While DES was strong enough to stymie even the most powerful computers of the 1970s, its short key length makes it insecure for modern applications; its architecture, however, remains highly influential in the advancement of cryptography.

1976: Researchers Whitfield Diffie and Martin Hellman introduced the Diffie-Hellman key exchange method for securely sharing cryptographic keys. This enabled a new form of encryption called asymmetric key algorithms. These types of algorithms, also known as public key cryptography, offer an even higher level of privacy by no longer relying on a shared private key. In public key cryptosystems, each user has their own private secret key, which works in tandem with a shared public key for added security.
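
The arithmetic behind the exchange fits in a few lines of Python; the tiny modulus below is only for illustration, since real deployments use primes thousands of bits long:

  p, g = 23, 5                       # public modulus and generator (toy values)
  alice_secret, bob_secret = 6, 15   # private exponents, never transmitted

  A = pow(g, alice_secret, p)        # Alice sends A = g^a mod p
  B = pow(g, bob_secret, p)          # Bob sends B = g^b mod p

  shared_alice = pow(B, alice_secret, p)   # (g^b)^a mod p
  shared_bob   = pow(A, bob_secret, p)     # (g^a)^b mod p
  assert shared_alice == shared_bob        # both parties arrive at the same secret key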

1977: Ron Rivest, Adi Shamir and Leonard Adleman introduced the RSA public key cryptosystem, one of the oldest encryption techniques for secure data transmission still in use today. RSA public keys are created by multiplying large prime numbers, and the resulting product is prohibitively difficult for even the most powerful computers to factor without knowledge of the private key used to create the public key.
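
A toy-sized version of the key math, with primes far too small for real security, can be sketched in Python as follows (the modular inverse via pow requires Python 3.8 or later):

  p, q = 61, 53               # toy primes; production RSA uses primes hundreds of digits long
  n = p * q                   # public modulus
  phi = (p - 1) * (q - 1)     # Euler's totient of n
  e = 17                      # public exponent
  d = pow(e, -1, phi)         # private exponent: modular inverse of e mod phi

  message = 65
  ciphertext = pow(message, e, n)            # encrypt with the public key (e, n)
  assert pow(ciphertext, d, n) == message    # decrypt with the private key (d, n)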

2001: Responding to advancements in computing power, DES was replaced by the more robust Advanced Encryption Standard (AES) encryption algorithm. Like DES, AES is a symmetric cryptosystem; however, it uses a much longer encryption key that is computationally infeasible to brute-force with modern hardware.
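
In practice, AES is almost always used through a vetted library rather than implemented by hand. A minimal sketch using AES in GCM mode with the third-party Python cryptography package (assumed to be installed) looks like this:

  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  key = AESGCM.generate_key(bit_length=256)   # 256-bit symmetric key
  aesgcm = AESGCM(key)
  nonce = os.urandom(12)                      # must be unique per message for a given key

  # Third argument is optional associated data (None here).
  ciphertext = aesgcm.encrypt(nonce, b"top secret", None)
  assert aesgcm.decrypt(nonce, ciphertext, None) == b"top secret"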

Quantum cryptography, post-quantum cryptography and the future of encryption


The field of cryptography continues to evolve to keep pace with advancing technology and increasingly sophisticated cyberattacks. Quantum cryptography (also known as quantum encryption) refers to the applied science of securely encrypting and transmitting data based on the naturally occurring and immutable laws of quantum mechanics for use in cybersecurity. While still in its early stages, quantum encryption has the potential to be far more secure than previous types of cryptographic algorithms and, theoretically, even unhackable. 

Not to be confused with quantum cryptography, which relies on the natural laws of physics to produce secure cryptosystems, post-quantum cryptographic (PQC) algorithms use different types of mathematical cryptography to create encryption that even quantum computers cannot break.

According to the National Institute of Standards and Technology (NIST) (link resides outside ibm.com), the goal of post-quantum cryptography (also called quantum-resistant or quantum-safe) is to “develop cryptographic systems that are secure against both quantum and classical computers, and can interoperate with existing communications protocols and networks.”

Source: ibm.com