
Wednesday, 16 October 2024

How well do you know your hypervisor and firmware?

IBM Cloud Virtual Private Cloud (VPC) is designed for secured cloud computing, and several features of our platform planning, development and operations help ensure that design. However, because security in the cloud is typically a shared responsibility between the cloud service provider and the customer, it’s essential for you to fully understand the layers of security that your workloads run on. That’s why, in this post, we detail a few key security components of IBM Cloud VPC that aim to provide secured computing for our virtual server customers.

Let’s start with the hypervisor


The hypervisor, a critical component of any virtual server infrastructure, is designed to provide a secure environment on which customer workloads and a cloud’s native services can run. The entirety of its stack—from hardware and firmware to system software and configuration—must be protected from external tampering. Firmware and hypervisor software are the lowest layers of modifiable code and are prime targets of supply chain attacks and other privileged threats. Kernel-mode rootkits (also known as bootkits) are one type of privileged threat and are difficult to uncover by endpoint protection systems, such as antivirus and endpoint detection and response (EDR) software, because they run before any protection system and can obscure their own presence. In short, securing the supply chain itself is crucial.

IBM Cloud VPC implements a range of controls to help address the quality, integrity and supply chain of the hardware, firmware and software we deploy, including qualification and testing before deployment.

IBM Cloud VPC’s 3rd generation solutions leverage pervasive code signing to protect the integrity of the platform. Through this process, firmware is digitally signed at the point of origin and signatures are authenticated before installation. At system start-up, a platform security module then verifies the integrity of the system firmware image before initialization of the system processor. The firmware, in turn, authenticates the hypervisor, including device software, thus establishing the system’s root of trust in the platform security module hardware.
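To make the signing step concrete, here is a minimal sketch of the kind of check that pervasive code signing implies: verify a detached signature over a firmware image against a trusted public key before allowing installation. The file names, the RSA/PKCS#1 v1.5 scheme and the key handling are illustrative assumptions, not a description of IBM Cloud's actual tooling.

```python
# Illustrative only: verify a firmware image's detached signature before install.
# Assumes an RSA signing key and PKCS#1 v1.5 / SHA-256 (assumptions, not IBM's
# actual signing infrastructure).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def firmware_is_authentic(image_path: str, signature_path: str, pubkey_path: str) -> bool:
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(image_path, "rb") as f:
        image = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True   # signature matches: image came from the approved supplier
    except InvalidSignature:
        return False  # reject and quarantine the image

# Example (assumes the files exist):
# print(firmware_is_authentic("bmc.img", "bmc.img.sig", "supplier_pubkey.pem"))
```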

Device configuration and verification


IBM Cloud Virtual Servers for VPC provide a wide variety of profile options (vCPU + RAM + bandwidth provisioning bundles) to help meet customers’ different workload requirements. Each profile type is managed through a set of product specifications. These product specifications outline the physical hardware’s composition, the firmware’s composition and the configuration for the server. The firmware covered includes, but is not limited to, the host firmware and the firmware of component devices. These product specifications are developed and overseen by a hardware leadership team and are versioned for use across our fleet of servers.

As new hardware and software assets are brought into our IBM Cloud VPC environment, they’re also mapped to a product specification, which outlines their intended configuration. The intake verification process then validates that the server’s actual physical composition matches that of the specification before its entry into the fleet. If there’s a physical composition that doesn’t match the specification, the server is cordoned off for inspection and remediation. 
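As a rough illustration of what such an intake check might look like, the sketch below compares a server's reported inventory against a product specification and flags mismatches. The specification fields and values are hypothetical; they stand in for the governed product specifications described above.

```python
# A minimal sketch of an intake check, assuming a hypothetical specification
# format: the server's reported inventory must match the product specification
# exactly before the server joins the fleet. Field names and values are invented.
PRODUCT_SPEC = {
    "cpu_model": "example-cpu-gen3",
    "dimm_count": 32,
    "nic_model": "example-100g-nic",
}

def intake_verification(actual_inventory: dict) -> list:
    """Return a list of mismatches; an empty list means the server passes intake."""
    mismatches = []
    for field, expected in PRODUCT_SPEC.items():
        found = actual_inventory.get(field)
        if found != expected:
            mismatches.append(f"{field}: expected {expected!r}, found {found!r}")
    return mismatches

issues = intake_verification(
    {"cpu_model": "example-cpu-gen3", "dimm_count": 24, "nic_model": "example-100g-nic"}
)
if issues:
    print("Cordon server for inspection and remediation:", issues)
```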

The intake verification process also verifies the firmware and hardware. 

There are two dimensions of this verification:

1. Firmware is signed by an approved supplier before it can be installed on an IBM Cloud Virtual Server for VPC system. This helps ensure only approved firmware is applied to the servers. IBM Cloud works with several suppliers to help ensure firmware is signed and components are configured to reject unauthorized firmware.

2. Only firmware that is approved through the IBM Cloud governed specification qualifies for installation. The governed specification is updated cyclically to add newly qualified firmware versions and remove obsolete versions. This type of firmware verification is also performed as part of the server intake process and before any firmware update.

Server configuration is also managed through the governed product specifications. Certain solutions might need custom unified extensible firmware interface (UEFI) configurations, certain features enabled or restrictions put in place. The product specification manages the configurations, which are applied through automation on the servers. Servers are scanned by IBM Cloud’s monitoring and compliance framework at run time.

Specification versioning and promotion


As mentioned earlier, the core components of the IBM Cloud VPC virtual server management process are the product specifications. Product specifications are definition files that contain the configurations for all server profiles; they are maintained and reviewed by the corresponding IBM Cloud product leader and a governance-focused leadership team. Together, they control and manage the server’s approved components, configuration and firmware levels to be applied. The governance-focused leadership team strives for commonality where needed, whereas the product leaders focus on providing value and market differentiation.

It’s important to remember that specifications don’t stand still. These definition files are living documents that evolve as new firmware levels are released or the server hardware grows to support extra vendor devices. Because of this, the IBM Cloud VPC specification process is versioned to capture changes throughout the server’s lifecycle. Each server deployment captures the version of the specification that it was deployed with and provides identification of the intended versus actual state as well.

Promotion of specifications is also necessary. When a specification is updated, it doesn’t necessarily mean it’s immediately effective across the production environments. Instead, it moves through the appropriate development, integration and preproduction (staging) channels before moving to production. Depending on the types of devices or fixes being addressed, there might even be a varying schedule for how quickly the rollout occurs.

Figure 1: IBM Cloud VPC specification promotion process
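As a toy model of the promotion flow shown in Figure 1, the sketch below walks a specification version through each channel in order and holds it at the first failing gate. The channel names come from the text; the gating check itself is a placeholder.

```python
# A simplified model of the promotion flow in Figure 1: a specification version
# must clear each channel, in order, before it becomes effective in production.
CHANNELS = ["development", "integration", "preproduction", "production"]

def promote(spec_version: str, passes_checks) -> str:
    """Walk a spec version through each channel, stopping at the first failed gate."""
    for channel in CHANNELS:
        if not passes_checks(spec_version, channel):
            return f"{spec_version} held in {channel} pending fixes"
        print(f"{spec_version} promoted through {channel}")
    return f"{spec_version} is now effective in production"

# Example: a gate that always passes, so the version reaches production.
print(promote("spec-v42", lambda version, channel: True))
```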

Firmware on IBM Cloud VPC is typically updated in waves. Where possible, it might be updated live, although some updates require downtime. Generally, this is unseen by our customers because of live migration. However, as firmware updates roll through production, live-migrating customers off affected hosts can take time. So, when a specification update is promoted through the pipeline, it then starts the update through the various runtime systems. The velocity of the update is generally dictated by the severity of the change.

How IBM Cloud VPC virtual servers set up a hardware root of trust


IBM Cloud Virtual Servers for VPC include root of trust hardware known as the platform security module. Among other functions, the platform security module hardware is designed to verify the authenticity and integrity of the platform firmware image before the main processor can boot. It verifies the image authenticity and signature using an approved certificate. The platform security module also stores copies of the platform firmware image. If the platform security module finds that the firmware image installed on the host was not signed with the approved certificate, the platform security module replaces it with one of its images before initializing the main processor.

Upon initialization of the main processor and loading of the system firmware, the firmware is then responsible for authenticating the hypervisor’s bootloader as part of a process known as secure boot, which aims to establish the next link in a chain of trust. The firmware verifies that the bootloader was signed using an authorized key before it was loaded. Keys are authorized when their corresponding public counterparts are enrolled in the server’s key database. Once the bootloader is cleared and loaded, it validates the kernel before the latter can run. Finally, the kernel validates all modules before they’re loaded onto the kernel. Any component that fails the validation is rejected, causing the system boot to halt.
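Conceptually, the chain of trust works like the sketch below: each stage must be verified against an allow-list of signing keys before it is loaded, and the boot halts at the first failure. The key names and the verification stub are illustrative, not the actual UEFI implementation.

```python
# A conceptual sketch of the chain of trust described above. Key names and the
# verification stub are illustrative only.
AUTHORIZED_KEYS = {"ibm-cloud-uefi-ca", "approved-os-vendor-ca"}  # stand-in for the key database (db)

BOOT_CHAIN = [
    ("bootloader", "ibm-cloud-uefi-ca"),
    ("kernel", "approved-os-vendor-ca"),
    ("kernel module: nvme", "approved-os-vendor-ca"),
]

def signed_by_authorized_key(component: str, signing_key: str) -> bool:
    # Stand-in for real cryptographic signature verification of the component.
    return signing_key in AUTHORIZED_KEYS

for component, signing_key in BOOT_CHAIN:
    if not signed_by_authorized_key(component, signing_key):
        raise SystemExit(f"Secure boot halted: {component} failed verification")
    print(f"{component}: verified, loading next stage")
```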

The integration of secure boot with the platform security module aims to create a line of defense against the injection of unauthorized software through supply chain attacks or privileged operations on the server. Only approved firmware, bootloaders, kernels and kernel modules signed with IBM Cloud certificates and those of previously approved operating system suppliers can boot on IBM Cloud Virtual Servers for VPC.

The firmware configuration process described above includes the verification of firmware secure boot keys against the list of those initially approved. These consist of the keys in the authorized signature database (db), the forbidden signatures database (dbx), the key exchange key (KEK) and the platform key (PK).

Secure boot also includes a provision to enroll additional kernel and kernel module signing keys, known as Machine Owner Keys (MOKs), into the first-stage bootloader (shim). Therefore, IBM Cloud’s operating system configuration process is also designed so that only approved keys are enrolled in the MOK facility.
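A toy audit in the same spirit: confirm that every key enrolled in the db or the MOK facility appears on an approved list, and that nothing from the forbidden list (dbx) is present. The fingerprints are made-up placeholders.

```python
# Toy key audit: every enrolled key must be approved, and nothing forbidden (dbx)
# may be present. Fingerprints are placeholders, not real key hashes.
APPROVED = {"sha256:1111aaaa", "sha256:2222bbbb"}
FORBIDDEN = {"sha256:ffffffff"}

def audit_enrolled_keys(enrolled: set) -> list:
    """Return a list of findings; an empty list means the key set is compliant."""
    findings = []
    for fingerprint in sorted(enrolled):
        if fingerprint in FORBIDDEN:
            findings.append(f"forbidden key enrolled: {fingerprint}")
        elif fingerprint not in APPROVED:
            findings.append(f"unapproved key enrolled: {fingerprint}")
    return findings

print(audit_enrolled_keys({"sha256:1111aaaa", "sha256:3333cccc"}))
# -> ['unapproved key enrolled: sha256:3333cccc']
```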

Once a server passes all qualifications and is approved to boot, an audit chain is established that’s rooted in the hardware of the platform security module and extends to modules loaded into the kernel.

Figure 2: IBM Cloud VPC secure boot audit chain

How do I use verified hypervisors on IBM Cloud VPC virtual servers?


Good question. Hypervisor verification is on by default for supported IBM Cloud Virtual Servers for VPC. Choose a generation 3 virtual server profile (such as bx3d, cx3d, mx3d or gx3), as shown below, to help ensure your virtual server instances run on hypervisor-verified servers. These capabilities are readily available as part of existing offerings; customers can take advantage of them simply by deploying virtual servers with a generation 3 profile.

Figure 3: IBM Cloud Virtual Servers for VPC, Generation 3
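If you are scripting profile selection, a small helper like the one below can filter a list of profile names down to third-generation families. In practice the names would come from the VPC instance-profiles listing; the regex is a naming-convention heuristic (an assumption on our part), not an API field.

```python
# Keep only third-generation profile families (bx3d, cx3d, mx3d, gx3 and similar).
# The regex relies on the profile naming convention and is a heuristic, not an API field.
import re

GEN3_PREFIX = re.compile(r"^[a-z]+3[a-z]*-")

def generation3_profiles(profile_names: list) -> list:
    return [name for name in profile_names if GEN3_PREFIX.match(name)]

print(generation3_profiles(["bx2d-16x64", "bx3d-16x80", "gx3-24x120x1l40s", "mx3d-2x20"]))
# -> ['bx3d-16x80', 'gx3-24x120x1l40s', 'mx3d-2x20']
```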

IBM Cloud continues to evolve its security architecture and enhances it by introducing new features and capabilities to help support our customers.

Source: ibm.com

Tuesday, 8 October 2024

New IBM study: How business leaders can harness the power of gen AI to drive sustainable IT transformation

As organizations strive to balance productivity, innovation and environmental responsibility, the need for sustainable IT practices is even more pressing. A new global study from the IBM Institute for Business Value reveals that emerging technologies, particularly generative AI, can play a pivotal role in advancing sustainable IT initiatives. However, successful transformation of IT systems demands a strategic and enterprise-wide approach to sustainability.

The power of generative AI in sustainable IT

Generative AI is creating new opportunities to transform IT operations and make them more sustainable. Teams can use this technology to quickly translate code into more energy-efficient languages, develop more sustainable algorithms and software and analyze code performance to optimize energy consumption. 27% of organizations surveyed are already applying generative AI in their sustainable IT initiatives, and 63% of respondents plan to follow suit by the end of 2024. By 2027, 89% are expecting to be using generative AI in their efforts to reduce the environmental impact of IT.

Despite the growing interest in using generative AI for sustainability initiatives, leaders must first consider its broader implications, particularly energy consumption.

64% say they are using generative AI and large language models, yet only one-third of those report having made significant progress in addressing its environmental impact. To bridge this gap, executives must take a thoughtful and intentional approach to generative AI, asking questions like, “What do we need to achieve?” and “What is the smallest model that we can use to get there?”

A holistic approach to sustainability

To have a lasting impact, sustainability must be woven into the very fabric of an organization, breaking free from traditional silos and being incorporated into every aspect of operations. Leading organizations are already embracing this approach, integrating sustainable practices across their entire operations, from data centers to supply chains, to networks and products. This enables operational efficiency by optimizing resource allocation and utilization, maximizing output and minimizing waste.

The results are telling: 98% of surveyed organizations that take a holistic, enterprise-wide approach to sustainable IT report seeing benefits in operational efficiency—compared to 50% that do not. The leading organizations also attribute greater reductions in energy usage and costs to their efforts. Moreover, they report impressive environmental benefits, with two times greater reduction in their IT carbon footprint.

Hybrid cloud and automation: key enablers of sustainable IT

Many organizations are turning to hybrid cloud and automation technologies to help reduce their environmental footprint and improve business performance. By providing visibility into data, workloads and applications across multiple clouds and systems, a hybrid cloud platform enables leaders to make data-driven decisions. This allows them to determine where to run their workloads, thereby reducing energy consumption and minimizing their environmental impact.
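As a toy illustration of that kind of data-driven placement decision (not an IBM tool), the sketch below simply picks the location with the lowest reported carbon intensity. The location names and numbers are hypothetical stand-ins for what an observability layer might surface.

```python
# Toy placement decision: choose the location whose reported grid carbon
# intensity is lowest. All names and values below are hypothetical.
def pick_location(carbon_intensity_g_per_kwh: dict) -> str:
    """Return the location with the lowest reported carbon intensity."""
    return min(carbon_intensity_g_per_kwh, key=carbon_intensity_g_per_kwh.get)

observed = {"on-prem-dc": 420.0, "cloud-region-a": 210.0, "cloud-region-b": 95.0}
print(pick_location(observed))  # -> "cloud-region-b"
```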

In fact, one quarter (25%) of surveyed organizations are already using hybrid cloud solutions to boost their sustainability and energy efficiency. Nearly half (46%) of those report a substantial positive impact on their overall IT sustainability. Automation is also playing a key role in this shift, with 83% of leading organizations harnessing its power to dynamically adjust IT environments based on demand.

Sustainable IT strategies for a better tomorrow

The future of innovation is inextricably linked to a deep commitment to sustainability. As business leaders harness the power of technology to drive impact, responsible decision-making is crucial, particularly in the face of emerging technologies such as generative AI. To better navigate this intersection of IT and sustainability, here are a few actions to consider: 

1. Actively manage the energy consumption associated with AI: Optimize the value of generative AI while minimizing its environmental footprint by actively managing energy consumption from development to deployment. For example, choose AI models that are designed for speed and energy efficiency to process information more effectively while reducing the computational power required.

2. Identify your environmental impact drivers: Understand how different elements of your IT estate influence environmental impacts and how this can change as you scale new IT efforts.

3. Embrace sustainable-by-design principles: Embed sustainability assessments into the design and planning stages of every IT project, by using a hybrid cloud platform to centralize control and gain better visibility across your entire IT estate.

Source: ibm.com

Thursday, 12 September 2024

How fintechs are helping banks accelerate innovation while navigating global regulations

Financial institutions are partnering with technology firms—from cloud providers to fintechs—to adopt innovations that help them stay competitive, remain agile and improve the customer experience. However, the biggest hurdle to adopting new technologies is security and regulatory compliance.

While third and fourth parties have the potential to introduce risk, they can also be the solution. As enterprises undergo their modernization journeys, fintechs are redefining digital transformation in ways that have never been seen before. This includes using hybrid cloud and AI technologies to provide their clients with the capabilities they need to modernize securely and rapidly while addressing existing and emerging legislation, such as the Digital Operational Resilience Act (DORA) in the EU.

The financial services industry needs to modernize, but it is challenging to build modern digital solutions on top of existing systems. These digital transformation projects can be costly, especially if not done correctly. It is critical for banks and other financial institutions to partner with a technology provider that can automate enterprise processes and enable them to manage their complex environments while prioritizing resilience, security and compliance. As the January deadline for DORA (which is designed to strengthen the operational resilience of the financial sector) approaches, it is critical that fintechs align their practices to support resilience and business continuity.

How FlowX.AI is modernizing mission-critical workloads with AI


Both IBM® and FlowX.AI have been on a mission to enable our clients to manage mission-critical workload challenges with ease. IBM designed its enterprise cloud for regulated industries with built-in controls and confidential computing capabilities to help customers across highly regulated industries (such as financial services, healthcare, public sector, telco, insurance, life sciences and more) to maintain control of their data and use new technologies with confidence.

FlowX.AI believes that the key to managing increasingly complex environments is AI, and its solution brings multiagent AI to banking modernization. The robust, scalable platform combines deep integration capabilities and connector technology with AI-enabled application development. FlowX.AI is designed to automate enterprise processes and integrate seamlessly with existing systems, from APIs and databases to mainframes. This enables enterprises to build and deploy powerful, secure applications in a fraction of the time traditionally required. Also, as a part of the IBM Cloud for Financial Services® ecosystem, the company is using the IBM Cloud Framework for Financial Services to address risk in the digital supply chain through a common set of security controls.

“Highly regulated industries like financial services are under immense pressure to adapt quickly to shifting market dynamics and deliver exceptional customer experiences, all while navigating increasingly complex regulatory requirements. Our platform is designed to address these challenges head-on, equipping banks with the tools to deliver rapidly and efficiently. By collaborating with IBM Cloud for Financial Services, FlowX.AI aims to simplify the complexity banks face and help them unlock innovation faster, while still prioritizing security and compliance.” – Ioan Iacob, CEO, FlowX.AI

How Swisschain is ushering in the next era of blockchain technology


The financial industry is on the verge of a significant transformation with the digitization of financial assets. In financial services, blockchain technology has made it easier to securely digitize assets to trade currencies, secure loans, process payments and more. However, as banks and other financial institutions are building their blockchain integrations, it can be difficult to maintain security, resilience and compliance.

To meet the evolving needs of the financial industry, Swisschain has developed a hybrid digital asset custody and tokenization platform that can be deployed either on premises or in the cloud. The platform is designed to allow financial institutions to securely manage high-value digital assets and to offer seamless integration with both public and permissioned blockchains. Swisschain’s multilayered security architecture is built to deliver protection and governance over private keys and policies. It offers root-level control, which aims to eliminate single points of failure. This can be a critical feature for institutions managing high-value assets.

By using IBM Cloud Hyper Protect Services, Swisschain can tap into IBM’s “keep your own key” encryption capabilities, designed to allow clients exclusive key control over their assets and to help address privacy needs. Swisschain’s solution is designed to offer greater levels of scalability, agility and cost-effectiveness, to help financial institutions navigate the complex digital asset landscape with confidence and efficiency. In collaboration with IBM, Swisschain aims to set a new standard for innovation in the digital asset ecosystem.

“Tokenizing financial assets through blockchain technology is rapidly accelerating the digitization of the financial industry, fundamentally reshaping how we trade and manage assets. By converting traditional asset ownership into digital tokens, we enhance transparency, security and liquidity, making it easier for financial institutions to navigate this new landscape. Our goal is to provide the essential infrastructure that bridges traditional finance with digital assets. Collaborating with IBM Cloud for Financial Services has been a game-changer in our mission to lead the next era of blockchain and digital asset technology.” – Simon Olsen, CEO, Swisschain

Innovating at the pace of change


Modernization efforts vary across the financial services industry, but one thing is for certain: banks need to innovate at the pace of change or risk getting left behind. Having an ecosystem that incorporates fintech, cloud and AI technology will enable large financial institutions to remain resilient, secure and compliant as they serve their customers.

With IBM Cloud for Financial Services, IBM is positioned to help fintechs ensure that their products and services are compliant and adhere to the same stringent regulations that banks must meet. With security and controls built into the cloud platform and designed by the industry, we aim to help fintechs and larger financial institutions minimize risk, stay on top of evolving regulations and accelerate cloud and AI adoption.

Source: ibm.com

Friday, 6 September 2024

Primary storage vs. secondary storage: What’s the difference?

What is primary storage?


Computer memory is prioritized according to how often that memory is required for use in carrying out operating functions. Primary storage is the means of containing primary memory (or main memory), which is the computer’s working memory and major operational component. The main or primary memory is also called “main storage” or “internal memory.” It holds relatively small amounts of data, which the computer can access as it functions.

Because primary memory is so frequently accessed, it’s designed to achieve faster processing speeds than secondary storage systems. Primary storage achieves this performance boost through its physical location on the computer motherboard and its proximity to the central processing unit (CPU).

Because primary storage is closer to the CPU, it’s faster to both read from and write to, and the programs, data and instructions in current use can be accessed quickly.
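A quick way to feel the difference is to time a read from an in-memory buffer against a read of the same bytes from a file on disk. The sketch below is only a rough illustration; absolute numbers vary by machine, and the operating system's page cache can narrow the gap.

```python
# Rough illustration: read the same bytes from RAM (primary storage) and from a
# file on disk (secondary storage). Only the ordering of the timings is the point.
import os
import tempfile
import time

data = os.urandom(50 * 1024 * 1024)  # 50 MB of test data held in RAM

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

start = time.perf_counter()
in_memory_copy = bytes(data)         # read straight from RAM
ram_seconds = time.perf_counter() - start

start = time.perf_counter()
with open(path, "rb") as f:
    from_disk = f.read()             # read back from disk
disk_seconds = time.perf_counter() - start

print(f"RAM read:  {ram_seconds:.4f} s")
print(f"Disk read: {disk_seconds:.4f} s")
os.remove(path)
```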

What is secondary storage?


External memory is also known as secondary memory and involves secondary storage devices that can store data in a persistent and ongoing manner. Because they retain data even when the power supply is interrupted, secondary storage devices are said to provide non-volatile storage.

These data storage devices can safeguard long-term data and establish operational permanence and a lasting record of existing procedures for archiving purposes. This makes them the perfect hosts for housing data backups, supporting disaster recovery efforts and maintaining the long-term storage and data protection of essential files.

How computer memory mimics human memory


To further understand the differences between primary storage and secondary storage, consider how human beings think. Each day, people are mentally bombarded by a startling amount of incoming data.

  • Personal contacts: The average American makes or receives 6 phone calls per day, as well as sends or receives approximately 32 texts.
  • Work data: In addition, most people are also engaged in work activities that involve incoming organizational data via any number of business directives or communiques.
  • Advertising: It’s been estimated that the average person is exposed to as many as 10,000 advertisements or sponsored messages per day. Subtracting 8 hours for an average night’s sleep, that equates to a person being exposed to an advertising message every 5.76 seconds that they’re awake.
  • News: The advertising figure does not include media-conveyed news information, which we’re receiving in an increasing number of formats. In many current television news programs, a single screen is being used to simultaneously transmit several types of information. For example, a news program might feature a video interview with a newsmaker while a scroll at the bottom of the screen announces breaking news headlines and a sidebar showcases latest stock market updates.
  • Social media: Nor does that figure account for the growing and pervasive influence of social media. Through social media websites, messaging boards and online communities, people are absorbing even more data.

Clearly, this is a lot of incoming information to absorb and process. From the moment we awake until we return to sleep, our minds scan through all this possible data, making a near-endless series of minute judgments about what information to retain and what to ignore. In most instances, that decision comes down to utility. If the mind perceives that this information will need to be recalled again, that data is awarded a higher order of mental priority.

These prioritization decisions happen with such rapid frequency that our minds are trained to input this data without truly realizing it, leaving it to the human mind to sort out how primary and secondary memory is allocated. Fortunately, the human mind is quite adept at managing such multitasking, as are modern computers.

An apt analogy exists between how the human mind works and how computer memory is managed. In the mind, a person’s short-term memory is more dedicated to the most pressing and “current” cognitive needs. This might include data such as an access code used for personal banking, the scheduled time of an important medical appointment or the contact information of current business clients. In other words, it’s information of the highest anticipated priority. Similarly, primary storage is concerned with the computer’s most pressing processing needs.

Secondary data storage, on the other hand, offers long-term storage, like a person’s long-term memory. Secondary storage tends to operate with less frequency and can require more computer processing power to retrieve long-stored data. In this way, it mirrors the same retention and processing as long-term memory. Examples of long-term memory for a human could include a driver’s license number, long-retained facts or a spouse’s phone number.

Memory used in primary storage


Numerous forms of primary storage memory dominate any discussion of computer science:

  • Random Access Memory (RAM): The most vitally important type of memory, RAM handles and houses numerous key processes, including system apps and processes the computer is currently managing. RAM also serves as a kind of launchpad for files or apps.
  • Read-Only Memory (ROM): ROM allows viewing of contents but does not allow viewers to make changes to collected data. ROM is non-volatile storage because its data remains even when the computer is turned off.
  • Cache memory: Another key form of data storage that stores data that is often retrieved and used. Cache memory contains less storage capacity than RAM but is faster than RAM.
  • Registers: The fastest data access times of all are posted by registers, which exist within CPUs and store data to achieve the goal of immediate processing.
  • Flash memory: Flash memory offers non-volatile storage that allows data to be written and saved (as well as be re-written and re-saved). Flash memory also enables speedy access times. Flash memory is used in smartphones, digital cameras, universal serial bus (USB) memory sticks and flash drives.
  • Cloud storage: Cloud storage might operate as primary storage, in certain circumstances. For example, an organization hosting apps in the cloud rather than in its own data center might use a cloud storage service as the primary storage for those apps.
  • Dynamic Random-Access Memory (DRAM): A type of RAM-based semiconductor memory, DRAM features a design that relegates each data bit to a memory cell that houses a tiny capacitor and transistor. DRAM is volatile memory: the capacitors leak charge and must be periodically recharged by a memory refresh circuit, and the data is lost when power is removed. DRAM is most often used in creating a computer’s main memory.
  • Static Random-Access Memory (SRAM): Another type of RAM-based semiconductor memory, SRAM’s architecture is based around latching, flip-flop circuitry for data storage. SRAM is volatile storage that loses its data when power is removed from the system. However, while it is operational, it provides faster processing than DRAM, which often drives SRAM’s price upward. SRAM is typically used within cache memory and registers.

Memory used in secondary storage


There are three forms of memory commonly used in secondary storage:

  • Magnetic storage: Magnetic storage devices access data that’s written onto a spinning metal disk that contains magnetic fields.
  • Optical storage: If a storage device uses a laser to read data off a metal or plastic disk that contains grooves (much like an audio LP), that’s considered optical storage.
  • Solid state storage: Solid state storage (SSS) devices are powered by electronic circuits. Flash memory is commonly used in SSS devices, although some use random-access memory (RAM) with battery backup. SSS offers high-speed data transfer and high performance, although its financial costs when compared to magnetic storage and optical storage can prove prohibitive.

Types of primary storage devices


Storage resources are designated as primary storage according to their perceived utility and how that resource is used. Some observers incorrectly assume that primary storage depends upon the storage space of a particular storage medium, the amount of data contained within its storage capacity or its specific storage architecture. It’s actually not about how a storage medium might store information. It’s about the anticipated utility of that storage medium.

Through this utility-based focus, it’s possible for primary storage devices to take multiple forms:

  • Hard disk drives (HDDs)
  • Flash-based solid-state drives (SSDs)
  • Shared storage area network (SAN)
  • Network attached storage (NAS)

Types of secondary storage devices


While some forms of secondary memory are internally based, there are also secondary storage devices that are external in nature. External storage devices (also called auxiliary storage devices) can be easily unplugged from one system and used with another, and they offer non-volatile storage:

  • HDDs
  • Floppy disks
  • Magnetic tape drives
  • Portable hard drives
  • Flash-based solid-state drives
  • Memory cards
  • Flash drives
  • USB drives
  • DVDs
  • CD-ROMs
  • Blu-ray Discs
  • CDs 
Source: ibm.com

Tuesday, 20 August 2024

The power of embracing distributed hybrid infrastructure

Data is the greatest asset to help organizations improve decision-making, fuel growth and boost competitiveness in the marketplace. But today’s organizations face the challenge of managing vast amounts of data across multiple environments.

This is why the uniqueness of your IT processes, workloads and applications demands a workload placement strategy based on key factors such as the type of data, the compute capacity and performance needed, and your regulatory, security and compliance requirements.

While hybrid cloud has become the dominant IT architecture, we believe that adopting an intentional hybrid-by-design approach is pivotal for enterprises to use their data, irrespective of where it resides, to further drive business value and outcomes with the combined power of hybrid cloud and AI. A distributed hybrid infrastructure provides the flexibility and agility to deploy and operate workloads and applications wherever needed. This allows for reliable and secured cloud-connected experiences that pave the way for speedy innovation with IT environments designed to be both open and continuous.

Harnessing IBM Power as-a-service in distributed infrastructure


Clients that are furthest along in their hybrid cloud journey have well-thought-out, hybrid-by-design strategies. Not only are they making intentional workload placement decisions, but they are also designing an infrastructure with interoperability and security at the forefront. We are helping our clients modernize workloads and infrastructure with a hybrid cloud experience.

IBM® Power® Virtual Server, for example, can help clients expand their on-premises servers to modern-day hybrid-cloud infrastructures. Within a distributed hybrid environment, IBM Power Virtual Server is designed to help clients quickly adopt and expand their on-premises infrastructures both efficiently and economically at any moment to remain competitive in the marketplace. Its validation under the IBM Cloud Framework for Financial Services® also ensures compliance with stringent industry standards, making it particularly valuable for regulated sectors. A great example is our client Safeguards CS Sdn Bhd (SCS), a cash solution services provider in Malaysia. By optimizing costs and maintaining robust security, the approach is designed to support billions of daily banking transactions across Southeast Asia, highlighting the platform’s critical role in expanding financial services to underserved populations.

Further, to provide clients with an additional choice of where to use IBM Power, IBM recently released IBM Power Virtual Server Private Cloud, which combines configurable compute, storage and network infrastructure within your data center, owned and managed by IBM on IBM Cloud®. This setup provides enterprises the consumption and management capabilities of the cloud while the data remains on premises, helping clients address the business’s regional compliance and governance requirements.

A path forward: Power through an XaaS lens


The future of cloud computing lies in adopting distributed hybrid infrastructure, bolstered by the XaaS model, which promotes agility, reliability and security. This approach is designed so that businesses can modernize applications, enhance data management and optimize IT operations, paving the way for a more resilient and cost-effective IT landscape. IBM Power Virtual Server stands at the forefront of this transformation, offering innovative XaaS solutions to meet the diverse needs of modern enterprises.

Source: ibm.com

Wednesday, 14 August 2024

Seamless cloud migration and modernization: overcoming common challenges with generative AI assets and innovative commercial models

As organizations continue to adopt cloud-based services, it’s more pressing than ever to migrate and modernize infrastructure, applications and data to the cloud to stay competitive. Traditional migration and modernization approaches often involve manual processes, leading to increased costs, delayed time-to-value and increased risk.

Cloud migration and modernization can be complex and time-consuming processes that come with unique challenges; gen AI assets and assistants and innovative commercial models offer many benefits in meeting them. The Cloud Migration and Modernization Factory from IBM Consulting® can also help organizations overcome common migration and modernization challenges and achieve a faster, more efficient and more cost-effective migration and modernization experience.

Leveraging the same technologies that are driving market change, IBM Consulting can deliver value at the speed that tomorrow’s enterprises need today. This transformation starts with a new relationship between consultants and code—one that can help deliver solutions and value more quickly, repeatably and cost efficiently. 

The power of gen AI assets and assistants


Gen AI assets and assistants are revolutionizing the cloud migration and modernization landscape, offering a more efficient, automated and cost-effective way to overcome common migration challenges. These tools leverage machine learning and artificial intelligence to automate manual processes, reducing the need for human intervention and minimizing the risk of errors and rework.

IBM Consulting Assistants are a library of role-based AI assistants that are trained on IBM proprietary data to support key consulting project roles and tasks. Accessed through a conversation-based interface, we’ve democratized the way consultants use assistants, creating an experience where our people can find, create and continually refine assistants to meet the needs of our clients faster.

IBM Consulting Assistants allow our consultants to select from the models that best solve your business challenge. Those models are packaged with pre-engineered prompts and output formats so our people can get tailored outputs to their queries, such as creating a detailed user persona or code for a specific language and function. The result is that you get more valuable work, faster.

Innovative commercial models for migration and modernization


Our innovative commercial models, such as our cloud migration services, offer a flexible and cost-effective way to migrate and modernize applications and data to the cloud. Our pricing models are designed to help organizations reduce costs and increase ROI, while also promoting a smooth and successful migration experience.

Cloud Migration and Modernization Factory from IBM Consulting


As a leading provider of hybrid cloud transformation services, IBM has extensive expertise in helping organizations overcome common migration and modernization challenges. Our experts have developed gen AI tools and innovative commercial models to ensure successful cloud migration and modernization.

The Cloud Migration and Modernization Factory from IBM Consulting enables clients to realize business value faster by leveraging pre-built migration patterns and automated migration approaches. This means that organizations can achieve faster deployment and ramp-up, getting to market faster and realizing business benefits sooner.

With Cloud Migration and Modernization from IBM Consulting, clients can achieve:

  • Faster business value realization: The Cloud Migration and Modernization Factory from IBM Consulting accelerates business value realization by leveraging pre-built migration patterns and automated approaches. This enables organizations to deploy and ramp-up faster, getting to market sooner and realizing benefits earlier.   
  • Scaled automation: The Cloud Migration and Modernization Factory from IBM Consulting leverages cloud-based metrics and KPIs to enable scaled automation, ensuring consistent quality and outcomes across multiple migrations. Automated approaches reduce the risk of human error, manual testing and validation, which result in improved efficiency, quality and ROI.
  • Improved efficiency and quality of outcomes: By leveraging our gen AI assets, clients can automate the migration and modernization process, reducing manual effort and minimizing errors. The IBM Consulting Cloud Migration and Modernization Factory offers a library of pre-built migration patterns, allowing clients to choose the right approach for their specific needs and use cases.
  • Cost savings: The Cloud Migration and Modernization Factory from IBM Consulting reduces the total cost of ownership and increases ROI by leveraging pre-built migration patterns and automated approaches, minimizing manual effort and errors.

Overcome common migration challenges


Cloud migration and modernization can be a complex process, but with the power of gen AI assets and assistants and innovative commercial models, organizations can overcome common migration challenges and achieve a faster, more efficient and more cost-effective migration experience. By automating manual processes, reducing the need for human intervention and minimizing the risk of errors and rework, gen AI tools can help organizations achieve significant cost savings and increased ROI.

Source: ibm.com

Friday, 2 August 2024

Harnessing XaaS to reduce costs, risks and complexity

To drive fast-paced innovation, enterprises are demanding models that focus on business outcomes, as opposed to only measuring IT results. At the same time, these enterprises are under increasing pressure to redesign their IT estates in order to lower cost and risk and reduce complexity.

To meet these challenges, Everything as a Service (XaaS) is emerging as a solution that can simplify operations, reduce risk and accelerate digital transformation. According to an IDC white paper sponsored by IBM®, by 2028, 80% of IT buyers will prioritize XaaS consumption for key workloads that require flexibility to help optimize IT spending, augment IT Ops skills and attain key sustainability metrics.

Moving forward, we see three pivotal insights that will continue to shape the future direction of businesses in the coming years.

Simplify IT to accelerate business outcomes and focus on ROI


The need to overhaul legacy IT infrastructures is a significant pressure point for enterprises. The applications that we are writing today will be the applications that we need to modernize tomorrow.

With XaaS offerings, enterprises are able to integrate business-critical applications, particularly AI applications and workloads, into a modernized hybrid environment.

CrushBank, for example, worked with IBM to transform its IT support, streamlining help desk operations and arming staff with improved information. This created a 45% reduction in resolution time and notably enhanced customer experiences. CrushBank has reported that with the power of watsonx™ on IBM Cloud®, customers have shared feedback of higher satisfaction and efficiency, allowing the organization to spend time with the people that matter the most: their clients.

Reimagine business models to foster rapid innovation


AI is fundamentally altering how business is done. Traditional business models, often constrained by their complexity and cost-intensive nature, are proving inadequate for the agility required in an AI-driven marketplace. According to recent IDC research, sponsored by IBM, 78% of IT organizations view XaaS as a critical component of their future strategies.

To meet this demand for rapid innovation and address the accompanying risks and costs, businesses see the benefits of turning to XaaS. Rather than merely providing tools, this model focuses on delivering outcomes for greater operational efficiency and effectiveness. The model allows XaaS vendors to focus on secure, resilient and scalable services, enabling IT organizations to invest their precious resources in their client requirements.

Anticipate tomorrow by preparing today


The shift toward an XaaS model is not just about optimizing IT spending; it is also about augmenting IT operations skills and achieving business goals faster and in a more agile manner.

At Think, CrushBank’s CTO David Tan highlighted how they enabled clients to innovate and effectively leverage data seamlessly where it resides, allowing them to craft a holistic strategy to meet the unique business needs for each of their customers. Enabling a simpler, faster and more economical path to leverage AI, while also reducing the risk and burden of managing complex IT architectures, remains paramount for companies operating in today’s data-driven environment.

The momentum toward XaaS stands out as a strategic solution that offers a multitude of benefits. From helping to reduce operational risks and costs to enabling rapid adoption of emerging technologies like AI, XaaS should be the cornerstone of every IT strategy.

IBM’s current as-a-service initiative can help enterprises achieve those benefits today. The combined capabilities across IBM software and infrastructure help clients drive outcomes, while helping to ensure that mission-critical workloads stay secured and compliant.

For example, IBM Power Virtual Server is designed to assist leading enterprises across the globe to successfully expand their on-premises servers to hybrid cloud infrastructures, granting leaders more insight into their businesses. Also, the IBM team is working collaboratively with our customers to modernize with AI, with offerings like watsonx Code Assistant™ for Java code or enterprise applications.

Enterprises are under increasing pressure to redesign their legacy IT estates—to lower cost and risk and reduce complexity. XaaS is emerging as the solution that can address these challenges head on by simplifying operations, enhancing resilience and accelerating digital transformation. IBM aims to meet our clients where they are on their transformation journey.

Source: ibm.com

Friday, 5 July 2024

Experience unmatched data resilience with IBM Storage Defender and IBM Storage FlashSystem

IBM Storage Defender is a purpose-built end-to-end data resilience solution designed to help businesses rapidly restart essential operations in the event of a cyberattack or other unforeseen events. It simplifies and orchestrates business recovery processes by providing a comprehensive view of data resilience and recoverability across primary and auxiliary storage in a single interface.

IBM Storage Defender deploys AI-powered sensors to quickly detect threats and anomalies. Signals from all available sensors are aggregated by IBM Storage Defender, whether they come from hardware (IBM FlashSystem FlashCore Modules) or software (file system or backup-based detection).

IBM Storage FlashSystem with FlashCore Module 4 (FCM4) can identify threats in real time through capability built into the hardware that collects and analyzes statistics for every single read and write operation without any performance impact. IBM Storage Defender and IBM Storage FlashSystem can seamlessly work together to produce a multilayered strategy that can drastically reduce the time needed to detect a ransomware attack.

As shown in the following diagram, the FlashCore Module reports potential threat activity to IBM Storage Insights Pro, which analyzes the data and alerts IBM Storage Defender about suspicious behaviors coming from the managed IBM Storage FlashSystem arrays.  With the information received, IBM Storage Defender proactively opens a case.  All open cases are presented in a comprehensive “Open case” screen, which provides detailed information about the type of anomaly, time and date of the event, affected virtual machines and impacted storage resources. To streamline data recovery, IBM Storage Defender provides recommended actions and built-in automation to further accelerate the return of vital operations to their normal state.

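To make the idea of per-I/O statistics concrete, here is a conceptual sketch (not IBM's detection algorithm): encrypted or ransomware-written blocks tend toward maximum entropy, so a sudden jump in the entropy of written data is one signal an inline sensor could raise. The threshold value is invented for illustration.

```python
# Conceptual illustration only (not IBM's detection algorithm): flag writes whose
# entropy looks like encrypted/random data. The threshold is an invented example.
import math
import os
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Entropy in bits per byte (0.0 to 8.0) of a block of data."""
    counts = Counter(block)
    total = len(block)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

ALERT_THRESHOLD = 7.5  # close to the 8.0 maximum produced by encrypted/random data

def looks_suspicious(written_block: bytes) -> bool:
    return shannon_entropy(written_block) > ALERT_THRESHOLD

print(looks_suspicious(b"plain text " * 400))  # False: ordinary, compressible data
print(looks_suspicious(os.urandom(4096)))      # True: high-entropy, encrypted-like data
```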

IBM Storage FlashSystem also offers protection through immutable copies of data known as Safeguarded Copies, which are isolated from production environments and cannot be modified or deleted. IBM Storage Defender can recover workloads directly from the most recent trusted Safeguarded Copy to significantly reduce the time needed to resume critical business operations, as data transfer is performed through the SAN (FC or iSCSI) rather than over the network.  In addition, workloads can be restored in an isolated “Clean Room” environment to be analyzed and validated before being recovered to production systems. This verification allows you to know with certainty that the data is clean and business operations can be safely reestablished. This is shown in the following diagram.


When a potential threat is detected, IBM Storage Defender correlates the specific volume in the IBM Storage FlashSystem associated with the virtual machine under attack and proactively takes a Safeguarded Copy to create a protected backup of the affected volume for offline investigation and follow-up recovery operations. When time is crucial, this rapid, automatic action can significantly reduce the time between receiving the alert, containing the attack and subsequent recovery. This proactive action is shown in the following diagram.


Ensuring business continuity is essential to building operational resilience and trust. IBM Storage Defender and IBM Storage FlashSystem can be seamlessly integrated to achieve this goal by combining advanced capabilities that complement each other to build a robust data resilience strategy across primary and auxiliary storage. By working together, IBM Storage Defender and IBM Storage FlashSystem effectively combat cyberattacks and other unforeseen threats.

Source: ibm.com

Thursday, 20 June 2024

The recipe for RAG: How cloud services enable generative AI outcomes across industries

According to research from IBM, about 42 percent of enterprises surveyed have AI in use in their businesses. Of all the use cases, many of us are now extremely familiar with natural language processing AI chatbots that can answer our questions and assist with tasks such as composing emails or essays. Yet even with widespread adoption of these chatbots, enterprises are still occasionally experiencing some challenges. For example, these chatbots can produce inconsistent results as they’re pulling from large data stores that might not be relevant to the query at hand.

Thankfully, retrieval-augmented generation (RAG) has emerged as a promising solution to ground large language models (LLMs) on the most accurate, up-to-date information. As an AI framework, RAG works to improve the quality of LLM-generated responses by grounding the model on sources of knowledge to supplement the LLM’s internal representation of information. IBM unveiled its new AI and data platform, watsonx, which offers RAG, back in May 2023.
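For readers who want to see the pattern in miniature, the sketch below retrieves the documents most similar to a question and prepends them to the prompt. It is deliberately toy-sized: a real deployment would use an embedding model and a vector store instead of the bag-of-words similarity here, and the generate() stub stands in for a call to an actual LLM, such as one hosted on watsonx.

```python
# Toy RAG loop: retrieve similar documents, then prompt a (stubbed) LLM with them.
import math
from collections import Counter

DOCUMENTS = [
    "Refunds are processed within 14 days of the return being received.",
    "Premium support is available 24/7 for enterprise customers.",
    "Data is encrypted at rest and in transit.",
]

def similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list:
    return sorted(DOCUMENTS, key=lambda doc: similarity(question, doc), reverse=True)[:k]

def generate(prompt: str) -> str:
    return f"[LLM response grounded on a prompt of {len(prompt)} characters]"  # stub

question = "How long do refunds take?"
context = "\n".join(retrieve(question))
print(generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```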

In simple terms, leveraging RAG is like having the model take an open-book exam: the chatbot responds to a question with all the relevant information readily available to it. But how does RAG operate at an infrastructure level? With a mixture of platform-as-a-service (PaaS) services, RAG can run successfully and with ease, enabling generative AI outcomes for organizations across industries using LLMs.

How PaaS services are critical to RAG


Enterprise-grade AI, including generative AI, requires a highly sustainable, compute- and data-intensive distributed infrastructure. While the AI is the key component of the RAG framework, other “ingredients” such as PaaS solutions are integral to the mix. These offerings, specifically serverless and storage offerings, operate diligently behind the scenes, enabling data to be processed and stored more easily, which provides increasingly accurate outputs from chatbots.

Serverless technology supports compute-intensive workloads, such as those brought forth by RAG, by managing and securing the infrastructure around them. This gives time back to developers, so they can concentrate on coding. Serverless enables developers to build and run application code without provisioning or managing servers or backend infrastructure.

If a developer is uploading data into an LLM or chatbot but is unsure of how to preprocess the data so it’s in the right format or filtered for specific data points, IBM Cloud Code Engine can do all this for them—easing the overall process of getting correct outputs from AI models. As a fully managed serverless platform, IBM Cloud Code Engine can scale the application with ease through automation capabilities that manage and secure the underlying infrastructure.
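The preprocessing described above might look like the sketch below: normalize and chunk source documents into overlapping pieces suitable for indexing. It is written as a plain function of the kind that could run in a containerized serverless job; the chunk sizes and the length filter are arbitrary example values.

```python
# Example preprocessing step: normalize whitespace and split a document into
# overlapping chunks for indexing. Chunk sizes and the filter are arbitrary.
def preprocess(raw_text: str, chunk_size: int = 500, overlap: int = 50) -> list:
    """Normalize whitespace and split text into overlapping chunks."""
    text = " ".join(raw_text.split())            # collapse runs of whitespace
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if len(chunk) > 100:                     # drop fragments too short to be useful
            chunks.append(chunk)
    return chunks

print(len(preprocess("example document text " * 200)))
```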

Additionally, if a developer is uploading the sources for LLMs, it’s important to have highly secure, resilient and durable storage. This is especially critical in highly regulated industries such as financial services, healthcare and telecommunications.

IBM Cloud Object Storage, for example, provides security and data durability to store large volumes of data. With immutable data retention and audit control capabilities, IBM Cloud Object Storage supports RAG by helping to safeguard your data from tampering or manipulation by ransomware attacks and helps ensure it meets compliance and business requirements.
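For illustration, uploading a source document to a bucket with the IBM Cloud Object Storage Python SDK (the ibm-cos-sdk package, imported as ibm_boto3) typically follows the pattern below. The bucket name, object key, endpoint and credentials are placeholders; verify the exact parameters your account requires against the SDK documentation.

```python
# Placeholder values throughout: replace the API key, service instance CRN,
# endpoint and bucket name with your own. Parameter names follow the
# ibm-cos-sdk documentation; confirm against the current docs.
import ibm_boto3
from ibm_botocore.client import Config

cos = ibm_boto3.resource(
    "s3",
    ibm_api_key_id="<IBM_CLOUD_API_KEY>",
    ibm_service_instance_id="<COS_SERVICE_INSTANCE_CRN>",
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
)

# Upload a source document that a RAG pipeline can later retrieve and index.
cos.Object("<bucket-name>", "sources/contract-001.pdf").upload_file("contract-001.pdf")
```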

With IBM’s vast technology stack including IBM Code Engine and Cloud Object Storage, organizations across industries can seamlessly tap into RAG and focus on leveraging AI more effectively for their businesses.

The power of cloud and AI in practice


We’ve established that RAG is extremely valuable for enabling generative AI outcomes, but what does this look like in practice?

Blendow Group, a leading provider of legal services in Sweden, handles a diverse array of legal documents—dissecting, summarizing and evaluating documents that range from court rulings to legislation and case law. With a relatively small team, Blendow Group needed a scalable solution to aid its legal analysis. Working with IBM Client Engineering and NEXER, Blendow Group created an innovative AI-driven tool, leveraging the comprehensive capabilities of IBM watsonx to enhance research and analysis and streamline the process of creating legal content, all while maintaining the utmost confidentiality of sensitive data.

Utilizing IBM’s technology stack, including IBM Cloud Object Storage and IBM Code Engine, the AI solution was tailored to increase the efficiency and breadth of Blendow’s legal document analysis.

The Mawson’s Huts Foundation is also an excellent example of leveraging RAG to enable greater AI outcomes. The foundation is on a mission to preserve the Mawson legacy, which includes Australia’s 42 percent territorial claim to the Antarctic, and to educate schoolchildren and others about Antarctica itself and the importance of sustaining its pristine environment.

With The Antarctic Explorer, an AI-powered learning platform running on IBM Cloud, Mawson is bringing children and others access to Antarctica from a browser wherever they are. Users can submit questions via a browser-based interface and the learning platform uses AI-powered natural language processing capabilities provided by IBM watsonx Assistant to interpret the questions and deliver appropriate answers with associated media—videos, images and documents—that are stored in and retrieved from IBM Cloud Object Storage.

By leveraging infrastructure-as-a-service offerings in tandem with watsonx, both the Mawson’s Huts Foundation and Blendow Group are able to gain greater insights from their AI models by easing the process of managing and storing the data those models rely on.

Enabling generative AI outcomes with the cloud


Generative AI and LLMs have already proven to have great potential for transforming organizations across industries. Whether it’s educating the wider population or analyzing legal documents, PaaS solutions within the cloud are critical for the success of RAG and running AI models.

At IBM, we believe that AI workloads will likely form the backbone of mission-critical workloads and ultimately house and manage the most-trusted data, so the infrastructure around it must be trustworthy and resilient by design. With IBM Cloud, enterprises across industries using AI can tap into higher levels of resiliency, performance, security, compliance and total cost of ownership.

Source: ibm.com

Saturday, 15 June 2024

Types of central processing units (CPUs)

Types of central processing units (CPUs)

What is a CPU?


The central processing unit (CPU) is the computer’s brain. It handles the assignment and processing of tasks and manages operational functions that all types of computers use.

CPU types are designated according to the kind of chip that they use for processing data. There’s a wide variety of processors and microprocessors available, with new powerhouse processors always in development. The processing power CPUs provide enables computers to engage in multitasking activities. Before discussing the types of CPUs available, we should clarify some basic terms that are essential to our understanding of CPU types.

Key CPU terms


There are numerous components within a CPU, but the following are especially critical to CPU operation and to understanding how CPUs work:

  • Cache: When it comes to information retrieval, memory caches are indispensable. Caches are storage areas whose location allows users to quickly access data that’s been in recent use. Caches store data in areas of memory built into a CPU’s processor chip to reach data retrieval speeds even faster than random access memory (RAM) can achieve. Caches can be created through software development or hardware components.
  • Clock speed: All computers are equipped with an internal clock, which regulates the speed and frequency of computer operations. The clock manages the CPU’s circuitry through the transmittal of electrical pulses. The delivery rate of those pulses is termed clock speed, measured in hertz (Hz); modern CPU clock speeds are typically quoted in gigahertz (GHz). Traditionally, one way to increase processing speed has been to set the clock to run faster than normal.
  • Core: Cores act as the processor within the processor. Cores are processing units that read and carry out various program instructions. Processors are classified according to how many cores are embedded into them. CPUs with multiple cores can process instructions considerably faster than single-core processors. (Note: The term “Intel® Core™” is used commercially to market Intel’s product line of multi-core CPUs.)
  • Threads: Threads are the shortest sequences of programmable instructions that an operating system’s scheduler can independently manage and send to the CPU for processing. Through multithreading, the use of multiple threads running simultaneously, a computer can make progress on several parts of a process concurrently. Hyper-threading refers to Intel’s proprietary form of multithreading for the parallelization of computations. (A short illustration of cores and threads follows this list.)
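
To see cores and threads in action, the sketch below (Python, standard library only) asks the operating system how many logical processors it reports and then runs several tasks on a small thread pool. The tasks simply sleep, to mimic I/O-bound work, because Python’s global interpreter lock limits how much pure computation benefits from threads.

```python
# Illustrative sketch: how many logical processors the OS reports,
# and a simple thread pool running several tasks concurrently.
import os
import time
from concurrent.futures import ThreadPoolExecutor

def task(n: int) -> str:
    time.sleep(0.5)  # simulate I/O-bound work; threads excel at waiting concurrently
    return f"task {n} done"

if __name__ == "__main__":
    print(f"Logical processors reported by the OS: {os.cpu_count()}")

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as pool:
        for result in pool.map(task, range(8)):
            print(result)
    elapsed = time.perf_counter() - start

    # With 4 worker threads and 8 half-second tasks, total time is roughly
    # one second rather than four, because the waits overlap.
    print(f"Elapsed: {elapsed:.2f} s")
```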

Other components of the CPU


In addition to the above components, modern CPUs typically contain the following:

  • Arithmetic logic unit (ALU): Carries out all arithmetic and logical operations, from math equations to logic-based comparisons, each tied to specific computer actions.
  • Buses: Ensure proper data transfer and data flow between the components of a computer system.
  • Control unit: Contains the circuitry that directs the computer system by issuing timed electrical pulses and instructing it to carry out instructions.
  • Instruction register and pointer: Hold the location of the next instruction to be executed by the CPU.
  • Memory unit: Manages memory usage and the flow of data between RAM and the CPU; it also supervises the handling of cache memory.
  • Registers: Provide small amounts of built-in, high-speed storage for data that the CPU must handle immediately and repeatedly.

How do CPUs work?


CPUs use a type of repeated command cycle that’s administered by the control unit in association with the computer clock, which provides synchronization assistance.

The work a CPU does occurs according to an established cycle (called the CPU instruction cycle). Each pass through the cycle fetches, decodes and executes one instruction, and the computer’s processing power determines how many of these cycles it can complete each second. (A toy simulation of the loop follows the list below.)

The three basic computing instructions are as follows:

  • Fetch: Fetches occur anytime data is retrieved from memory.
  • Decode: The decoder within the CPU translates binary instructions into electrical signals, which engage with other parts of the CPU.
  • Execute: Execution occurs when computers interpret and carry out a computer program’s set of instructions.
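
The following toy simulation shows the shape of that fetch-decode-execute loop. The tiny instruction set it implements (LOAD, ADD, PRINT and HALT) is invented purely for illustration and does not correspond to any real processor’s instruction set.

```python
# Toy fetch-decode-execute loop for an invented four-instruction machine.
# Real CPUs do this in hardware, billions of times per second.

PROGRAM = [
    ("LOAD", "A", 2),    # put the value 2 into register A
    ("LOAD", "B", 3),    # put the value 3 into register B
    ("ADD", "A", "B"),   # A = A + B
    ("PRINT", "A", None),
    ("HALT", None, None),
]

def run(program):
    registers = {"A": 0, "B": 0}
    pc = 0  # program counter: index of the next instruction to fetch

    while True:
        # Fetch: retrieve the next instruction from (simulated) memory.
        opcode, arg1, arg2 = program[pc]
        pc += 1

        # Decode and execute: act on the instruction.
        if opcode == "LOAD":
            registers[arg1] = arg2
        elif opcode == "ADD":
            registers[arg1] = registers[arg1] + registers[arg2]
        elif opcode == "PRINT":
            print(f"{arg1} = {registers[arg1]}")
        elif opcode == "HALT":
            break

if __name__ == "__main__":
    run(PROGRAM)  # prints "A = 5"
```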

In pursuit of faster processing, some computer owners skip the usual route to higher performance, which normally involves adding more cores or memory, and instead adjust the computer clock so it runs faster than its rated speed. This “overclocking” process is analogous to “jailbreaking” smartphones to alter their performance. Unfortunately, like jailbreaking a smartphone, such tinkering is potentially harmful to the device and is roundly discouraged by computer manufacturers.

Types of central processing units


CPUs are classified by the processor or microprocessor driving them (a short script for checking your own machine’s core count follows this list):

  • Single-core processor: A single-core processor is a microprocessor with one CPU on its die (the silicon-based material to which chips and microchips are attached). Single-core processors typically run slower than multi-core processors, operate on a single thread and perform the instruction cycle sequence only once at a time. They are best suited to general-purpose computing.
  • Multi-core processor: A multi-core processor is split into two or more sections of activity, with each core carrying out instructions as if they were completely distinct computers, although the sections are technically located together on a single chip. For many computer programs, a multi-core processor provides superior, high-performance output.
  • Embedded processor: An embedded processor is a microprocessor expressly engineered for use in embedded systems. Embedded systems are small, designed to consume little power and built into the device they control, giving the processor immediate access to the data it needs. Embedded processors include microprocessors and microcontrollers.
  • Dual-core processor: A dual-core processor is a multi-core processor containing two microprocessors that act independently from each other.
  • Quad-core processor: A quad-core processor is a multi-core processor that has four microprocessors functioning independently.
  • Octa-core: An octa-core processor is a multi-core processor that has eight microprocessors functioning independently.
  • Deca-core processor: A deca-core processor is an integrated circuit that has 10 cores on one die or per package.
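
To see where a given machine falls on this spectrum, the short script below reports the logical processor count available through Python’s standard library and, if the third-party psutil package happens to be installed (an optional assumption, not a requirement), the physical core count as well.

```python
# Report logical processors (standard library) and, if available,
# physical cores via the optional third-party psutil package.
import os
import platform

print(f"Processor:          {platform.processor() or 'unknown'}")
print(f"Logical processors: {os.cpu_count()}")

try:
    import psutil  # optional third-party dependency
    print(f"Physical cores:     {psutil.cpu_count(logical=False)}")
except ImportError:
    print("Physical cores:     install psutil to query this")
```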

Leading CPU manufacturers and the CPUs they make


Although several companies manufacture CPUs or develop software that supports them, the field has dwindled to just a few major players in recent years.

The two major companies in this area are Intel and Advanced Micro Devices (AMD). Both build their mainstream processors on the x86 instruction set architecture (ISA), a complex instruction set computer (CISC) design, while reduced instruction set computer (RISC) architectures are most closely associated with Arm-based processors.

  • Intel: Intel markets processors and microprocessors through four product lines. Its premium, high-end line is Intel Core. Intel’s Xeon® processors are targeted toward servers and workstations, while the Celeron® and Intel Pentium® lines are slower and less powerful than the Core line.
  • Advanced Micro Devices (AMD): AMD sells processors and microprocessors through two product types: CPUs and APUs (accelerated processing units), which are CPUs combined with proprietary Radeon™ graphics. AMD’s Ryzen™ processors are high-speed, high-performance microprocessors aimed largely at the video game market. The Athlon™ line was formerly considered AMD’s high-end offering, but AMD now positions it as a basic computing alternative.
  • Arm: Arm doesn’t manufacture chips itself; instead, it licenses its high-end processor designs and other proprietary technologies to companies that do. Apple, for example, no longer uses Intel chips in its Mac® computers and instead builds its own customized processors based on Arm designs. Other companies are following this example.

Related CPU and processor concepts


Graphics processing units (GPUs)

Although the term “graphics processing unit” includes the word “graphics,” the name does not fully capture what GPUs are about, which is speed: they accelerate computer graphics by performing a very large number of calculations at once.


The GPU is a type of electronic circuit found in PCs, smartphones and video game consoles, the last of which was their original application. Today, GPUs also serve purposes unrelated to graphics acceleration, such as cryptocurrency mining and the training of neural networks.

Microprocessors

The quest for computer miniaturization took a major step forward when engineers created a CPU small enough to be contained within a single integrated circuit chip: the microprocessor. Microprocessors are designated by the number of cores they support.

A CPU core is “the brain within the brain,” serving as the physical processing unit within a CPU. Modern microprocessors can contain multiple cores on a single chip. Because those cores occupy one socket together, they share the same memory and computing environment.

Output devices

Computing would be a vastly limited activity without output devices to act on the CPU’s instructions. Such devices include peripherals, which attach to the outside of a computer and vastly expand its functionality.

Peripherals provide the means for the computer user to interact with the computer and get it to process instructions according to the computer user’s wishes. They include desktop essentials like keyboards, mice, scanners and printers.

Peripherals are not the only attachments common to the modern computer. Input/output devices, such as video cameras and microphones, are also in wide use; they both receive and transmit information.

Power consumption

Power consumption raises several issues. One is the amount of heat that multi-core processors produce and how to dissipate that excess heat so the processor remains thermally protected. For this reason, hyperscale data centers (which house and run thousands of servers) are designed with extensive air-conditioning and cooling systems.

There are also questions of sustainability, even when we’re talking about a few computers instead of a few thousand. The more powerful the computer and its CPUs, the more energy is required to support its operation, and at data-center scale that demand is measured in megawatts of electrical power.
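
As a rough back-of-envelope illustration of why this matters at scale, the sketch below estimates annual energy use for a fleet of servers. The fleet size and average wattage are assumed placeholder figures, not measurements of any particular data center.

```python
# Back-of-envelope energy estimate for a server fleet.
# The inputs are illustrative assumptions, not measured values.

servers = 10_000          # assumed fleet size
avg_power_watts = 500     # assumed average draw per server, including overhead
hours_per_year = 24 * 365

energy_kwh = servers * avg_power_watts * hours_per_year / 1_000
print(f"Estimated annual energy use: {energy_kwh:,.0f} kWh")
print(f"Average continuous demand:   {servers * avg_power_watts / 1_000_000:.1f} MW")
```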

Specialized chips

The most profound development in computing since its origins, artificial intelligence (AI) is now impacting most if not all computing environments. One development we’re seeing in the CPU space is the creation of specialty processors that have been built specifically to handle the large and complex workloads associated with AI (or other specialty purposes):

  • Such equipment includes the Tensor Streaming Processor (TSP), which is built to handle machine learning (ML) and other AI workloads. Other products well suited to AI work are the 64-core AMD Ryzen Threadripper™ 3990X processor and the 24-core Intel Core i9-13900KS desktop processor.
  • For an application like video editing, many users opt for the Intel Core i7-14700KF, a 20-core, 28-thread CPU. Still others select the Ryzen 9 7900X, which is considered AMD’s best CPU for video editing purposes.
  • In terms of video game processors, the AMD Ryzen 7 5800X3D features 3D V-Cache™ technology, a stacked cache design that helps it accelerate game performance.
  • For general-purpose computing, such as running an OS like Windows or browsing multimedia websites, any recent-model AMD or Intel processor should easily handle routine tasks.

Transistors

Transistors are hugely important to electronics in general and to computing in particular. The term is a blend of “transfer” and “resistor” and typically refers to a semiconductor component used to switch, limit or amplify the electrical current flowing through a circuit.

In computing, transistors are just as elemental. The transistor is the basic building block of every microchip. Transistors make up much of the CPU, and their on/off switching produces the binary language of 0s and 1s that computers use to implement Boolean logic.
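
To connect transistors to the 0s and 1s they produce, the sketch below treats a NAND gate, one of the simplest circuits built from transistors, as a Python function and composes the other basic Boolean operations from it. That NOT, AND and OR can all be built from NAND alone is a standard result in digital logic.

```python
# Boolean logic from a single primitive gate, mirroring how transistor
# circuits combine to implement all of a computer's 0/1 logic.

def nand(a: int, b: int) -> int:
    """NAND: output is 0 only when both inputs are 1."""
    return 0 if (a == 1 and b == 1) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b}  AND={and_(a, b)}  OR={or_(a, b)}  NOT a={not_(a)}")
```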

The next wave of CPUs


Computer scientists are always working to increase the output and functionality of CPUs. Here are some projections about future CPUs:

  • New chip materials: The silicon chip has long been the mainstay of the computing industry and other electronics. The new wave of processors (link resides outside ibm.com) will take advantage of new chip materials that offer increased performance. These include carbon nanotubes (which display excellent thermal conductivity through carbon-based tubes approximately 100,000 times smaller than the width of a human hair), graphene (a substance that possesses outstanding thermal and electrical properties) and spintronic components (which rely on the study of the way electrons spin, and which could eventually produce a spinning transistor).
  • Quantum over binary: Although current CPUs depend on binary digits, quantum computing will eventually change that. Quantum computing derives its core principles from quantum mechanics, a discipline that has revolutionized the study of physics. Instead of bits that must be either 0 or 1, quantum computers use qubits, which can exist in multiple states at once, allowing certain computations to explore many possibilities simultaneously. For suitable workloads, the upshot for the user will be a marked increase in computing speed and an overall boost in processing power.
  • AI everywhere: As artificial intelligence continues to make its profound presence felt—both in the computing industry and in our daily lives—it will have a direct influence on CPU design. As the future unfolds, expect to see an increasing integration of AI functionality directly into computer hardware. When this happens, we’ll experience AI processing that’s significantly more efficient. Further, users will notice an increase in processing speed and devices that will be able to make decisions independently in real time. While we wait for that hardware implementation to occur, chip manufacturer Cerebras has already unveiled a processor its makers claim to be the “fastest AI chip in the world” (link resides outside ibm.com). Its WSE-3 chip can train AI models with as many as 24 trillion parameters. This mega-chip contains four trillion transistors, in addition to 900,000 cores.

CPUs that offer strength and flexibility


Companies expect a lot from the computers they invest in. In turn, those computers rely on CPUs with enough processing power to handle the challenging workloads found in today’s data-intensive business environment.

Organizations need workable solutions that can change as they change. Smart computing depends upon having equipment that capably supports your mission, even as that work evolves. IBM servers offer strength and flexibility, so you can concentrate on the job at hand. Find the IBM servers you need to get the results your organization relies upon—both today and tomorrow.

Source: ibm.com

Saturday, 8 June 2024

Prioritizing operational resiliency to reduce downtime in payments

Prioritizing operational resiliency to reduce downtime in payments

The average lost business cost following a data breach was USD 1.3 million in 2023, according to IBM’s Cost of a Data Breach report. With the rapid emergence of real-time payments, any downtime in payments connectivity can be a significant threat. This downtime can harm a business’s reputation, as well as the global financial ecosystem.

For this reason, it’s paramount that financial enterprises support their resiliency needs by adopting a robust infrastructure that is integrated across multiple environments, including the cloud, on prem and at the edge.

Resiliency helps financial institutions build customer and regulator confidence


Retaining customers is crucial to any business strategy, and maintaining customer trust is key to a financial institution’s success. We believe enterprises that prioritize resilience demonstrate their commitment to providing their consumers with a seamless experience in the event of disruption.

In addition to maintaining customer trust, financial enterprises must maintain regulator trust as well. Regulations around the world, such as the Digital Operational Resilience Act (DORA), continue to grow. DORA is a European Union regulation that aims to establish technical standards that financial entities and their critical third-party technology service providers must implement in their ICT systems by 17 January 2025.

DORA requires financial institutions to define the business recovery process, service levels and recovery times that are acceptable for their business across processes, including payments. In practice, this has prompted covered institutions to re-evaluate their cybersecurity protection measures.

To meet customer and regulator demands, it is critical that financial institutions are proactive and strategic about creating a cohesive strategy to modernize their payments infrastructure with resiliency and compliance at the forefront.

How IBM helps clients address resiliency in payments


As the need for operational resilience grows, enterprises increasingly adopt hybrid cloud strategies to store their data across multiple environments including the cloud, on prem and at the edge. By developing a workload placement strategy based on the uniqueness of a financial entity’s business processes and applications, they can optimize the output of these applications to enable the continuation of services 24/7.

IBM Cloud® remains committed to providing our clients with an enterprise-grade cloud platform that can help them address resiliency, performance, security and compliance obligations. IBM Cloud also supports mission-critical workloads and addresses evolving regulations around the globe.

To accelerate cloud adoption in financial services, we built IBM Cloud for Financial Services®, informed by the industry and for the industry. With security controls built into the platform, we aim to help financial entities minimize risk as they maintain and demonstrate their compliance with their regulators.

With approximately 500 industry practitioners across the globe, the IBM Payments Center® provides clients with expert guidance on their end-to-end payments modernization journey. Clients can also use payments as a service, including checks as a service, which can give them access to the benefits of a managed, secured cloud-based platform that can scale up and down to meet changing electronic payment and check volumes.

IBM’s Swift connectivity capabilities on IBM Cloud for Financial Services enable resiliency and use IBM Cloud multizone regions to help keep data secured and enable business continuity in the event of advanced ransomware or other cyberattacks.

IBM® can help you navigate the highly interconnected payments ecosystem and build resiliency. Partner with us to reduce downtime, protect your reputation and maintain the trust of your customers and regulators.

Source: ibm.com