
Friday, 6 September 2024

Primary storage vs. secondary storage: What’s the difference?


What is primary storage?


Computer memory is prioritized according to how often that memory is required for use in carrying out operating functions. Primary storage is the means of containing primary memory (or main memory), which is the computer’s working memory and major operational component. The main or primary memory is also called “main storage” or “internal memory.” It holds relatively small amounts of data, which the computer can access quickly as it functions.

Because primary memory is so frequently accessed, it’s designed to achieve faster processing speeds than secondary storage systems. Primary storage achieves this performance boost by its physical location on the computer motherboard and its proximity to the central processing unit (CPU).

By having primary storage closer to the CPU, it’s easier to both read and write to primary storage, in addition to gaining quick access to the programs, data and instructions that are in current use and held within primary storage. 

What is secondary storage?


External memory is also known as secondary memory and involves secondary storage devices that can store data in a persistent and ongoing manner. Because they retain their contents even when the power supply is interrupted, secondary storage devices are said to provide non-volatile storage.

These data storage devices can safeguard long-term data and establish operational permanence and a lasting record of existing procedures for archiving purposes. This makes them the perfect hosts for housing data backups, supporting disaster recovery efforts and maintaining the long-term storage and data protection of essential files.

How computer memory mimics human memory


To further understand the differences between primary storage and secondary storage, consider how human beings think. Each day, people are mentally bombarded by a startling amount of incoming data.

  • Personal contacts: The average American makes or receives 6 phone calls per day, as well as sends or receives approximately 32 texts.
  • Work data: In addition, most people are also engaged in work activities that involve incoming organizational data via any number of business directives or communiques.
  • Advertising: It’s been estimated that the average person is exposed to as many as 10,000 advertisements or sponsored messages per day. Subtracting 8 hours for an average night’s sleep, that equates to a person being exposed to an advertising message every 5.76 seconds that they’re awake.
  • News: The advertising figure does not include media-conveyed news information, which we’re receiving in an increasing number of formats. In many current television news programs, a single screen is being used to simultaneously transmit several types of information. For example, a news program might feature a video interview with a newsmaker while a scroll at the bottom of the screen announces breaking news headlines and a sidebar showcases latest stock market updates.
  • Social media: Nor does that figure account for the growing and pervasive influence of social media. Through social media websites, messaging boards and online communities, people are absorbing even more data.

Clearly, this is a lot of incoming information to absorb and process. From the moment we awake until we return to sleep, our minds scan through all this possible data, making a near-endless series of minute judgments about what information to retain and what to ignore. In most instances, that decision comes down to utility. If the mind perceives that this information will need to be recalled again, that data is awarded a higher order of mental priority.

These prioritization decisions happen with such rapid frequency that our minds are trained to input this data without truly realizing it, leaving it to the human mind to sort out how primary and secondary memory is allocated. Fortunately, the human mind is quite adept at managing such multitasking, as are modern computers.

An apt analogy exists between how the human mind works and how computer memory is managed. In the mind, a person’s short-term memory is more dedicated to the most pressing and “current” cognitive needs. This might include data such as an access code used for personal banking, the scheduled time of an important medical appointment or the contact information of current business clients. In other words, it’s information of the highest anticipated priority. Similarly, primary storage is concerned with the computer’s most pressing processing needs.

Secondary data storage, on the other hand, offers long-term storage, like a person’s long-term memory. Secondary storage tends to operate with less frequency and can require more computer processing power to retrieve long-stored data. In this way, it mirrors the same retention and processing as long-term memory. Examples of long-term memory for a human could include a driver’s license number, long-retained facts or a spouse’s phone number.

Memory used in primary storage


Numerous forms of primary storage memory dominate any discussion of computer science:

  • Random Access Memory (RAM): The most vitally important type of memory, RAM handles and houses numerous key processes, including system apps and processes the computer is currently managing. RAM also serves as a kind of launchpad for files or apps.
  • Read-Only Memory (ROM): ROM allows viewing of contents but does not allow viewers to make changes to collected data. ROM is non-volatile storage because its data remains even when the computer is turned off.
  • Cache memory: Another key form of data storage, cache memory holds data that is frequently retrieved and used. Cache memory has less storage capacity than RAM but offers faster access, an effect illustrated by the short sketch after this list.
  • Registers: The fastest data access times of all are posted by registers, which exist within CPUs and store data to achieve the goal of immediate processing.
  • Flash memory: Flash memory offers non-volatile storage that allows data to be written and saved (as well as be re-written and re-saved). Flash memory also enables speedy access times. Flash memory is used in smartphones, digital cameras, universal serial bus (USB) memory sticks and flash drives.
  • Cloud storage: Cloud storage might operate as primary storage, in certain circumstances. For example, organizations hosting apps in their own data centers require some type of cloud service for storage purposes.
  • Dynamic Random-Access Memory (DRAM): A type of RAM-based semiconductor memory, DRAM features a design that assigns each data bit to a memory cell containing a tiny capacitor and transistor. DRAM is volatile memory: a refresh circuit must periodically recharge each capacitor, so its data survives only while the system is powered. DRAM is most often used in creating a computer’s main memory.
  • Static Random-Access Memory (SRAM): Another type of RAM-based semiconductor memory, SRAM’s architecture is built around latching flip-flop circuitry for data storage. SRAM is volatile storage that loses its data when power is removed from the system. However, when it is operational, it provides faster access than DRAM, which often drives SRAM’s price upward. SRAM is typically used within cache memory and registers.
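
The practical effect of this hierarchy can be demonstrated even from a high-level language. The short Python sketch below is a minimal illustration (not an IBM tool): it reads the same in-memory data once in sequential order and once in random order. The random walk defeats CPU cache prefetching, so it typically runs measurably slower even though both passes touch RAM the same number of times; exact ratios vary by machine and interpreter.

```python
import random
import time

N = 1 << 22  # roughly 4 million integers held in primary storage (RAM)

data = list(range(N))
sequential = list(range(N))   # cache-friendly access order
shuffled = sequential[:]
random.shuffle(shuffled)      # cache-hostile access order

def walk(indices):
    """Sum data[i] over the given index order and return the elapsed time."""
    start = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return time.perf_counter() - start

print(f"sequential walk: {walk(sequential):.3f}s")
print(f"random walk:     {walk(shuffled):.3f}s")
```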

Memory used in secondary storage


There are three forms of memory commonly used in secondary storage:

  • Magnetic storage: Magnetic storage devices access data that’s written onto a spinning metal disk by magnetizing regions of its surface.
  • Optical storage: If a storage device uses a laser to read data off a metal or plastic disc encoded with microscopic pits along a spiral track (loosely analogous to the groove of an audio LP), that’s considered optical storage.
  • Solid state storage: Solid state storage (SSS) devices are powered by electronic circuits. Flash memory is commonly used in SSS devices, although some use random-access memory (RAM) with battery backup. SSS offers high-speed data transfer and high performance, although its financial costs when compared to magnetic storage and optical storage can prove prohibitive.

Types of primary storage devices


Storage resources are designated as primary storage according to their perceived utility and how that resource is used. Some observers incorrectly assume that primary storage depends upon the storage space of a particular storage medium, the amount of data contained within its storage capacity or its specific storage architecture. It’s actually not about how a storage medium might store information. It’s about the anticipated utility of that storage medium.

Through this utility-based focus, it’s possible for primary storage devices to take multiple forms:

  • Hard disk drives (HDDs)
  • Flash-based solid-state drives (SSDs)
  • Shared storage area network (SAN)
  • Network attached storage (NAS)

Types of secondary storage devices


While some forms of secondary memory are internally based, there are also secondary storage devices that are external in nature. External storage devices (also called auxiliary storage devices) can be easily unplugged and used with other computer systems, and they offer non-volatile storage:

  • HDDs
  • Floppy disks
  • Magnetic tape drives
  • Portable hard drives
  • Flash-based solid-state drives
  • Memory cards
  • Flash drives
  • USB drives
  • DVDs
  • CD-ROMs
  • Blu-ray Discs
  • CDs 
Source: ibm.com

Thursday, 16 November 2023

Leveraging IBM Cloud for electronic design automation (EDA) workloads


Electronic design automation (EDA) is a market segment consisting of software, hardware and services with the goal of assisting in the definition, planning, design, implementation, verification and subsequent manufacturing of semiconductor devices (or chips). The primary providers of this service are semiconductor foundries or fabs.

While EDA solutions are not directly involved in the manufacture of chips, they play a critical role in three ways:

1. EDA tools are used to design and validate the semiconductor manufacturing process to ensure it delivers the required performance and density.

2. EDA tools are used to verify that a design will meet all the manufacturing process requirements. This area of focus is known as design for manufacturability (DFM).

3. After the chip is manufactured, there is a growing requirement to monitor the device’s performance from post-manufacturing test to deployment in the field. This third application is referred to as silicon lifecycle management (SLM).

Increasing compute demands from higher-fidelity simulation and modeling workloads, more competition, and the need to bring products to market faster mean that EDA HPC environments are continually growing in scale. Organizations are looking to best leverage technologies—such as accelerators, containerization and hybrid cloud—to gain a competitive computing edge.

EDA software and integrated circuit design

Electronic design automation (EDA) software plays a pivotal role in shaping and validating cutting-edge semiconductor chips, optimizing their manufacturing processes, and ensuring that advancements in performance and density are consistently achieved with unwavering reliability.

The expenses associated with acquiring and maintaining the necessary computing environments, tools and IT expertise to operate EDA tools present a significant barrier for startups and small businesses seeking entry into this market. Simultaneously, these costs remain a crucial concern for established firms implementing EDA designs. Chip designers and manufacturers find themselves under immense pressure to usher in new chip generations that exhibit increased density, reliability, and efficiency and adhere to strict timelines—a pivotal factor for achieving success.

This challenge in integrated circuit (IC) design and manufacturing can be visualized as a triangular opportunity space, as depicted in the figure below:

[Figure: the triangular opportunity space bounded by compute infrastructure, designers and EDA licenses]

EDA industry challenges


In the electronic design automation (EDA) space, design opportunities revolve around three key resources:

1. Compute infrastructure
2. Designers
3. EDA licenses

These resources delineate the designer’s available opportunity space.

For design businesses, the key challenge is selecting projects that promise the highest potential for business growth and profitability. To expand these opportunities, an increase in the pool of designers, licenses or compute infrastructure is essential.

Compute infrastructure

To expand computing infrastructure on-premises, extensive planning and time are required for the purchase, installation, configuration and utilization of compute resources. Delays may occur due to compute market bottlenecks, the authorization of new data center resources, and the construction of electrical, cooling and power infrastructure. Even for large companies with substantial on-premises data centers, quickly meeting the demand for expanded data centers necessitates external assistance.

Designers

The second factor limiting realizable opportunities is the pool of designers. Designers are highly skilled engineers, and hiring them swiftly is a challenge. The educational foundation required for design takes years to establish, and it often takes a year or more to effectively integrate new designers into existing design teams. This makes designers the most inelastic of the three resources in the figure, constraining business opportunities.

EDA licenses

Lastly, EDA licenses are usually governed by contracts specifying the permissible quantities and types of tools a firm can use. While large enterprises may explore virtually unlimited enterprise licensing contracts, such contracts are prohibitively expensive for startups and small to medium-sized design firms.

Leveraging cloud computing to speed time to market


Firms (irrespective of size) aiming to expand their business horizons and gain a competitive edge in terms of time-to-market can strategically leverage two key elements to enhance opportunities: cloud computing and new EDA licensing.

The advent of cloud computing enables the rapid expansion of compute infrastructure by provisioning or creating new infrastructure in public clouds within minutes, in contrast to the months required for internal infrastructure development. EDA software companies have also started offering peak-licensing models, enabling design houses to utilize EDA software in the cloud under shorter terms than traditional licensing contracts.

Leveraging cloud computing and new EDA licensing models, most design houses can significantly expand their business opportunity horizons. The availability of designers remains an inelastic resource; however, firms can enhance their design productivity by harnessing the automation advantages offered by EDA software and cloud computing infrastructure provisioning.

How IBM is leading EDA


In conjunction with IBM’s deep expertise in semiconductor technology, data, and artificial intelligence (AI), our broad EDA and HPC product portfolio encompasses systems, storage, AI, grid, and scalable job management. Our award-winning storage, data transfer and workload scheduling products—such as IBM Storage Scale, IBM Spectrum LSF and IBM Aspera—have been tightly integrated to deliver high-performance parallel storage solutions and large-scale job management across multiple clusters and computing resources.

IBM Cloud EDA infrastructure offers foundry-secure patterns and environments, supported by a single point of ownership. EDA firms can quickly derive value from secure, high-performance, user-friendly cloud solutions built on top of IBM’s industry-leading cloud storage and job management infrastructure.

In the coming months, IBM technical leaders will publish a white paper highlighting our unique capability to offer optimized IBM public cloud infrastructure for EDA workloads, serving both large and small enterprise customers.

Source: ibm.com

Tuesday, 15 February 2022

Redefine cyber resilience with IBM FlashSystem


Today, we’re announcing new data resilience capabilities for the IBM FlashSystem family of all-flash arrays to help you better detect and recover quickly from ransomware and other cyberattacks. We’re also announcing new members of the FlashSystem family with higher levels of performance to help accommodate these new cyber resilience capabilities alongside production workloads.

Cybercrime continues to be a major concern for business. Almost every day we see reports of new attacks. The average cost of a data breach is $4.24 million, and recovery can take days or weeks. Cyberattacks have an immediate impact on business and can also have a lasting reputational impact if the business is unavailable for a long time.

How Cyber Vault Can Help Businesses Recover Rapidly

Even with the best cyberattack defense strategy, it’s possible that an attack could bypass those defenses. That’s why it’s essential for businesses to have both defense and recovery strategies in place. Storage plays a central role in recovering from an attack.

IBM Safeguarded Copy, announced last year, automatically creates point-in-time snapshots according to an administrator-defined schedule. These snapshots are designed to be immutable (snapshots cannot be changed) and protected (snapshots cannot be deleted except by specially defined users). These characteristics help protect the snapshots from malware or ransomware and from disgruntled employees. The snapshots can be used to quickly recover production data following an attack.
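
Safeguarded Copy itself is configured through the storage system’s management tooling rather than through code, but the arithmetic behind an administrator-defined schedule is easy to sketch. The hypothetical Python example below is not an IBM API; the function and parameter names are invented for illustration. It shows how a snapshot interval and retention window determine how many immutable copies exist at steady state and how far back a recovery might have to reach.

```python
from datetime import timedelta

def safeguarded_copy_footprint(interval_hours: float, retention_days: float):
    """Estimate the steady-state snapshot count for a hypothetical schedule.

    interval_hours  - admin-defined time between immutable snapshots (assumed)
    retention_days  - retention window before snapshots expire (assumed)
    """
    interval = timedelta(hours=interval_hours)
    retention = timedelta(days=retention_days)
    snapshots_retained = int(retention / interval)
    # Worst case: the newest snapshot already contains corruption, so recovery
    # falls back to the one taken an interval earlier.
    worst_case_rollback = 2 * interval
    return snapshots_retained, worst_case_rollback

# Example: a snapshot every 6 hours, kept for 7 days.
count, rollback = safeguarded_copy_footprint(6, 7)
print(f"{count} snapshots retained; worst-case rollback window {rollback}")
```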

Recovery from an attack involves three major phases: detection that an attack has occurred, preparing a response to the attack, and recovery from the attack. Each of these phases can take hours or longer, contributing to the overall business impact of an attack.

An offering implemented by IBM Lab Services, IBM FlashSystem Cyber Vault is designed to help speed all phases of this process. Cyber Vault runs continuously and monitors snapshots as they are created by Safeguarded Copy. Using standard database tools and other software, Cyber Vault checks Safeguarded Copy snapshots for corruption. If Cyber Vault finds such changes, that is an immediate sign an attack may be occurring. IBM FlashSystem Cyber Vault is based on a proven solution already used by more than 100 customers worldwide with IBM DS8000 storage.

When preparing a response, knowing the last snapshots with no evidence of an attack speeds determining which snapshot to use. And since Safeguarded Copy snapshots are on the same FlashSystem storage as operational data, recovery is fast using the same snapshot technology. Cyber Vault automation helps speed the process of recovery further. With these advantages, FlashSystem Cyber Vault is designed to help reduce cyberattack recovery time from days to just hours.

IBM FlashSystem Cyber Vault is part of IBM’s comprehensive approach to data resilience: high availability and remote replication for disaster recovery in IBM FlashSystem. Backup, recovery, and copy management using IBM Spectrum Protect Suite. Ultra-low-cost long term storage with physical air gap protection with IBM tape storage. Early attack detection through IBM QRadar and IBM Guardium. And proactive attack protection using IBM Safeguarded Copy.

High Performance Hybrid Cloud Storage Systems

To ensure cyber security does not have to come at the expense of production workload efficiency, IBM is introducing new storage systems with greater performance than previous systems.

Built for growing enterprises needing the highest capability and resilience, IBM FlashSystem 9500 offers twice the performance, connectivity, and capacity of FlashSystem 9200 and up to 50% more cache (3TB). The system supports twice as many (48) high-performance NVMe drives. Likewise, FlashSystem 9500 supports up to forty-eight 32Gbps Fibre Channel ports with planned support for 64Gbps Fibre Channel ports. There’s also an extensive range of Ethernet options, including 100GbE RoCEv2.


Supported drives include new IBM FlashCore Modules (FCM 3) with improved hardware compression capability, Storage Class Memory drives for ultra-low latency workloads, or industry-standard NVMe flash drives. FCMs allow 2.3PB effective capacity with DRAID6 per control enclosure, and 4.5PB effective capacity with forty-eight 38TB FCMs in a planned future update. These new FCM 3 drives help reduce operational cost with a maximum of 116TB per drive and an impressive 18PB of effective capacity in only 16U of rack space with FlashSystem 9500. FCM 3 drives are self-encrypting and are designed to support FIPS 140-3 Level 2 certification, demonstrating that they meet rigorous security standards as defined by the US government.

FlashSystem 9500 also provides rock-solid data resilience with numerous safeguards, including multi-factor authentication designed to validate users and secure boot to help ensure only IBM-authorized software runs on the system. Additionally, the IBM FlashSystem family offers two- and three-site replication, plus configuration options that can include an optional 100% data availability guarantee to help ensure business continuity.

“In our beta testing, FlashSystem 9500 with FlashCore Module compression enabled showed the lowest latency we have seen together with the efficiency benefit of compression. FlashSystem 9500 delivers the most IOPS and throughput of any dual controller system we have tested and even beat some four-controller systems.”

— Technical Storage Leader at a major European Bank.

New IBM FlashSystem 7300 offers about 25% better performance than FlashSystem 7200, supports FCM 3 with improved compression, and supports 100GbE RoCEv2. With 24 NVMe drives, it supports up to 2.2PB effective capacity per control enclosure.


For customers seeking a storage virtualization system, new IBM SAN Volume Controller engines are based on the same technology as IBM FlashSystem 9500 and so deliver about double the performance and connectivity of the previous SVC engine. SAN Volume Controller is designed for storage virtualization and so does not include storage capacity but is capable of virtualizing over 500 different storage systems from IBM and other vendors.


Like all members of the IBM FlashSystem family, these new systems are designed to be simple to use in environments with mixed deployments that may require multiple different systems at the core, cloud, or at the edge. They deliver a common set of comprehensive storage data services using a single software platform provided by IBM Spectrum Virtualize. Hybrid cloud capability consistent with on-prem systems is available for IBM Cloud, AWS, and Microsoft Azure with IBM Spectrum Virtualize for Public Cloud. These systems also form the foundation of IBM Storage as a Service.

Source: ibm.com

Saturday, 23 October 2021

Build a simpler, more resilient hybrid cloud with IBM Storage


It’s clear that businesses are embracing hybrid cloud. Indeed, a recent report from the IBM Institute for Business Value states that 97% of businesses are piloting, implementing or integrating cloud in their operations.

However, as hybrid cloud environments become the norm, businesses must contend with additional IT complexity, public cloud costs and threats from cyberattacks and other data-destructive events.

Today, IBM® is announcing new capabilities and integrations designed to help organizations reduce IT complexity, deploy cost-effective solutions and improve data and cyber resilience for hybrid cloud environments.

Extending hybrid cloud storage simplicity to Microsoft Azure

One way to reduce hybrid cloud complexity is to ensure consistent function, APIs, management, and user interface across on-premises and public cloud platforms. This can be accomplished with software-defined storage (SDS). That’s where IBM Spectrum® Virtualize for Public Cloud comes in. It’s the cloud-based counterpart of the software in IBM FlashSystem® and SAN Volume Controller.

IBM Spectrum Virtualize for Public Cloud provides the same storage functionality in the cloud as you find on-premises, which makes it easy to implement hybrid cloud storage scenarios such as disaster recovery, cloud DevOps, and data migration. And since it provides this same function across clouds, it also makes it easy to use multiple clouds or to move from cloud to cloud.

Now we’re extending our cloud support to Microsoft Azure in addition to IBM Cloud® and Amazon Web Services (AWS).

On Azure, IBM Spectrum Virtualize for Public Cloud supports IBM Safeguarded Copy, which automatically creates isolated immutable snapshot copies designed to be inaccessible by software – including malware – and which can be used to recover on-premises or cloud data quickly in the event of a data destructive event.

Expediting Turbonomic integration for automated operations

IBM recently acquired Turbonomic, an Application Resource Management (ARM) and Network Performance Management (NPM) software provider. By acquiring Turbonomic, IBM is the only company that will be able to provide customers with AI-powered automation capabilities that span from AIOps to application and infrastructure observability.

IBM and Turbonomic plan to deliver rapid benefits for customers of IBM FlashSystem by improving application performance awareness and automation.

◉ Turbonomic will collect information from IBM FlashSystem storage including storage capacity, IOPS, and latency for each storage array.

◉ Turbonomic’s analysis engine combines FlashSystem data, virtualization data and application data to continuously automate non-disruptive actions and ensure applications get the storage performance they require.

This can eliminate the need for unnecessary over-provisioning and safely increase density without sacrificing performance. On average, customers can increase density by 30% without any application performance impact.

For environments using Instana®, Red Hat® OpenShift®, or any major hypervisor (such as VMware vSphere) with IBM FlashSystem, Turbonomic will observe the entire stack from application to array. This enables all operations teams to quickly visualize and automate corrective actions to mitigate performance risk caused by resource congestion, while safely increasing density.

Other key storage enhancements

We’re continuing to enhance the data and cyber resilience capabilities of our storage platform to help customers combat the threat of ransomware and other data destructive events. Enhancements include:

IBM Spectrum Protect Plus offers a suite of enhancements specifically designed for Red Hat OpenShift and Kubernetes to support data protection for containerized environments. These include Red Hat certification, support for OpenShift workloads deployed on Azure and direct backup to S3 object storage.

IBM Spectrum Protect now supports replicating backup data to additional data protection servers. Additionally, IBM Spectrum Protect now supports using object storage for long-term data retention to reduce the cost of backup.

IBM Spectrum Scale global data fabric gains a new high-performance S3 object interface. This means that cloud native S3 applications can provide faster results without the typical delay for object storage. Additionally, a new GPU direct storage (GDS) interface enables NVIDIA applications to run up to 100% faster with IBM Spectrum Scale.

IBM Elastic Storage® System 3200 now includes a 38TB IBM FlashCore® Module. This new FlashCore Module is double the size of the previous largest option, doubling the capacity of an ESS 3200 to 912TB in only 2 rack units.

Source: ibm.com

Thursday, 23 September 2021

IBM rolls out Spectrum Fusion HCI: all-in-one cloud native storage


Achieving a state of digital modernity with cloud-native applications requires a shift in IT investment strategies with a renewed focus on speed and flexibility, across the entire enterprise from the edge to the core to the cloud.

And for an increasing number of organizations around the world, that means a sophisticated convergence of containers and infrastructure. Specifically, advanced hyperconverged systems optimized to help companies create new applications as microservices, deploy them in containers and manage them with Kubernetes are the path to the modernized enterprise.

The new IBM Spectrum® Fusion Hyper-Converged Infrastructure, announced in April and made generally available today, is just such a system. The enterprise-grade, turnkey Spectrum Fusion HCI is designed from the ground up to streamline container development and to enable and ease access to the hybrid cloud.

It combines Red Hat® OpenShift® with integrated compute, storage, networking and services in a system that is ready from the factory to build, manage, deploy and concurrently run containerized applications with sustained high performance and pre-built security.

IBM Spectrum Fusion HCI can connect with other geographically dispersed systems based on Red Hat OpenShift technology creating a vast infrastructure network to run containers everywhere. As this network scales, working with Kubernetes can be complex as organizations often face multiple consoles, user interfaces, logins and more.

To address this challenge, IBM Spectrum Fusion HCI simplifies management by providing unified visibility and control to run all your Kubernetes environments from a single management point, across local resources, remote data centers and hybrid cloud environments. This allows organizations to scale applications from development to production without complexity, helping enable the fast deployment of applications and reduce management costs.

According to the “2020 Red Hat Global Customer Tech Outlook” report, Artificial Intelligence (AI) represents one of the top emerging workloads across hybrid cloud deployments. IBM Spectrum Fusion HCI supports GPU applications with NVIDIA A100 GPU servers to help organizations simplify and accelerate AI, compute-intensive Machine Learning (ML) workloads for data scientists, and inferencing tasks across data centers, edge, and public clouds.

Backup of application data as well as systems metadata is a critical requirement. IBM Spectrum Fusion HCI integrates with IBM Spectrum Protect Plus for OpenShift to provide complete backup of the system. The backup feature allows administrators to take the entire system backup as well as granular application-level backup with customized backup intervals. The application-level backup can be stored locally leveraging the storage snapshot capability or externally using IBM Spectrum Protect Plus vSnap server’s wide range of target platforms ranging from block storage to S3 compliant object storage.


As customers move to hybrid cloud deployments, containers play a key role in portability and consistency across different environments. IBM Spectrum Fusion HCI is the right all-in-one solution that makes containers easy to build, easy to manage, easy to integrate and easy to run; helping clients improve operational agility by enabling the rapid delivery of cloud-native applications with ubiquitous access to data from edge-to-core-to-the-cloud.

And we’re not stopping there. In early 2022, we’ll release a software defined-only version of Spectrum Fusion that organizations can run on any system with Kubernetes and on any cloud. In these times of unprecedented change, organizations need systems like Spectrum Fusion that have the agility to reach the market faster and adapt more quickly to disruptions.

Source: ibm.com

Saturday, 11 September 2021

IBM ships new LTO 9 Tape Drives with greater density, performance, and resiliency


As data generation continues to explode around the world with some researchers suggesting a doubling of the ‘digital universe’ to more than 180 zettabytes by 2025, increasing pressure is being placed upon the administrators responsible for storing, managing, and securing that data.

To help enterprises contend with the challenge, IBM, which has been innovating in data storage for seven decades, announced today the general availability of the industry’s first magnetic tapes and drives that can store an unprecedented 45TB of compressed data on a single cartridge (18TB uncompressed). The new drive and tape are based on the new Ultrium LTO-9 specification and designed to provide organizations greater access, performance and resiliency for data stored on-prem, in the cloud, or at the edge.

In addition to the 50% capacity boost from its predecessor, LTO-8, which supports 12TB of data (30TB compressed), the new IBM LTO-9 Tape Drive, which comes in three models, the F9C (Fibre Channel), F9S (Fibre Channel), and S9C (SAS), features several key performance improvements over LTO-8. For example, the new drives support data transfer rates of up to 400 MB/s for full-height models and 300 MB/s for half-height models – an 11% boost from the previous generation.

The new drives also feature IBM’s new Open Recommended Access Order (oRAO), a new data retrieval accelerator that enables applications to retrieve data from tapes with dramatically reduced seek time between files. Specifically, oRAO, which can be used with compressed or uncompressed data, can reduce those access times by a whopping 73%. Developed from IBM file access acceleration technology, oRAO can also speed cyber resilience response times by shortening the time needed to recover data.

Building Up Cyber Resiliency with IBM LTO-9

The full-height IBM LTO-9 Tape Drive is designed to natively support data encryption, with core hardware encryption and decryption capabilities resident in the tape drive itself to ensure data privacy and reduce the risk of data corruption due to virus or sabotage.

According to a recent security report, from 2020 to 2021 the average total cost of a data breach increased by nearly 10% year over year, the largest single-year cost increase in the last seven years. Today, ransomware is one of the costlier types of breaches, with an average cost of $4.62M per breach, and one of the most common, with cybersecurity firm SonicWall reporting that ransomware attacks rose to 304.6 million in 2020, up 62% over 2019.

In other words, ransomware is here to stay for the foreseeable future. It is no longer a matter of if your organization will be attacked, but when and how often. Looking to limit the impact of cyberattacks, the new IBM LTO-9 tapes and drives enable organizations to create cost-effective, cyber resilience strategies.

◉ The cost-effective data backup

Tape backups allow you to safely recover from a ransomware attack, helping you avoid expensive ransoms and other fees. IBM tape solutions are also extremely cost-effective, costing less than 1 cent per GB per month (roughly 0.59¢/GB, or about $5.89/TB). Also, by implementing IBM LTO-9 tapes and drives, companies can store up to 1.04EB of compressed data per 18-frame tape library and up to 39PB of compressed data in a 10-sq-ft tape library with LTO Ultrium 9 tape cartridges.

Additionally, customers can reduce the total cost of ownership of their tape library by up to 39% by swapping in LTO-9 technology over LTO-8. And remember, tape technology does not add extra charges to retrieve your data.
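
The cost and capacity figures quoted above are easy to sanity-check. The Python sketch below is illustrative arithmetic only; the per-GB rate is the figure cited in this post, not a current price list, and the cartridge count assumes the 45TB compressed capacity mentioned earlier.

```python
LTO9_COMPRESSED_TB = 45        # per-cartridge capacity quoted in this post
COST_PER_GB_MONTH = 0.0059     # USD, the ~0.59 cents/GB figure quoted above

def tape_estimate(dataset_tb: float):
    """Return (cartridges needed, estimated monthly cost in USD) for a dataset."""
    cartridges = int(-(-dataset_tb // LTO9_COMPRESSED_TB))  # ceiling division
    monthly_cost = dataset_tb * 1000 * COST_PER_GB_MONTH
    return cartridges, monthly_cost

# Example: a 2 PB (2,000 TB) backup set.
carts, cost = tape_estimate(2000)
print(f"{carts} LTO-9 cartridges, roughly ${cost:,.0f} per month")
```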

◉ The best physical air-gap between your data and cyber criminals

Most organizations have a cyber recovery plan that relies on data backups. The best practice in this situation is to create a physical “air gap” to ensure the backup is going to a system that is secure and offline. Utilizing tape storage is the ideal way to provide customers with that physical gap. Tapes are portable and can be easily stationed in remote, offline locations for superior protection from natural or manmade threats. When new IBM LTO Ultrium 9 Data Cartridges are removed from the tape drive or library, they are physically “air-gapped,” greatly reducing the risk of cyber sabotage.

◉ Anti-corruption: tape provides data immutability with WORM capabilities

The IBM LTO-9 Ultrium WORM data cartridge model stores data in a non-erasable, non-rewritable format to prevent overwriting and reduce the risk of data loss due to human error.

Organizations evaluating 10-year cyber security plans should consider IBM Tape Storage to keep critical data backed up, immutable with WORM data cartridges, and encrypted behind air-gap protection to prevent blackmail. If an attack occurs and restoring your entire storage is required, a clean copy of the data on IBM LTO-9 tape technology is likely to be the cheapest and most reliable recovery option, without extra retrieval fees to a cloud provider.

As well as helping you protect against a malware or ransomware event, the WORM capabilities are often essential to meet regulatory and legal compliance across many industries and for publicly traded companies. With the immutability of LTO-9 WORM data cartridges, customers can be assured their data will always be available for audits, legal issues, and financial compliance.

Limit your exposure to malware and ransomware attacks with IBM LTO-9 tape storage.

Source: ibm.com

Monday, 2 August 2021

Boost cyber resilience and more with IBM Storage


When we speak with our customers about their storage concerns, we find that cybersecurity, cost and hybrid cloud deployment are often at the top of their minds. Today we are announcing new cyber resilience capabilities and a flexible consumption model to address these needs.

Improve cyber resilience with Safeguarded Copy for IBM FlashSystem

Cybercrime continues to be a major concern for business. According to one estimate, it’s likely a company will be the target of a cyberattack every 11 seconds and the total cost of these attacks could exceed USD 6 trillion in 2021 alone. Almost every day we see reports of new attacks.

Traditional approaches to data protection work well for their intended purposes but aren’t adequate to protect against cyberattacks, which may encrypt or otherwise corrupt your data. Remote replication for disaster recovery will replicate all changes — malicious or not — to the remote copy. And data stored on offline media or the cloud may take too long to recover from a widespread attack. Recovery has taken some businesses days or weeks of downtime.

What’s needed is a solution that combines the protection of offline copies with the speed of local copies.

What IBM offers

The new Safeguarded Copy function for IBM FlashSystem® and IBM SAN Volume Controller is designed to help businesses recover quickly and safely from a cyberattack, helping reduce recovery to minutes or hours.

Safeguarded Copy automatically creates efficient immutable snapshots according to a schedule. These snapshots are stored specially by the system and cannot be connected to servers, creating a logical “air gap” from malware or other threats. They also cannot be changed and cannot be deleted except according to a pre-planned schedule, which helps protect against errors or actions of unhappy staff.

In the event of an attack, our orchestration software IBM Copy Services Manager helps you identify the best Safeguarded backup to use and automates the process to restore data to online volumes. Because a restore action uses the same snapshot technology, it is almost instantaneous — much faster than using offline copies or copies stored in the cloud.

With storage virtualization in FlashSystem and IBM SAN Volume Controller you can extend the benefits of Safeguarded Copy to over 500 different storage systems from IBM and all major vendors.

“Keeping our data secure and available is paramount,” says Alexander Würflinger, CIO, Archdiocese Salzburg. “As an IBM FlashSystem user, we are eager to deploy Safeguarded Copy to ensure we always have a fully protected and tamper-proof data copy if needed to quickly recover from any form of attack.”

Detecting a threat before it starts can help speed recovery even more. IBM Security QRadar® is a security information and event management (SIEM) and threat management system that monitors activities looking for signs that may indicate the start of an attack such as logins from unusual IP addresses or outside business hours. Now IBM QRadar can proactively invoke Safeguarded Copy to create a protected backup at the first sign of a threat.

Increase storage flexibility with IBM Storage as a Service

As businesses increasingly deploy storage in a hybrid cloud model, there’s been growing demand for more flexible storage options. Consumers want to deploy cloud storage as and when they need it without having to plan or complete procurement cycles. They don’t want to worry about maintaining storage and instead prefer to have the entire lifecycle managed for them, and they want a similar approach for their data that isn’t stored in public cloud.

Demand is so great that International Data Corporation (IDC) predicts that by 2024, more than half of data center infrastructure will be consumed and operated using this as-a-service model.

What IBM offers

IBM Storage as a Service, part of IBM’s flexible infrastructure initiative, extends your hybrid cloud experience with a new flexible consumption model enabled for both your on-premises and hybrid cloud infrastructure needs.

Getting started is easy. Pick the storage performance tier you need, an initial capacity and a length of commitment. Then IBM delivers the IBM FlashSystem storage that meets your needs with 50% extra capacity to allow for plenty of growth. Anytime you need to use storage, you can deploy in under 10 minutes from installed capacity. And as you use storage, IBM automatically delivers more once you’ve used 75% of the installed capacity.
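
The capacity buffer described above is simple to model. In the sketch below, the 50% growth headroom and the 75% utilization trigger come from this post, while the function itself is purely illustrative and not an IBM tool.

```python
def staas_capacity_check(base_commit_tb: float, used_tb: float):
    """Model the installed-capacity buffer described in this post.

    base_commit_tb - the capacity level the customer commits to (assumed input)
    used_tb        - capacity currently consumed
    """
    installed_tb = base_commit_tb * 1.5      # delivered with 50% extra headroom
    utilization = used_tb / installed_tb
    expansion_due = utilization >= 0.75      # more capacity ships at 75% use
    return installed_tb, utilization, expansion_due

installed, util, expand = staas_capacity_check(base_commit_tb=500, used_tb=600)
print(f"installed {installed:.0f} TB, {util:.0%} used, expansion triggered: {expand}")
```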

Plus, you pay for only what you use. Billing is based on physical capacity, so you get all the benefits of data reduction and there’s no penalty for growth above the base commitment or for data that doesn’t compress well. It’s designed for OPEX financial treatment.

IBM Storage as a Service comes with all the rich functional capabilities of IBM FlashSystem — including the new Safeguarded Copy function and our optional 100% availability guarantee — and is ready for hybrid cloud deployments. IBM Spectrum Virtualize for Public Cloud provides a consistent storage experience whether on premises or in the cloud, and customers can leverage Equinix cloud-adjacent colocation services.

“In our business, data generation is a constant and maintaining its integrity an imperative,” said Dave Anderson-Ward at Ordnance Survey, Great Britain’s national mapping agency. “We’re always looking for the most efficient ways to securely store and manage these growing volumes. The new IBM Storage as a Service solution is something to consider because it would give us the benefit of maintaining our data on premises, while paying for capacity, as we use it. In addition, with IBM maintaining the system, we can focus on the data.”

Customers also receive concierge service from an IBM Technical Account Manager. They help you get started, monitor your environment and advise on best practices you should adopt. They also help coordinate IBM resources to speed resolution if you need to open a ticket.

IBM Storage as a Service is an easy, flexible way to get IBM FlashSystem storage with the benefits of subscription pricing and complete lifecycle support from IBM.

Deliver better performance and availability with new mainframe storage models

According to the CIO Tech Poll: Tech Priorities, 2021 research, 46% of IT leaders are planning to increase spending in artificial intelligence (AI) and machine learning over the next 12 months.

Remember: AI and big data projects have immensely high data-usage requirements for storage. To support investments in these technologies, businesses will need the right storage infrastructure.

What IBM offers

IBM is announcing a new generation of Storage for IBM Z Systems designed to deliver better performance and availability. That improved performance provides headroom for new AI and machine learning workloads.

First, IBM DS8980F delivers ten times better availability: now “seven nines” or an average of only 3 seconds downtime per year. The system also has more than double maximum cache and provides 2x greater bandwidth capacity with 32 Gb/s fibre channel host adapters. Together these enhancements result in a 25% response time reduction for a mainframe database-like workload, which helps you add functions such as AI to extract more value from your data on IBM Z.
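
The “seven nines” figure translates directly into downtime. A quick Python check of the number quoted above:

```python
def yearly_downtime_seconds(availability: float) -> float:
    """Convert an availability fraction into expected downtime per year."""
    seconds_per_year = 365.25 * 24 * 3600
    return (1.0 - availability) * seconds_per_year

print(f"{yearly_downtime_seconds(0.99999):.0f} s/year at five nines")     # ~316 s
print(f"{yearly_downtime_seconds(0.9999999):.1f} s/year at seven nines")  # ~3.2 s
```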

The new IBM TS7770 features a flash cache that delivers the same performance in only one drawer as ten drawers of disk drives, enabling you to reduce rack space needs. The new TS7770, DS8910F, and networking can now all be combined into a single rack, which reduces floorspace by 50% compared with the systems separately.

Source: ibm.com

Tuesday, 20 July 2021

How to choose the right IT consumption model for your storage


Evolving business models and IT strategies are quickly changing the way businesses consume and pay for their data storage. The adoption of hybrid cloud has spurred growing demand for consumption-based IT as an alternative to traditional cash purchases and leases.

In response, many vendors are offering flexible consumption or “pay-per-use” pricing models and subscriptions that bring cloud-like economics to the data center and hybrid cloud. By 2026, the global Storage as a Service market is projected to reach USD 100.21 billion – up from USD 11.63 billion in 2018 – according to Verified Market Research.

With so many deployment types and IT consumption models suddenly available, it can be difficult to know which one is right for your storage strategy. In this blog post, we’ll outline the main types so that you can make an informed investment decision.

What is consumption-based pricing?

Consumption-based pricing refers to products and services with usage-based billing, meaning you pay only for the capacity you need as you consume it. These models can help you save money by replacing high upfront capital expenses with predictable quarterly charges aligned directly with your changing business needs. The idea is that you can quickly scale by consuming storage capacity instantly, provisioning the resources up or down as needed. Many variations exist, and most programs have a minimum term and/or capacity requirement. 

Types of deployment and IT consumption models

Many vendors today offer choices in storage consumption models and financing. Having this flexibility of choice will help you to modernize and scale your workloads for future business needs. Common deployment and IT consumption models for storage include:

◉ Traditional purchase model. Most organizations continue to keep select workloads on premises to meet security, compliance and performance requirements. On-premises infrastructure, such as storage, has traditionally been an upfront or leased capital expense in which you purchase infrastructure that is deployed in your data center and will meet your maximum capacity requirements. But budgeting for on-premises infrastructure can be tricky — needs can be difficult to predict, and because procurement cycles for new storage systems can be lengthy, most organizations overprovision their on-premises storage.

◉ Consumption-based (“pay-per-use”) model for on-premises storage. In these models the vendor provides you with storage systems as defined by your requirements, with 25% to 200% (the level varies greatly by vendor) more “growth” capacity than your immediate needs. You buy, lease or rent a committed level of “base capacity” that equates to your immediate needs, and you then pay for what you use, when you use it, above that level (a minimal billing sketch follows this list). These models allow you to scale capacity use up or down as your business needs dictate. They usually have terms of 3 to 5 years.

◉ Subscription-based, or Storage as a Service. Like the consumption-based model above, these models have base commitments and pay-for-use above the base commitment level. The big difference is that Storage as a Service (STaaS) is a service offering much like cloud-based services. STaaS provides fast, on-demand capacity in your data center. You pay only for what you use, and the vendor takes care of the lifecycle management (deployment, maintenance, growth, refresh and disposal). The offering will be based on a set of service level descriptions indicating things such as levels of performance and availability with no direct tie to specific technology models and configuration details. The infrastructure still physically resides in your data center, or in a co-location center, but you don’t own it, install it, or maintain it. In addition, you don’t have to worry about procurement cycles for adding capacity or technology refreshes. You gain cloud economics with an OPEX pricing model, combined with the security of on-premises storage and lower management overhead.

◉ Cloud-only approach. Cloud services are readily scalable and can be easily adjusted to meet changing workload demands. Deploying in the public cloud can reduce spending on hardware and on-premises infrastructure because it is also a pay-per-use model. In the perfect utility-based model, you would pay only for what you use, with guaranteed service levels, set pricing and predictable quarterly charges aligned directly with your business needs. Of course, many of today’s clouds do not meet that standard. In addition to charging for the amount of capacity consumed, some cloud storage providers also include charges for the number of accesses and for the amount of data transferred out of the cloud, referred to as “egress.”

◉ Hybrid approach. A hybrid approach to storage would integrate a mix of services from public cloud, private cloud and on-premises infrastructure, with orchestration, management and application portability delivered across all three using software-defined management. It can also include consumption-based pricing and subscription-based services for on-premises storage.
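
To make the base-commitment-plus-overage idea from the consumption-based and Storage as a Service models concrete, here is a minimal, vendor-neutral Python sketch. The rates and capacities are invented for illustration; real programs differ in billing granularity, minimum terms and how “used” capacity is measured.

```python
def quarterly_bill(base_commit_tb: float, used_tb: float,
                   base_rate_per_tb: float, overage_rate_per_tb: float) -> float:
    """Pay-per-use billing: a fixed charge for the committed base capacity,
    plus a metered charge for any usage above that commitment."""
    overage_tb = max(0.0, used_tb - base_commit_tb)
    return base_commit_tb * base_rate_per_tb + overage_tb * overage_rate_per_tb

# Hypothetical example: 400 TB committed, 470 TB actually used this quarter.
bill = quarterly_bill(base_commit_tb=400, used_tb=470,
                      base_rate_per_tb=20.0, overage_rate_per_tb=28.0)
print(f"quarterly charge: ${bill:,.2f}")   # 400*20 + 70*28 = 9,960.00
```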

Benefits of a flexible consumption model for storage

Now that you know the common types of deployment and IT consumption models, let’s explore a few reasons why consumption-based models are increasingly popular. Benefits of consumption-based pricing for storage include:

◉ Cloud economics – move from CAPEX to OPEX. In a time of shrinking IT budgets, consumption-based pricing allows you to reduce capital spending with predictable monthly OPEX charges, and you pay only for what you use.

◉ Align IT resources and usage. With monitoring included, you’ll be able to understand and more accurately predict your capacity usage for more cost-efficient operations. This means no more overprovisioning or running out of capacity, and instead, you can align spending more closely to the needs of the business.

◉ Gain agility. With consumption-based IT you have extra capacity to provision almost instantly to meet changing business needs. No more delays due to long procurement and vendor price negotiation cycles. You get a cloud-like experience.

◉ Reduce IT complexity. The vendor assumes storage life-cycle management responsibilities, which means your admin staff can focus on higher value tasks. And with consistent data services on-premises and in the cloud, you can improve availability and avoid costly downtime, all while reducing overhead.

◉ Access to the latest innovations in technology. As-a-service models give organizations access to leading storage technology with enterprise-class features for superior performance, high availability and scalability.

Source: ibm.com

Tuesday, 13 July 2021

Data resilience and storage — a primer for your business


Data resilience has become increasingly vital to modern businesses. Your ability to protect against and recover from malicious attacks and other outages greatly contributes to your business success. Resilient primary storage is a core component of data resilience, but what is it exactly?

Read on to get answers to important questions about data resilience and to see how resilient primary storage for your data can help your business thrive.

What is data resilience?

Data resilience is the ability to protect against and recover quickly from a data-destructive event, such as a cyberattack, data theft, disaster, failure or human error. It’s an important component of your organization’s overall cyber resilience strategy and business continuity plan.

Keeping your data — and your entire IT infrastructure — safe in the event of a cyberattack is crucial. A 2020 report by Enterprise Strategy Group found that 60% of enterprise organizations experienced ransomware attacks in the past year and 13% of those organizations experienced daily attacks. Each data breach, according to the Ponemon Institute, can cost an average of USD 3.86 million. By 2025, cybercrime costs are estimated to reach USD 10.5 trillion annually, according to Cybersecurity Ventures.

In addition to combating malicious attacks, data resilience is vital to preventing data loss and helping you recover from natural disasters and unplanned failures. Extreme weather events such as floods, storms and wildfires are increasing in number and severity, and affect millions of people and businesses all over the world each year. In 2018, the global economic stress and damage from natural disasters totaled USD 165 billion, according to the World Economic Forum in their 2020 Global Risks Report.

While the first order of business is to prevent data-destructive events from occurring, it’s equally important to be able to recover when the inevitable happens and an event, malicious or otherwise, takes place.

Your preparedness and ability to quickly respond hinges on where you are storing your primary data. Is the solution resilient? Ensuring your data stays available to your applications is the primary function of storage. So, what are the characteristics of resilient primary storage that can help?

5 characteristics of a resilient storage solution

A resilient storage solution provides flexibility and helps you leverage your infrastructure vendors and locations to create operational resiliency – achieving data resilience in the data center and across virtualized, containerized and hybrid cloud environments.


Characteristics of resilient primary storage include:

1. 2-site and 3-site replication: capable of traditional 2-site and 3-site replication configurations – on premises, in the cloud, or hybrid – using your choice of synchronous or asynchronous data communication. This gives you confidence that your data can survive a localized disaster with very little or no data loss, the measure known as recovery point objective (RPO); see the short sketch after this list.

2. High availability: the ability to gain access to your data quickly, in some cases immediately, which is also known as recovery time objective (RTO). Resilient storage has options for immediate failover access to data at remote locations. Not only does your data survive a localized disaster, but your applications have immediate access to alternate copies as if nothing ever happened.

3. Enhanced high availability: multi-platform support, meaning RPO/RTO options are available regardless of your choice of primary storage hardware vendor or public cloud provider.

4. Immutable copy: creating copies that are logically air-gapped from the primary data and unchangeable (immutable), so that a clean copy survives even if your primary data becomes infected.

5. Encryption: protecting your data from bad actors and guarding against prying eyes or outright data theft.
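
To make the RPO and RTO terms above concrete, here is a minimal sketch in Python that computes both from three illustrative timestamps: the last copy known to have reached a remote site, the moment of the incident, and the moment applications regained access. The timestamps and values are made up for illustration only; they are not measurements from any real system.

from datetime import datetime

# Illustrative timestamps only -- not measurements from any real system.
last_replicated_copy = datetime(2021, 7, 1, 11, 59, 58)  # last copy known to be safe at the remote site
incident = datetime(2021, 7, 1, 12, 0, 0)                # moment the primary site is lost
service_restored = datetime(2021, 7, 1, 12, 0, 45)       # applications running again on the alternate copy

# RPO: how much recent data is at risk -- the gap between the incident
# and the most recent copy that survived it.
rpo = incident - last_replicated_copy

# RTO: how long applications were without access to their data.
rto = service_restored - incident

print(f"RPO: {rpo.total_seconds():.0f} seconds of data at risk")
print(f"RTO: {rto.total_seconds():.0f} seconds until access was restored")

# Synchronous replication drives RPO toward zero; immediate failover
# (high availability) drives RTO toward zero.

Synchronous replication and immediate failover, as described in items 1 and 2 above, are the mechanisms that push these two numbers toward zero.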

How can I ensure my organization has data resilience?


Many organizations have a mix of different on-premises storage vendors or have acquired storage capacity over time, meaning they have different generations of storage systems. Throw in some cloud storage for a hybrid environment and you may find it quite difficult to deliver a consistent approach to data resilience.

A first step is modernizing the storage infrastructure you already have. Fortunately, this is not something that requires you to wait for a lease to expire or for data growth to drive a new hardware purchase. You can get started right away with software-defined storage from IBM on your existing storage from almost any vendor.

IBM FlashSystem® and IBM SAN Volume Controller, both built with IBM Spectrum Virtualize software, will include a Safeguarded Copy function that creates immutable (read-only) copies of your data to protect against ransomware and other threats. This functionality is also available on IBM Storage for mainframe systems.
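
As a rough conceptual illustration only — not the Safeguarded Copy interface, which operates inside the storage system rather than on ordinary files — the toy Python sketch below mimics the idea of a logically separated, read-only, retention-bound copy. The vault path, file names and retention period are all hypothetical.

import shutil
import stat
from datetime import datetime, timedelta
from pathlib import Path

# Toy sketch of the "logically separated, immutable copy" idea using ordinary
# files. Nothing here represents the actual IBM interface; paths and the
# retention period are made up for illustration.
def take_read_only_copy(source: Path, vault: Path, retention_days: int = 30) -> Path:
    vault.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%dT%H%M%S")
    target = vault / f"{source.name}.{stamp}"
    shutil.copy2(source, target)                 # point-in-time copy
    target.chmod(stat.S_IRUSR | stat.S_IRGRP)    # read-only: normal write paths can no longer change it
    expiry = datetime.now() + timedelta(days=retention_days)
    (vault / f"{target.name}.retain-until").write_text(expiry.isoformat())
    return target

if __name__ == "__main__":
    sample = Path("orders.db")
    sample.write_text("example primary data")    # stand-in for primary data
    protected = take_read_only_copy(sample, Path("./vault"))
    print(f"Read-only copy written to {protected}")

The point of the design is that the copy lives apart from the primary data and cannot be altered through normal write paths, so an infection of the primary copy leaves the protected copy untouched.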

Additionally, you can combine the data resilience capabilities of IBM FlashSystem and IBM Spectrum® Protect Plus to create a highly resilient IT infrastructure for on-premises, cloud and containerized environments. IBM Spectrum Protect Plus is available at a special rate when purchasing a FlashSystem 5000 or 5200.

Source: ibm.com

Wednesday, 7 July 2021

Making storage simple for containers, edge and hybrid cloud


As more companies turn to hybrid clouds to fuel their digital transformation, the need to ensure data accessibility across the enterprise—from the data center to the edge—grows.

Often geographically dispersed and disconnected from the data center, edge computing can strand vast amounts of data that could otherwise be brought to bear on analytics and AI. According to a recent report from IDC, the number of new operational processes deployed on edge infrastructure will grow from less than 20% today to over 90% by 2024 as digital engineering accelerates IT/OT convergence.

IBM is taking aim at this challenge with several innovative storage products. Announced today, IBM Spectrum® Fusion is a container-native, software-defined storage (SDS) solution that fuses IBM’s trusted general parallel file system technology (IBM Spectrum® Scale) and its leading data protection software (IBM Spectrum® Protect Plus). This integrated product simplifies data access and availability from the data center to the edge of the network and across public cloud environments.

In addition, we announced the new IBM Elastic Storage® System 3200, an all-flash controller storage system, equipped with IBM Spectrum Scale. The new 2U model offers 100% more performance than its predecessor and up to 367TB of capacity per node.

We are committed to helping customers propel their transformations by providing solutions that make it easier to discover, access and manage data across their increasingly complex hybrid cloud environments. Today’s announcements are testaments to this strategy.

IBM Spectrum Fusion

IBM Spectrum Fusion is a hybrid cloud container-native data solution for Red Hat® OpenShift® and Red Hat OpenShift Data Foundation (formerly known as Red Hat OpenShift Container Storage). It “fuses” the storage platform with storage services and is built on the market-leading technology of IBM Spectrum Scale with advanced file management and global data access.

IBM Spectrum Fusion will be offered in two iterations: a hyperconverged infrastructure (HCI) system, due in the second half of 2021, and an SDS software solution, due in 2022.

The HCI edition will be the industry’s first container-centric hyperconverged system. Although competing HCI systems support containers, most are VM-centric. IBM Spectrum Fusion will be built for and with containers out of the box, running on Red Hat OpenShift. Characteristics of IBM Spectrum Fusion HCI include:

◉ Integrated HCI appliance for both containers and VMs using Red Hat OpenShift

◉ Global data access with active file management (AFM)

◉ Data resilience for local and remote backup and recovery

◉ Simple installation and maintenance of hardware and software

◉ Global data platform stretching from public clouds to on-premises or edge locations

◉ IBM Cloud® Satellite and Red Hat Advanced Cluster Management (ACM) integration

◉ Starts small with 6 servers and scales up to 20 (with NVIDIA GPU-enhanced options for HPC)

“IBM Spectrum Fusion HCI will provide our customers with a powerful container-native storage foundation and enterprise-class data storage services for hybrid cloud and container deployments,” said Bob Elliott, Vice President Storage Sales, Mainline Information Systems. “In today’s world, our customers want to leverage their data from edge to core data center to cloud and with IBM Spectrum Fusion HCI our customers will be able to do this seamlessly and easily.”

In 2022, IBM Spectrum Fusion will also be available as a stand-alone software-defined storage solution.

Next-generation storage built for high-performance AI and hybrid cloud

IBM is also introducing the highest-performing scale-out file system node ever released for IBM Spectrum Scale. Including advanced file management and global data access, the new Elastic Storage System (ESS) 3200 is a two-rack-unit (2U) enclosure with all-NVMe flash that delivers 80GB/s of throughput, 100% faster than the previous ESS 3000 model.

“IBM’s newest member for enterprise-class storage offerings, the ESS 3200 with IBM Spectrum Scale, provides a faster, reliable data platform for HPC, AI/ML workloads enabling my clients to expedite time to results.” – John Zawistowski, Global Systems Solution Executive, Sycomp

This solution is designed to be easy to deploy; it can start at 48TB configurations and scale up to 8YB (yottabytes) of global capacity in a single global namespace that seamlessly spans edge, core data center and hybrid cloud environments. With options for 100Gbps Ethernet or 200Gbps InfiniBand, this system is designed for the most demanding high-performance enterprise, analytics, big data and AI workloads.
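
As a back-of-the-envelope check on these figures (and nothing more), the short Python sketch below estimates how long a full read of one node’s 367TB capacity would take at the quoted 80GB/s, and assumes throughput scales roughly linearly with node count.

# Back-of-the-envelope arithmetic using the figures quoted above:
# 80GB/s throughput and up to 367TB of capacity per ESS 3200 node.
NODE_THROUGHPUT_GB_PER_S = 80
NODE_CAPACITY_TB = 367

def full_read_minutes(capacity_tb: float, throughput_gb_per_s: float) -> float:
    """Minutes needed to read capacity_tb once at throughput_gb_per_s (1 TB = 1000 GB)."""
    return (capacity_tb * 1000) / throughput_gb_per_s / 60

print(f"Reading one full node ({NODE_CAPACITY_TB}TB) takes about "
      f"{full_read_minutes(NODE_CAPACITY_TB, NODE_THROUGHPUT_GB_PER_S):.0f} minutes.")

# Assuming throughput scales roughly linearly with node count, a 10-node
# cluster would sustain on the order of 800GB/s in aggregate.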

Source: ibm.com

Thursday, 1 July 2021

Mission-critical efficiency and resilience at a reduced IT cost


The COVID-19 pandemic has impacted business on a global scale. Organizations face new challenges to ensure sustained performance, seamless integration with hybrid cloud environments, data privacy and continuous delivery of business operations and services. At the same time, they need to find new ways to significantly reduce costs and complexity. Given this environment, organizations are embracing digital transformation so they can meet customer expectations and satisfy the needs of new markets more efficiently.

Amidst all the changes that we have experienced in technology because of the pandemic, IBM Z® and IBM DS8000 have remained the most stable and secure platform to support the most demanding IT workloads for mission-critical applications. Together, they provide a trusted foundation to elevate the customer experience while delivering the reliability and security expectations of today’s world.

IBM z15 Model T02 is the newest entry model in the IBM Z family of servers. It provides an air-cooled single frame with a low entry cost that can easily coexist with other platforms in a data center. The IBM DS8910F provides the same enterprise capabilities offered in the larger IBM DS8900F systems at a lower cost and can be integrated into the same frame as the IBM z15 Model T02, delivering a powerful and affordable end-to-end solution in a single 19-inch industry-standard rack. This unique IBM integration is the result of years of research and collaboration between the IBM server and storage teams, working together to provide unmatched business value:

◉ Improve the customer experience with ultra-low application response times. IBM zHyperLink technology accelerates access to data, allowing huge volumes of IBM Z transactions to be processed faster and delivering storage latencies as low as 18 microseconds for mission-critical workloads. This represents a 10x improvement compared to high-performance FICON (zHPF). In addition, zHyperLink can be integrated with IBM HyperSwap to meet the needs of high performance in high-availability environments.

◉ Maximize the uptime of IBM Z enterprise servers with “seven 9’s” availability and multi-site replication for disaster recovery. HyperSwap technology provides automatic failover with no data loss within metropolitan distances, and at distances of more than 1,000 miles, IBM Geographically Dispersed Parallel Sysplex (GDPS®) enables fast recovery of business operations with an RPO of 2 to 4 seconds and an RTO of less than 60 seconds.


◉ Significantly reduce the adverse impact of cyberattacks and other data corruption incidents. IBM Z® Cyber Vault combines analytical, diagnostic and recovery tools with IBM DS8000 Safeguarded Copy and services to deliver a unique end-to-end data resilience solution that no one else in the industry can provide. Specialized software constantly monitors data, detects corruption and isolates data that is compromised. Clean point-in-time copies are continuously captured and stored in a logical partition that is isolated from the production environment, providing the ability to recover quickly and confidently from logical corruption events using this air-gapped, trusted source of data.

◉ Reduce CAPEX and OPEX by consolidating transactional, cloud-native and business intelligence workloads in a single rack. This is especially important for smaller customers in co-location data centers, where they pay explicitly for each tile of floor space they use.

Flexibility, responsiveness and cost reduction drive the development of innovative solutions that accelerate digital transformation. By integrating the IBM z15 Model T02 with the IBM DS8910F in a single frame, organizations can now get to market faster and quickly adapt to changing situations, while avoiding security risks and more complex migration challenges.

Source: ibm.com