Saturday, 29 February 2020

IBM

Accessibility Strengthens the User Research Practice

Inclusive design is a buzz term in the design community. It can be interpreted in many different ways and can have numerous outcomes. When narrowing the scope to user research, though, how does inclusive design make an impact?

As a user researcher, it is critical to talk with people who use various offerings to understand their pain points and uncover areas of opportunity. This seems like a simple task, but it requires a great deal of organization and preparation to run a successful study. The most important piece of this preparation work is finding individuals with diverse abilities to interview and test with. The simple act of communicating with someone with a different set of skills or abilities can surface major accessibility concerns that should be addressed in the earliest design phase of a project.

The insights captured by talking with a diverse group of participants are what create inclusive design within the user research practice.

Uncovering User Insights


Uncovering accessibility insights from a diverse group of end users creates several opportunities for the entire offering team. The first opportunity is a perspective shift. End users all bring different backgrounds and challenges to the table. Inviting participants with a broad range of experiences gives the research insights more impact when they are implemented in the product. Varied perspectives reveal more facets of an initial product issue and offer more guidance for creative problem solving on the user's behalf. The more thoughts and observations gathered, the better the offering team will understand what needs to be changed or added to the product.

Resource Saving


The second opportunity is resource saving. When individuals with diverse abilities are included in the upfront research stage, the researcher is far more likely to catch accessibility needs. By communicating a user's accessibility concerns to the offering team, the team can save time by implementing fixes in its current cycle instead of facing overwhelming amounts of rework later on. Accessibility insights can easily be translated into design prototypes, for example by checking color contrast, adding ARIA labels and redesigning flows, before developers code the design. Fixes of that nature done up front save time and resources compared to accessibility fixes at the end of a development cycle. By shifting accessibility left in the development timeline, teams avoid redoing completed work and can give the saved time back to employees to tackle backlog items.
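To illustrate one of those prototype-stage checks, here is a minimal sketch of an automated color-contrast test in Python. It implements the WCAG 2.x contrast-ratio formula; the color pairs are hypothetical examples, and 4.5:1 is the WCAG AA threshold for normal text.

```python
# Minimal WCAG 2.x contrast-ratio check (illustrative sketch only).

def linearize(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple) -> float:
    """Relative luminance of an sRGB color."""
    r, g, b = (linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Dark gray text on white passes WCAG AA (>= 4.5:1)...
print(round(contrast_ratio((51, 51, 51), (255, 255, 255)), 2))     # ~12.63
# ...while light gray on white fails and gets caught before any code is written.
print(round(contrast_ratio((170, 170, 170), (255, 255, 255)), 2))  # ~2.32
```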

Expanded Reach


The third opportunity is to have world-class products reach more people. IBM creates incredible products that benefit everyone. Wide-ranging perspectives, abilities and skill sets should be introduced at the beginning of the product lifecycle to ensure inclusive design.

Friday, 28 February 2020

Cloud innovation in real estate: Apleona and IBM rely on new technologies

Digitization does not stop at the proverbial concrete gold — real estate. In fact, the real estate industry is on the move. Companies are realizing the benefits of digital transformation and are capitalizing on the power of new technologies such as cloud, AI and blockchain.

Take, for example, Apleona GmbH, one of Europe’s largest real estate service providers. The German company has more than €2 billion in sales and manages properties in all asset classes in more than 30 countries. Apleona recognizes that it can broaden the established facility management model by strategically partnering with its corporate clients to transform and digitize their offerings. To this end, Apleona has been working with IBM since 2017 in a digital partnership that is continually expanding. The company has already implemented a range of innovative ideas, including the following new applications:

◉ Room Booking is an easy, fast way to book office conference rooms without spending too much time in mobile apps. The mobile terminal's visual representation also displays which rooms are occupied or vacant, so you can quickly find available spaces in the building.

◉ With Smart Ticketing, office workers can use a mobile app to create a “ticket” in just a few clicks if they notice a problem such as a defective device in the office. With this simple and straightforward solution, employees can track the ticket’s processing through the app. IBM Watson AI services even allow them to “spot” problems so that a categorized ticket is created automatically.

◉ Finally, Energy Pods are convenient mini-booths for “power naps” in the office, optionally accompanied by music or guided meditations. Employees can recharge in these stylish pods and even book them with a mobile phone. Four of the units are in use and are very popular at the IBM headquarters; more are planned at other locations.


New technologies and agile methods are key to innovative solutions


Apleona team members and experienced IBM IT consultants co-created the new solutions in the IBM Garage in Munich, Germany. Most of the applications are hosted on IBM Cloud in the Frankfurt data center, but the customer-oriented flexibility of the IBM hybrid multicloud platform makes the applications available across platforms.

The Apleona and IBM team capitalized on existing real estate and facility management data in inventive ways to develop new offerings. To create smart environments that optimally adapt to people’s needs, the team used data analytics, cognitive automation, edge computing and AI. These technologies, in connection with sensor technology and the cloud, made it possible to process and transfer data streams.

The forward-thinking and agile principles of the IBM Garage Methodology also helped Apleona succeed in its digital transformation. Apleona and IBM continue to follow this methodology at the IBM Watson IoT Center, which serves as the team’s co-creation space. This location even boasts the Room Booking solution.

Expanding offerings and scaling solutions


While the real estate industry has recognized the general benefits of digitization, critical voices are still asking for detailed business cases and quantifiable cost savings. How do you quickly scale the new solutions? “In our industry, innovation and monetization must always be thought of together,” explains Dr. Jochen Keysberg, Chief Executive Officer at Apleona. “That’s why it’s so important that new ideas, such as those developed with IBM, always have competitive advantages, efficiency improvements and better resource utilization in mind.”


The real estate industry cannot shy away from digitization, and like Apleona, other facility managers will have to adopt modern solutions to succeed in the marketplace. “Simply start, develop in an agile way and, at the same time, bring the required expertise to the table. Then quickly validate and test with the intended users. In our experience, it usually works in quite a straightforward way,” explains Stefan Lutz, General Manager and Services Transformation Program Leader at IBM.

Indeed, new customers, including a major German bank and global companies in the technology, energy and automation industries, are already relying on the innovation of Apleona and IBM. In addition, other IBM locations will soon use Apleona’s cutting-edge solutions, and more applications will roll out in 2020. The joint team is excited to see the offerings expand and the solutions scale.

Source: ibm.com

Thursday, 27 February 2020

What’s behind data preparation for AI?


Data is a fundamental element of AI. If you have no data, you won’t get any meaningful insights from the AI techniques out there. Luckily, we live in a world that generates tons of data every day, and storage has become so affordable that we can now put zillions of bytes of data to use with AI, right? Well, not quite. Having lots of data isn’t enough to help you play well in the AI game. You have to know how to work with it.

Consider this example: Imagine your company is doing market research for a global client in the consumer goods industry. The client wants to use AI to analyze trends in consumer behavior, and it has sales data on its products from supermarkets in many countries. However, you quickly run into a challenge: one product — say a detergent — can have a different brand name in almost every country. How can your AI model provide meaningful insights in such a situation?

This problem is just one example of a scenario in which data is not ready to be used, even though you have it. There are many other issues that might arise. For example:

◉ Data sources might use different formats; for instance, one market chain stores data as CSV files while others use Excel spreadsheets

◉ Different constants could mean the same thing; for instance, some sources could use M / F for gender while others use Male / Female

◉ Some sources might be missing some information that others collect; for instance, state and city

All of these are details you have to attend to when dealing with data, and they're why data preparation is so important before you can begin AI analysis.

Data preparation


It's often said that 80 percent of a data science project's lifecycle is spent on data preparation. This is because a data scientist needs to clean the data before it's used in an AI model.

The data preparation process may include filling in missing values (but with what? a default value? something else?), removing duplicate entries, standardizing attributes (gender or product names, for instance), masking sensitive data where required by law, and more. Deciding which part of the data should be used for training a model, which for testing and which for validation is also important (this is called data partitioning). Otherwise, your model will suffer from issues during its training phase, as we'll discuss in future blog posts.
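To make this concrete, here is a minimal sketch of a few of those steps using pandas and scikit-learn. The file name, column names and fill strategy are hypothetical illustrations, not part of any specific IBM tool.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical sales data exhibiting the issues described above.
df = pd.read_csv("sales_eu.csv")

# Remove duplicate entries.
df = df.drop_duplicates()

# Standardize attributes: map M/F and Male/Female to one convention.
df["gender"] = df["gender"].str.strip().str.upper().map(
    {"M": "M", "MALE": "M", "F": "F", "FEMALE": "F"}
)

# Fill missing values. The right default is a modeling decision, not a
# mechanical one; here we use the median price within each product group.
df["price"] = df["price"].fillna(
    df.groupby("product")["price"].transform("median")
)

# Partition the data: 70% training, 15% validation, 15% test.
train, rest = train_test_split(df, test_size=0.3, random_state=42)
valid, test = train_test_split(rest, test_size=0.5, random_state=42)
```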


Figure 1: The lifecycle of a data science project

Data scientists report that the task of data preparation is often tedious and error-prone. It is, however, the most important step for ensuring more accurate insights from AI. If you teach children a false answer in school, they'll give a false answer when they encounter a similar problem in real life. AI follows the same rationale: if an algorithm learns a wrong answer from incorrect, imprecise data, its insights will not be useful and might even point in the wrong direction.

If I were to give you one hint about the AI game, it would be this: invest in data preparation!

Tools that can help


So, you might be wondering, are there tools to help data scientists prepare data? Of course! Tooling exists to help with performing data transformation, deduplication, filtering, aggregation, partitioning, visualization and more. IBM Watson Studio Local and IBM Cloud Pak for Data include tools to aid with data preparation. For example, an IBM POWER9 cluster can perform data preparation with Watson Studio Local, as well as training and inferencing with Watson Machine Learning Accelerator.

Whichever tools you decide to use, have some goals in mind:

◉ Automation of data preparation: Can the tool handle all required data operations?

◉ Ability to work with raw data that has never been formatted

◉ Scalability: Data sets can be very large, so your tool needs to handle that, especially in today’s world of a zillion bytes of data

◉ Collaboration across business units, because you don’t want to create more data silos

◉ Ease of use: How easily can this tool perform the operations you need and connect to the data sources you have?

◉ Use of new data sources for more current data to be prepared and used, thus allowing AI models to identify trending changes more quickly

◉ Interoperability with cloud solutions

Data preparation is by no means simple, and there are many more nuances to it than I described in this blog post. I hope, though, that this has helped you understand how important data preparation for AI is. In my next post, I’ll talk about model training.

Source: ibm.com

Wednesday, 26 February 2020

Why IBM Power Systems means business for SAP


As SAP customers consider the hybrid cloud model for their mission-critical applications, they need flexibility and choice in deployments. IBM Power Systems is known for its flexibility, backed by the largest virtualized server scalability for SAP HANA. Today, we are further extending this flexibility with the announcement of SAP HANA Enterprise Cloud on IBM Power Systems.

To enable this, SAP will place IBM POWER9 servers in SAP HANA Enterprise Cloud (SAP HEC) data centers to give its clients the ability to run their workloads wherever they want across the private or hybrid cloud model. The availability of IBM Power Systems in an SAP HANA Enterprise Cloud environment is a great testament to the value that IBM Power Systems brings to SAP clients on their digital transformation journey.

Following last year's announcement that IBM POWER9-based systems are available in the IBM Cloud, this continues our effort to allow clients to tap into IBM Power Systems technology to handle the workloads of their choice in the deployment model of their choice. With IBM Power Systems made available in public cloud environments, customers can deploy mission-critical applications in a hybrid cloud model, getting the best of both on-premises and cloud-based deployment approaches.

SAP is playing an important role in enabling clients on their enterprise cloud journey. The SAP HANA Enterprise Cloud (HEC) is a fully scalable, secure service designed to accelerate cloud readiness. The offering provides capabilities that span the entire software and hardware stack, and it provides the level of control clients expect on premises in one privately managed environment. SAP HEC on Power Systems will offer enhanced scalability and availability for mission-critical SAP applications by allowing users to scale up their HANA applications in a private, managed POWER9 instance.

For more information about IBM Power Systems for SAP HANA, please visit us here.

Source: ibm.com

Tuesday, 25 February 2020

To cloud or not to cloud . . . that shouldn’t be the question


In my discussions with clients about their journey to cloud, it's becoming evident that many of them view cloud as a goal, instead of looking at cloud deployments as a capability. In some cases, line-of-business owners are under corporate pressure to “adopt” a cloud-first strategy.

A recent Forrester Consulting study commissioned by IBM, “The Key To Enterprise Hybrid Multicloud Strategy,” suggests that “75% [of 350 surveyed global decision makers] have received pushback while advocating for strategies outside of cloud environments.” The unfortunate result is a lack of continued investment in their on-premises (“on-prem”) environments.

From my experience, this type of deployment focus may not yield the expected results. We shouldn’t think about problem solving as “To cloud or not to cloud?” Instead, we should ask ourselves, “What is the problem I’m trying to solve?” and “Are cloud deployments (public or private) going to optimize my solution?”

One of the key recommendations in the Forrester Consulting study is to “invest in cloud using a strategy that aligns to your context.” I couldn’t agree with this more. So, let me offer an example of context:

Problem: My mission-critical workloads need a disaster-recovery solution to ensure I can get access to my data in a reliable manner, should the need arise. Proverbially, I don’t want to put all my eggs in one basket. I also don’t have the capacity to keep a second copy of all this data on premises.

Solution: Disaster recovery is one use case where public cloud can be a good fit. In such a scenario, data residing primarily on premises could be tiered to the cloud for backup storage, with near real-time mirroring to the on-premises environment to help provide data consistency.

Hybrid workflow: To design and optimize such a solution, you need software that has been designed for a hybrid use case. “Designed for” means the software provides the same consistent management interface you're used to, both in your existing on-premises environment and in the public cloud. This allows you to have a cohesive view of your solution, regardless of the deployment model.

Defining chapter 2 in the journey to cloud


Forrester's research clearly shows that “organizations that can bring together on-premises with public cloud strategically will be best positioned for operational excellence.” At IBM, we refer to this more complex combination of on premises and cloud as Chapter 2. To understand this better, let's look at the components of the journey:

Born on the cloud – These are cloud-natively built applications and services that use cloud infrastructure tools and microservices to run the application.

Lift and shift – Applications that were originally designed and written to run on premises are ported to run on cloud infrastructure. In many situations, while this may check the box of cloud deployment, you may not see the true operational and scale value of this approach.

Refactored (modernized) – Applications that were originally designed for on-premises consumption are rewritten using cloud microservices architectures, or perhaps moved into a containerized environment. Your app may not be born on the cloud, but it'll act like it was!

Regardless of where you are on your journey to cloud, I’d encourage keeping an eye on your on-premises infrastructure. And by journey, I mean crawl (lift and shift), walk (modernize), and run (hybrid workflows).

Be prudent in your on-premises investment, with an eye towards hybrid!


Nurturing your on-premises environment is critical for a long-term sustainable strategy, and it also protects your prior investments. The smarter way to think about cloud is to focus on optimizing your environment in line with your specific use cases. Future-proof your on-premises investments by ensuring they are designed with a hybrid workflow in mind. Does that mean all workflows are hybrid? Not at all. Remember, context (also known as use case) is key!

Here at IBM Storage, we recognize the challenges you face and have equipped the IBM FlashSystem family with software that provides you with an onramp to the cloud. A great example of this is IBM Spectrum Virtualize, an award-winning software foundation that provides simplified storage across heterogeneous environments. With Spectrum Virtualize on premises and IBM Spectrum Virtualize for Public Cloud, you can have a single cloud solution for your storage needs, rather than a siloed approach with different solutions from different vendors.

Furthermore, the beauty of it is that you can further optimize your hybrid workflow by leveraging software like AI-infused IBM Storage Insights to monitor your data, regardless of whether it resides on premises or in the cloud!

Monday, 24 February 2020

Key security considerations for Linux on IBM Power Systems


One of the greatest challenges in the computer industry is reducing the risk of cyberattack. Cyber criminals are constantly developing new methods to infiltrate and attack organizations by circumventing computer security. Key to reducing cybersecurity risk is utilizing a “defense in depth,” or multilayered, approach. IBM Power Systems and the POWER9 processor are designed to facilitate this approach by providing different layers of security protection, including security for the hardware, operating system, firmware and hypervisor, plus security tooling like IBM PowerSC.

In this post, we’ll focus on some key security recommendations related to Linux on IBM Power Systems for you to consider utilizing as you move to IBM POWER9.

1. OS boot security improvements on OpenPOWER systems


Systems can easily be configured to boot a compromised OS kernel if no measures are taken to ensure kernel integrity. Two new firmware features are being developed that are designed to improve the security of booting non-virtualized operating systems on OpenPOWER hardware.

The first is called secure boot. Secure boot — or verified boot — checks that OS kernels are valid before allowing them to boot. OS providers supply OS kernels that they sign cryptographically. When system administrators install OS kernels, they also install corresponding kernel verification keys into protected system flash storage. Before the bootloader boots a selected kernel, it uses one of the verification keys to check the kernel against the original kernel signature. The bootloader boots the kernel only if the check succeeds, thus preventing unvetted kernels or modified kernel images from booting.
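Conceptually, the bootloader's check behaves like the sketch below, written in Python with the cryptography package purely for illustration. RSA with PKCS#1 v1.5 padding and SHA-256 are assumptions made for simplicity; the actual firmware implementation and key formats differ.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def kernel_is_valid(kernel_image: bytes, signature: bytes,
                    pubkey_pem: bytes) -> bool:
    """Return True only if the kernel matches the installed verification key."""
    key = serialization.load_pem_public_key(pubkey_pem)
    try:
        key.verify(signature, kernel_image,
                   padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Boot policy: refuse to boot anything that fails verification.
# if not kernel_is_valid(image, sig, installed_key_pem):
#     abort_boot()
```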

The second we call trusted boot. Trusted boot securely stores a cryptographic hash of a kernel image before it boots, which provides an indelible record of precisely which kernel booted for future assessment. The bootloader takes a cryptographic hash of the kernel image, records it in an event log, and uses it to update the state of a register in the Trusted Platform Module (TPM) called a Platform Configuration Register.
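The measure-and-extend step follows standard TPM semantics: a PCR can only be extended, never overwritten, so its final value is a tamper-evident digest of everything measured along the boot path. A minimal sketch, with a hypothetical kernel path:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM PCR extend: new value = H(old value || measurement).
    return hashlib.sha256(pcr + measurement).digest()

pcr = bytes(32)  # SHA-256 PCR banks start zeroed at power-on
event_log = []

# Hypothetical kernel image path, for illustration only.
kernel_image = open("/boot/vmlinuz", "rb").read()
kernel_hash = hashlib.sha256(kernel_image).digest()

event_log.append(("kernel", kernel_hash.hex()))  # record of what booted
pcr = pcr_extend(pcr, kernel_hash)               # indelible PCR update
```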

A prominent use case for trusted boot is remote attestation. After a system boots, a second system can check which kernel booted on the first system by requesting its event log and TPM-signed Platform Configuration Register set. The second system can then use this data to appraise the first system's state before continuing to interact with it.

Firmware secure boot and trusted boot are already enabled on IBM POWER9 systems. We anticipate that OpenPOWER OS secure boot and trusted boot will become available for select Power Systems in a future firmware update.

2. Cybersecurity profiles available with PowerSC graphical user interface


IBM PowerSC is a suite of cybersecurity tooling for IBM Power Systems. The “S” in PowerSC stands for “security,” and the “C” stands for “compliance.” There are several different tools in this suite. One of them, the PowerSC graphical user interface, is a web browser-based interface offering centralized security configuration, management, monitoring and reporting. It provides the ability to deploy a security profile (a set of operating system security settings) to multiple systems from the web browser-based centralized management server.

For Linux on Power endpoints (excluding those running in big endian mode on Red Hat Enterprise Linux [RHEL] 7), PowerSC can provide security hardening for SUSE Linux Enterprise Server (SLES) 12 SP3 and Red Hat Enterprise Linux Server 7.4. PowerSC currently offers Linux security hardening profiles that support PCI DSS and GDPR compliance obligations. The goal of these two profiles is to help clients address the subset of their compliance requirements that relate to operating system security hardening. It's also possible to create customized profiles that include any portion or combination of the two.

3. SLES Security Hardening Guide for the SAP HANA® Platform


If you run the SAP HANA platform for Linux on Power, you should be aware of system hardening advice from SUSE. SUSE provides the “Operating System Security Hardening Guide for SAP HANA,” which offers operating system security hardening recommendations for SLES 11 hosts running SAP HANA, as well as the “Operating System Security Hardening Guide for SAP HANA for SUSE Linux Enterprise 12,” which is updated for SLES 12 hosts. Both guides provide numerous recommendations for improving the security of your operating system environments when running SAP HANA on SLES, and SUSE has recently released a hardening guide version for SLES 15 as well. It's our opinion that most of the recommendations in these guides can also be used to reduce cybersecurity risk when running SAP HANA on Red Hat Enterprise Linux.

4. Cryptographic enhancements


Poorly performing cryptography can be an impediment to the protection of sensitive data, both in transit and at rest. OpenSSL in RHEL 8, SLES 15 and Ubuntu 19.04 utilizes Vector Multimedia Extension (VMX) cryptographic support instructions available with POWER9. This new support is designed for better performance of cryptographic operations for the following algorithms:

◉ AES

◉ ChaCha20

◉ ECC Curve NISTZ256

◉ ECC Curve X25519

◉ GHASH

◉ Poly1305

◉ SHA2

◉ SHA3

Also, the Linux kernel contains VMX acceleration for these algorithms:

◉ AES

◉ GHASH

And, Golang incorporates VMX acceleration for these algorithms:

◉ AES

◉ MD5

◉ SHA-2

◉ ChaCha20

◉ Poly1305

Similar updates are currently in progress for additional upstream projects including libgcrypt, NSS and GnuTLS, and IBM continues working to improve the performance of cryptography on POWER9 in these and other open source projects.

Defend yourself against cyberattack


An organization can never reduce its cybersecurity risk to zero. Reducing cybersecurity risk is a never-ending process of adapting your security measures to a constantly changing cybersecurity landscape. However, thoughtfully and carefully implementing a defense-in-depth cybersecurity strategy may make the difference in preventing your organization from experiencing a breach. This post has recommended four layers of security that can help you take meaningful steps towards a robust defense-in-depth implementation.

IBM Systems Lab Services provides a Linux Security Assessment for SAP HANA. This consulting service is the first step in understanding what it takes to implement defense in depth for Linux systems. Lab Services also provides a PowerSC proof of concept service that assists organizations with installation, configuration and administrative best practices when using PowerSC with AIX, Linux or IBM i.

Friday, 21 February 2020

Innovate with Enterprise Design Thinking in the IBM Garage


We’ve all been there. You have an amazing idea that’s really exciting. Maybe it’s a home improvement project, or perhaps it’s a new business idea. You think about all the details required to make it real. But, once you get to the seventh action item, you’re not so excited anymore.

Sometimes when we realize the weight of the effort needed for our big ideas, it can crush our original excitement and momentum.

This is the crux of many failed initiatives.

So how can you move forward?

How to apply Enterprise Design Thinking and Lean Startup


Enterprise Design Thinking enables teams to think beyond what they consider possible and find truly innovative ideas. It’s about thinking big.

Lean Startup and a minimum viable product (MVP) are about thinking in small steps. What’s the smallest thing you can build efficiently to learn more about your biggest risk?

Combining the “bigness” of Design Thinking and the “smallness” of Lean Startup propels teams towards real solutions, but it can also trip up many teams. If you’re too focused on MVPs, you won’t come up with big ideas. If you’re too focused on big ideas, keeping an MVP to something that’s truly minimum is very challenging.

The key is that you can’t treat these as two separate exercises. They must be integrated seamlessly into one process. This lets teams think big but act small.

How IBM Garage Design Thinking Workshops help guide the journey


At the IBM Garage, our experts guide clients on their journey starting with a crisp definition of the opportunity they want to tackle. We then assemble a diverse group of stakeholders and users and bring them together for a two-day IBM Garage Design Thinking workshop.

Enterprise Design Thinking: Think big. Once assembled, it’s time to think big. In typical Enterprise Design Thinking style, we unpack the opportunity to find the part of the problem we want to tackle first — the part that once solved will have the most impact on the users and thus the business. Then we use the diversity in the room to find an array of innovative solutions to the problem, generally exploring more than 100 ideas before we focus in on the one with just the right balance between do-ability and awesomeness.

That right balance is different in every case, which is why having the right team of stakeholders and IBM Garage experts assembled is crucial. Technology is evolving so quickly that any one person’s notion of what ideas are and are not feasible is probably wrong. You need the team to be willing to proceed with the right idea, even if that idea initially looks risky.

Lean Startup: Find the approach. Day 1 of an IBM Garage Design Thinking workshop is about using Enterprise Design Thinking to think big. Day 2 is about applying a Lean Startup approach to drive that big idea to the right MVP.

First, we look at the vision and ask: “Are you confident enough in every aspect of this vision to be willing to jump in completely and invest whatever it takes to build it?”

If we really thought big on Day 1, the answer will almost always be, “no”.

Next, we explore all aspects of the vision that are holding the team back. For example, do they worry the market isn’t ready for the idea? Will the company legal team approve the project? Can we design something simple enough to allow the idea to reach the right audience?

Now, focusing on the biggest risk that the team wants to learn more about, we define a testable hypothesis, and identify the smallest thing needed to be able to test that hypothesis.

How to test the MVP solution


Some hypotheses can be tested without any coding, and if that’s the right MVP, of course, we do that. But the Garage has a bias toward building production pilots — we believe the best way to learn is by putting something real in the hands of real users.

Figuring out how to get something valuable into users' hands in, typically, six to eight weeks requires as much creative thinking as identifying the big idea. This is why the IBM Garage views Enterprise Design Thinking and Lean Startup as two parts of a single method, not two separate phases of a project.

Client case study example: Mueller, Inc.


Let's look at a real client example: Mueller, Inc., a manufacturer of steel buildings and components.

On day one of the Garage Workshop, we arrived at a vision. The team wanted to build a mobile ordering tool to empower contractors to make accurate materials quotes and place an order, all while on the job at a building site. The vision was straightforward, but it was a huge, innovative step for their business.

We knew building the app was possible, but there was some cost-prohibitive data normalization and integration required to make it happen.

In defining the MVP, the team made the critical decision of restricting the scope of the application to only those parts needed for a single type of project. This allowed the team to limit the amount of data normalization needed and get something useful into production.

Within two days of going live, Mueller was transacting real sales through the app.

The MVP provided value to real customers by enabling them to complete order requests faster and proved that such a solution had market value. Plus, the MVP app gave the Mueller team a better understanding of how to normalize their data. All of that in about eight weeks. The perfect MVP.

That’s the power of combining Enterprise Design Thinking with Lean Startup. That’s what the IBM Garage can do for you.

Source: ibm.com

Thursday, 20 February 2020

Driving innovation for connected cars using IBM Cloud


Cars have always been built for travel, but the experience of driving has changed dramatically over the last few decades. Today’s connected cars are not only equipped with seamless internet, but usually have a wireless local area network (WLAN) that allows the car to access data, send data and communicate with Internet of Things (IoT) technology to improve safety and efficiency and optimize performance. The car’s connectivity with the outside world provides a superior in-car experience with on-the-go access to all the features one might have at home.

Traditionally, the networks supporting this robust connectivity, unlike cars, have not been built for travel. Data is stored in a home network in a local data center, which causes latency issues as the car travels and massive amounts of data are transferred across borders. In addition, privacy legislation like the General Data Protection Regulation (GDPR) limits the transfer of personal data outside the EU, which not only creates a poor user experience on the road but can impact safety-related IoT insights.

We at Travelping saw an opportunity to use cloud-native technologies for networking to help the automotive industry negotiate the challenges of cross-border data management regulations and improve latency issues for auto manufacturers looking to gain real-time IoT insights.

Road less traveled is most efficient


Travelping develops carrier-grade cloud-native network functions (CNFs) that are used to design software-defined networking solutions. Using IBM Cloud infrastructure products and IBM Cloud Kubernetes Service, we created a cloud-native solution that transports data directly to the vehicles, eliminating latency issues while fulfilling requirements for GDPR. We had strict technical requirements for our IT infrastructure and chose IBM Cloud for several reasons. IBM has a global footprint, which was key for us to provide networking capabilities in the cloud and better manage compliance with GDPR and European Data Security Operation laws, which was not possible on other clouds. Many clouds in the field are what we call north-south clouds. They terminate web traffic. Our solution forwards the traffic for our mobile users — what we call east-west traffic. IBM Cloud is the only one that still allows us to transport data from node to node in a network, and not just terminate it.

For us, one of the biggest advantages in choosing IBM Cloud, in addition to all the automation and speed, is that as a team of 30 people, we can deliver globally on a cloud platform that is deployed globally. And we don’t need to invest a penny for that; we can utilize computer resources that are virtually everywhere.

Software-defined networking is a radical change in the way networking is approached today, as it brings the entire software development ecosystem close to the network, allowing operators to integrate all the network resources into the application domain. We moved to IBM Cloud Kubernetes Service and container deployment because you get an environment where you can run rather simple services with five-nines (99.999 percent) service availability. And it's a five-nines environment that you get mostly for free by following Kubernetes and cloud-native principles. With Kubernetes, there's a common API. It works on private cloud and private deployments, but it also works in public clouds. You are totally agnostic, from developer notebook to private cloud deployments to edge deployments. You deploy in exactly the same way again and again. And this is only possible with Kubernetes.
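As a sketch of that “common API” point: with the official Kubernetes Python client, the identical Deployment can be applied to a developer notebook, a private cloud or a public cloud simply by switching kubeconfig contexts. The context names and container image below are hypothetical.

```python
from kubernetes import client, config

def deploy(context: str) -> None:
    """Apply the same Deployment to whichever cluster `context` points at."""
    config.load_kube_config(context=context)
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo", labels={"app": "demo"}),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo"}),
                spec=client.V1PodSpec(containers=[client.V1Container(
                    name="demo", image="registry.example.com/demo:1.0")]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment)

# The call is identical regardless of where the cluster runs:
# deploy("dev-notebook"); deploy("private-dc"); deploy("public-cloud-frankfurt")
```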

Promise of 5G


For our industry, there's a promise of 5G, and that cannot be fulfilled by the carrier alone anymore. There needs to be trust between operators and cloud providers to deliver a distributed infrastructure. Operators trust software vendors like us to create services for them. The whole 5G promise needs to rest on more shoulders than it does at the moment, so that's a bit of a paradigm shift. It's the first time in the mobile industry that we have had this shift. We need to create another infrastructure for communications services in the field, and that needs to be distributed; the cloud is the foundation for that. You don't need to mount telecommunications equipment in owned data centers anymore because 90 percent of the spec is available in the cloud. You can book resources wherever you want to go. And this is a huge advantage: global carriers or local carriers can act globally and fulfill local regulations. A company from Germany can deploy in South Korea, as we have done on IBM Cloud. This was not possible in the past, but it's possible today with cloud resources. In our experience, especially in Europe, IBM plays a role because it is a trusted partner of big customers, and therefore the entry was relatively easy for us.

Wednesday, 19 February 2020

Accelerate AI workloads with IBM Storage


Artificial intelligence is on the rise.

Gartner's 2019 CIO Agenda survey shows that between 2018 and 2019, the share of organizations that had deployed AI grew from 4 percent to 14 percent. This increasing adoption demonstrates the value of the data insights and resulting business gains that AI can offer.

AI has come a long way since its earlier applications in nuclear and genomic sciences. Today, it can be used in numerous industry use cases, including:

◉ Marketing: Retargeting a client or providing product recommendations to an existing client

◉ Sales: Predictive sales patterns, lead generation, chatbots and mail bots acting as sales representatives

◉ Operations: Predictive maintenance, manufacturing analytics

◉ Customer service: Call analytics, response suggestions, automated claims processing

◉ Data security and personal security: Cybersecurity and fraud detection, AI-powered autonomous security systems

◉ Healthcare: Early diagnosis, image insights

◉ Automotive: IoT and GPU advancements that make self-driving cars possible

The earlier you jump on the AI strategy train, the more competitive and innovative your business can be.

As you’re planning for AI adoption, it’s important to note that infrastructure, specifically storage, can be a key accelerator in any AI setup. Every AI or machine learning use case depends on data and the inferences we draw from that data. This data, as part of the AI process, goes through various stages in the AI workflow, and each stage has its own distinct I/O characteristics. Having a storage solution that not only supports these diverse I/O requirements but also provides easy data management is essential.

IBM Storage solutions stand out in addressing the kinds of challenges that can surface in an AI workflow, helping you balance cost and performance as you introduce more AI capabilities to your business.

Storage for every stage of the AI workflow


The typical AI workflow has four major processes:

◉ Ingest

◉ Classify/prepare/transform

◉ Analyze/train

◉ Inference/insights

Each stage requires different storage capabilities to achieve the desired results. Essential capabilities for a successful AI implementation include scalability, speed, cost and flexibility.

Ingest: Data ingestion is the process of obtaining and importing data for immediate use from anywhere. It can be challenging for businesses to ingest data at a reasonable speed. For successful data ingest, you need a storage solution capable of providing the required scalability, throughput and ability to ingest data from heterogeneous sources — and at an affordable cost.

Classify/prepare/transform: After ingest, automated classification, preparation and transformation (if any) can reduce the complexity of managing data received from heterogeneous sources. One way of doing this is a policy-based solution that helps reuse the same data for multiple AI use cases.

Analyze/train: Data analysis and training is a highly iterative process, and each iteration, depending on the data size, may take a long time to complete. A storage solution with low latency can help you complete an AI training cycle in less time.

Inference/insights: The final step is getting valuable insights out of the data, which can help a business in achieving the desired competitive advantage. Speed therefore is again an important aspect of the storage solution.

How does IBM Storage for AI stand apart?


IBM offers a wide portfolio of software-defined storage solutions to address requirements at each stage of the AI workflow. These solutions not only provide an intelligent information architecture but also enable efficient data management to accelerate the data pipeline from ingest to insight.


Figure 1: IBM Storage portfolio designed and optimized to serve the unique requirements of different stages—from ingest to insights and beyond.

IBM Spectrum Scale

IBM Spectrum Scale is a proven software-defined offering that's POSIX compliant, provides parallel access to data and helps minimize data movement across the different stages of AI. Spectrum Scale provides multiple ways to access data in a single global namespace, which is useful when data ingest happens from heterogeneous sources globally. Spectrum Scale moves data hassle-free and as quickly as possible across tiers, with extremely low latency and high throughput. It is also available as an appliance, IBM Elastic Storage Server, providing tremendous throughput from a single unit.

IBM Cloud Object Storage

Large data sets for AI require storing a massive amount of data for long durations. IBM Cloud Object Storage provides a cost-effective, scalable, secure, highly available and easily manageable solution. It is available both as software and as appliances, with configurations starting from terabytes and scaling to exabytes.

IBM Spectrum Discover

IBM Spectrum Discover is modern metadata management software that helps bring structure to unstructured data. Spectrum Discover speeds up the data preparation stage by cataloging and classifying unstructured data and removing duplicate data during the classification and transformation process of the AI workflow.

IBM FlashSystem

Data analysis and training of AI systems is the stage where faster storage is the key to quick results. IBM FlashSystem is built on patented IBM FlashCore technology to provide excellent performance and a highly competitive total cost of ownership.

IBM Storage solutions for AI support AI frameworks like TensorFlow, Caffe, Spark and CNTK. Organizations interested in adopting AI need to work with providers that offer a wide range of storage solutions to support every stage of AI process — and IBM Storage can do that.

Tuesday, 18 February 2020

Top 5 DevOps predictions for 2020

There are five DevOps trends that I believe will leave a mark in 2020. I’ll walk you through all five, plus some recommended next steps to take full advantage of these trends.

In 2019, Accenture's disruptability index found that at least two-thirds of large organizations are facing high levels of industry disruption. Driven by pervasive technology change, organizations pursued new and more agile business models and new opportunities. Organizations that delivered applications and services faster were able to react more swiftly to those market changes and were better equipped to disrupt, rather than becoming disrupted. A study by the DevOps Research and Assessment Group (DORA) shows that the best-performing teams deliver applications 46 times more frequently than the lowest-performing teams. That means delivering value to customers every hour, rather than monthly or quarterly.

2020 will be the year of delivering software at speed and with high quality, but the big change will be the focus on strong DevOps governance. The desire to take a DevOps approach is the new normal. We are entering a new chapter that calls for DevOps scalability, for better ways to manage multiple tools and platforms, and for tighter IT alignment to the business. DevOps culture and tools are critical, but without strong governance, you can’t scale. To succeed, the business needs must be the driver. The end state, after all, is one where increased IT agility enables maximum business agility. To improve trust across formerly disconnected teams, common visibility and insights into the end-to-end pipeline will be needed by all DevOps stakeholders, including the business.

DevOps trends in 2020


What will be the enablers and catalysts in 2020 driving DevOps agility?

Prediction 1: DevOps champions will enable business innovation at scale. From leaders to practitioners, DevOps champions will coexist and share desires, concerns and requirements. This collaboration will include the following:

◉ A desire to speed the flow of software

◉ Concerns about the quality of releases, release management, and how quality activities impact the delivery lifecycle and customer expectations

◉ Continual optimization of the delivery process, including visualization and governance requirements

Prediction 2: More fragmentation of DevOps toolchains will motivate organizations to turn to value streams. 2020 will be the year of more DevOps for X, DevOps for Kubernetes, DevOps for clouds, DevOps for mobile, DevOps for databases, DevOps for SAP, etc. In the coming year, expect to see DevOps for anything involved in the production and delivery of software updates, application modernization, service delivery and integration. Developers, platform owners and site reliability engineers (SREs) will be given more control and visibility over the architectural and infrastructural components of the lifecycle. Governance will be established, and the growing set of stakeholders will get a positive return from having access and visibility to the delivery pipeline.


Figure 1: UrbanCode Velocity and its Value Stream Management screen enable full DevOps governance.

Prediction 3: Tekton will have a significant impact on cloud-native continuous delivery. Tekton is a set of shared open-source components for building continuous integration and continuous delivery systems. What if you were able to build, test and deploy apps to Kubernetes using an open source, vendor-neutral, Kubernetes-native framework? That’s the Tekton promise, under a framework of composable, declarative, reproducible and cloud-native principles. Tekton has a bright future now that it is strongly embraced by a large community of users along with organizations like Google, CloudBees, Red Hat and IBM.

Prediction 4: DevOps accelerators will make DevOps kings. In the search for holistic optimization, organizations will move from providing integrations to creating sets of “best practices in a box.” These will deliver what is needed for systems to talk fluidly while remaining auditable for compliance. These assets will become easier to discover, adopt and customize. Test assets that have traditionally been developed and maintained by software and system integrators will be provided by ambitious user communities, vendors, service providers, regulatory services and domain specialists.

Prediction 5: Artificial intelligence (AI) and machine learning in DevOps will go from marketing to reality. Tech giants, such as Google and IBM, will continue researching how to bring the benefits of DevOps to quantum computing, blockchain, AI, bots, 5G and edge technologies. They will also continue to look at how technologies can be used within the activities of continuous deployment, continuous software testing prediction, performance testing, and other parts of the DevOps pipeline. DevOps solutions will be able to detect, highlight, or act independently when opportunities for improvement or risk mitigation surface, from the moment an idea becomes a story until a story becomes a solution in the hands of their users.

Next steps


Companies embracing DevOps will need to carefully evaluate their current internal landscape, then prioritize next steps for DevOps success.

First, identify a DevOps champion to lead the efforts, beginning with automation. Establishing an automated and controlled path to production is the starting point for many DevOps transformations and one where leaders can show ROI clearly.

Then, the focus should turn toward scaling best practices across the enterprise and introducing governance and optimization. This includes reducing waste, optimizing flow and shifting security, quality and governance to the left. It also means increasing the frequency of complex releases by simplifying, digitizing and streamlining execution.

Figure 2: Scaling best practices across the enterprise.

These are big steps, so accelerate your DevOps journey by aligning with a vendor that has a long-term vision and a reputation for helping organizations navigate transformational journeys successfully. OVUM and Forrester have identified organizations that can help support your modernization in the following OVUM report, OVUM webinar and Forrester report.

Monday, 17 February 2020

QAD turns cost centers into profit centers with IBM Cloud


Cloud has been an important part of our strategy at QAD for well over a decade. In fact, among the established global manufacturing enterprise resource planning (ERP) and supply chain software providers, QAD was one of the first to offer cloud-based solutions, starting with QAD Supplier Portal in 2003 and then ERP in 2007.

Despite our early start in the cloud, it took several years for cloud ERP to reach critical mass in general, and in the manufacturing markets that are QAD's focus in particular. IT departments, comfortable with running their own data centers, were skeptical about cloud's ability to deliver the right level of availability, scalability, security and performance.

Driving manufacturing cloud ERP adoption


There were two key drivers behind increased cloud ERP adoption. First, the recession in the late 2000s forced IT departments to move away from being cost centers and toward being profit centers. For example, IT departments found ways to increase speed to market, added efficiencies to processes and provided decision-makers with useful analytics. This segued into the second driver of the move to the cloud: removing the burden of running ERP and allowing more time for business differentiation. Moving to QAD Cloud not only provides support 365 days a year, but also delivers 99.987 percent application uptime, disaster recovery and a complete defense-in-depth security suite.
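For perspective, that uptime figure leaves a yearly downtime budget of roughly an hour, as this quick back-of-the-envelope calculation shows (a sketch, not a formula from any QAD SLA):

```python
# 99.987% uptime allows about 68 minutes of downtime per year.
uptime = 0.99987
minutes_per_year = 365 * 24 * 60             # 525,600
downtime_minutes = (1 - uptime) * minutes_per_year
print(round(downtime_minutes, 1))            # ~68.3
```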

Another milestone in QAD’s evolution toward the cloud came in 2015, when we unveiled an initiative, called Channel Islands, to begin rearchitecting our applications and the underlying platform for the cloud era, for Industry 4.0 and for smart manufacturing. We also started to take better advantage of emerging technologies.

Moving to a global cloud provider


These drivers have sustained cloud growth of roughly 30 percent year over year for several years for QAD. While we were already working with a few cloud providers, it became clear to us that we needed another cloud provider that was strong outside of North America. IBM, which had acquired SoftLayer a few years earlier, and which had an excellent reputation for cloud management, was operating IBM Cloud in places like Australia. We investigated further and found that IBM had international cloud facilities in several key regions that matched well to our expanding customer base, including Paris, Singapore and Hong Kong.

IBM was also a front-runner vendor based on its system availability run rate. Its data centers are at the high-tier classification and are designed to provide the highest level of availability and security that manufacturers need. These factors, coupled with its crisp and consistent execution, made IBM the obvious choice.

Maximizing IBM Cloud collaboration opportunities


IBM’s delivery of service has continuously met our needs. We have service level agreements (SLAs) with IBM, and we extend those SLAs to our customers. Our track record working with IBM has been excellent. The company understands that the job is not done by simply delivering to current SLAs. We have weekly operational calls with IBM to align our business and track the KPIs that drive our excellent service to our customers. We also collaborate at monthly strategic meetings to discuss short-term technology roadmaps such as improvements to our VMware deployments to ensure we maintain the highest availability. Finally, IBM’s recent acquisition of Red Hat cements our relationship even further since Red Hat Enterprise Linux is the primary OS used for QAD Adaptive ERP in the cloud.

Speaking of collaboration, I was recently invited to join an IBM customer advisory board. In that role, I'll be providing feedback as the voice of the customer and the voice of the customer's customer. With this kind of input, IBM can continue to provide technology that supports the next generation of ERP and supply chain solutions. IBM has had a really open mind in terms of working from the outside in when developing its cloud technology roadmap.

What is the number one benefit of working with IBM? Understanding that the highest availability is assumed, and that teamwork and collaboration produce results that exceed the SLA in terms of service delivery. IBM provides enterprise-class service delivery to us, and we extend that to our customers. Whenever we pick up the phone and ask for guidance or support, it's always there. I cannot think of a single time when we needed IBM and IBM wasn't responsive.

Sunday, 16 February 2020

RPA and Intelligent Automation: Why establish an Automation Center of Excellence?


Through my various roles at IBM, I have had the pleasure of meeting a wide variety of Nordic clients on a regular basis. One thing they all have in common: they are on a quest to add value in an ever-changing digital era.

Value is determined by assessing current performance together with future potential.

That means executives are on a never-ending journey to operate efficiently in the present, while building the future and a better version of their organisations.

Thriving in the digital era has become imperative to succeed in both disciplines.

So how do companies thrive in a changing digital landscape?


In order to operate efficiently and build the future generations of themselves, the majority of the companies I meet are leveraging a colorful palette of usual-suspect technologies like AI, cloud, blockchain, robotic process automation and chatbots. This is, of course, coupled with a mature digital core built on, for example, the ERP system. (As many readers may know, this core is not always geared for the 2020 version of digital transformation, but that is a different blog post.)

Common to these usual-suspect technologies is the aim to intelligently reinvent the way processes and people work, with the goal of delivering value to customers and shareholders.


A core part of what we at IBM do is exactly this: designing, implementing and managing intelligent and automated processes to increase business value.

In Denmark, for example, we are working with the retailer Salling, leveraging Robotic Process Automation (watch this video); the financial service provider Industriens Pension, leveraging AI and conversational agents; and Fødevarestyrelsen, a public institution, leveraging intelligent workflows and business process management.

All cases are examples of using intelligent automation to make processes and people smarter and faster.

The key to success for these initiatives is to start small and create measurable business value in production, while understanding from the get-go that, at some point, scalability, infrastructure, processes and organisation will become key enablers of continued success and return on investment.

This is one of the reasons why we at IBM work with many clients on establishing (or re-launching) their Automation Center of Excellence.

Why are companies implementing an Automation Center of Excellence (CoE)?


First of all, it is impossible to integrate People, Processes and Technology in complete harmony.

It just will never happen, because of inherent struggles like organisational politics, the speed at which technology moves and the need for stability in core business processes and applications.

So stop aiming for the perfect state. Period.

The purpose of the Automation CoE is to get clients as close to perfect as possible by executing on the automation/digital strategy. It is the intersection of People, Processes and Technology that makes intelligent automation real.

The CoE defines the governance structure and helps identify and prioritise the pipeline. It furthermore provides the skills and technology to design, develop and manage the automations. Throughout the cycle, the CoE guides technology selection, resource management and benefits realisation at scale.
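
To make “identify and prioritise the pipeline” concrete, here is a minimal Python sketch of how a CoE might rank automation candidates with a benefit-versus-effort heuristic. The criteria, weights and example figures are illustrative assumptions, not an IBM methodology:

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        annual_hours_saved: float    # estimated hours returned to the business per year
        process_stability: float     # 0 to 1: how rule-based and stable the process is
        implementation_weeks: float  # rough build effort

    def score(c: Candidate) -> float:
        # Benefit-versus-effort heuristic: favour stable, high-volume
        # processes that are quick to automate. The weighting is an assumption.
        return (c.annual_hours_saved * c.process_stability) / max(c.implementation_weeks, 1.0)

    pipeline = [
        Candidate("Invoice matching", 4000, 0.9, 8),
        Candidate("Customer onboarding", 2500, 0.6, 12),
        Candidate("HR report consolidation", 800, 0.95, 3),
    ]

    for c in sorted(pipeline, key=score, reverse=True):
        print(f"{c.name}: score = {score(c):.0f}")

A real CoE would add criteria such as compliance risk and application stability, but the principle is the same: a transparent, repeatable way to decide what gets automated next.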

What does a successful CoE look like?


The correct (and admittedly unsatisfying) answer is: “It depends”.

The CoE can have different operating models depending on the organisation in which it sits. Among other things, this comes down to whether the CoE should be federated (decentralised), centralised or a hybrid of the two. Furthermore, it is CRITICAL that the CoE connects stakeholders from Executive Management, Business and IT. This ensures that all parties work towards a shared strategic business goal. The mutual, long-term success of automation at scale depends on (near) frictionless cooperation.

To support this, HfS Research invited IBM and 32 other automation industry experts in autumn 2019 to discuss the rules that make automation efforts a success.

Guess what? “Collaboration between IT and business” came out as Rule #1, and as Rule #3 the expert panel highlighted that “the automation strategy has to clearly connect with an overarching business strategy”.

The CoE facilitates exactly this, so why not get started right away?

IBM’s model for CoE competency areas. Source: IBM

Saturday, 15 February 2020

Storage made simple for hybrid multicloud: the new IBM FlashSystem family


In part one of this blog post series, we discussed IBM’s approach for delivering innovation while simplifying your storage infrastructure, reducing complexity, and cutting costs. Now let’s take a closer look at the new IBM FlashSystem family, a single platform designed to deliver on those goals while continuing to provide extensive innovation for your enterprise-class storage solutions and your hybrid multicloud environments.

Technology innovation


IBM FlashCore Modules (FCMs) deliver both data compression and FIPS 140-2 certified data-at-rest encryption with NO performance penalty.

Space is always at a premium in data centers and cloud configurations. IBM is introducing a new FlashCore Module with 38.4TB usable capacity, twice the size of the previous largest module. Using this new FlashCore Module, IBM FlashSystem supports a maximum of 4PB of data in only 2 rack units (2U) of space for extraordinary density and efficiency. These new FlashCore Modules are also available as upgrades for IBM’s previous flash array generation: FlashSystem 9100 and Storwize V5100 and V7000 Gen3.
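
As a back-of-the-envelope check on that density claim (assuming a fully populated 24-slot 2U enclosure, which is an assumption on my part), the arithmetic shows how in-module compression bridges the gap between usable and effective capacity:

    # Rough capacity arithmetic for a 2U enclosure of 38.4TB FlashCore Modules.
    # The 24-slot count is an assumption for illustration; actual effective
    # capacity always depends on how compressible your data is.
    slots = 24
    usable_tb_per_module = 38.4
    raw_usable_tb = slots * usable_tb_per_module
    print(f"Usable capacity in 2U: {raw_usable_tb:.1f} TB")        # 921.6 TB
    required_reduction = (4.0 * 1000) / raw_usable_tb              # to reach 4 PB
    print(f"Data reduction needed for 4 PB: {required_reduction:.1f}:1")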

Storage-class memory


To provide ultra-low latency for performance-sensitive but less cache-friendly workloads, IBM is also introducing storage-class memory (SCM) drives from Intel and Samsung as a persistent storage tier for the IBM FlashSystem family. Like the new FlashCore Modules, SCM drives are also available as upgrades for our previous generation of all-flash arrays: FlashSystem 9100 and Storwize V5100 and V7000 Gen3.

Effortless management


With flexible drive options including storage-class memory (SCM), all-flash and hybrid configurations, the IBM FlashSystem family can help you craft the optimal balance between cost and performance. And through its IBM Spectrum Virtualize software foundation, IBM FlashSystem can integrate storage from over 500 other heterogeneous storage systems.

To optimize system performance while delivering cost efficiency, IBM FlashSystem includes the AI-driven IBM EasyTier capability, which helps place the right data on the right tier at the right time. The system transparently moves the hottest blocks of data up the tiers while cooler blocks are moved down; there’s nothing for you to do except set it and forget it. With EasyTier, you can configure storage environments with a small amount of the highest-performing tiers and a larger quantity of more economical storage.
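
Very roughly, the underlying idea is heat-based placement: track how often each extent is accessed and keep the hottest extents on the fastest tier. The Python sketch below illustrates the general principle only; it is not IBM’s actual EasyTier algorithm:

    # Illustrative heat-based tiering heuristic (NOT the EasyTier algorithm):
    # promote the most frequently accessed extents to the fast tier,
    # within its capacity, and leave the rest on the capacity tier.
    def place_extents(access_counts, fast_tier_slots):
        ranked = sorted(access_counts, key=access_counts.get, reverse=True)
        hot = set(ranked[:fast_tier_slots])
        return {ext: ("fast" if ext in hot else "capacity") for ext in ranked}

    counts = {"ext-a": 900, "ext-b": 15, "ext-c": 480, "ext-d": 3}
    print(place_extents(counts, fast_tier_slots=2))
    # {'ext-a': 'fast', 'ext-c': 'fast', 'ext-b': 'capacity', 'ext-d': 'capacity'}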

IBM Storage Insights provides monitoring, AI-based alerts, reporting and support capabilities from IBM Cloud. Storage Insights Pro simplifies storage further with a single management pane covering cloud storage managed by IBM Spectrum Virtualize for Public Cloud, as well as EMC Unity and Unity XT, NetApp FAS and AFF, and Hitachi VSP G-series storage.

“IBM’s FlashSystem announcement is a transformational change for the storage market,” says Bob Elliot, Vice President, Storage Sales at IBM Business Partner Mainline Information Systems. “By combining multiple award-winning solutions into one family, IBM is not only protecting existing customers’ investments, they are also providing strong innovation into the future. FlashSystem creates a highly scalable, incredibly adaptable, flexible and seamless solution from entry level to high end, extending to our clients’ hybrid multicloud environments. And since it is predominantly a channel-based family, it is extremely well suited to the partner environment. I think it’s a win-win for both customers and partners.”

Innovative new storage systems


Built with award-winning IBM Spectrum Virtualize software and including IBM Storage Insights, the new IBM FlashSystem family adds four new enterprise-class systems that deliver the benefits of IBM FlashCore Modules, storage-class memory, and the enterprise-class data services of IBM Spectrum Virtualize, while seamlessly connecting and managing data in your hybrid multicloud deployments.

When application growth calls for more storage capacity:

◉ IBM FlashSystem 7200: End-to-end NVMe and sophisticated enterprise-class hybrid multicloud functionality in a system designed for mid-range enterprise deployments. Supporting both scale-up with expansion enclosures and scale-out with up to 4-way clustering, FlashSystem 7200 delivers 43 percent higher performance than Storwize V7000 Gen3, with a maximum of 9.2M IOPS, and 55 percent better throughput, with a maximum of 128GB/s. All this comes at a lower list price than our previous generation.

◉ IBM FlashSystem 9200: End-to-end NVMe in a system designed for the most demanding enterprise requirements. FlashSystem 9200 delivers comprehensive storage functionality and our highest levels of performance: 20 percent better than FlashSystem 9100, with a maximum of 18M IOPS and 180GB/s per 4-way cluster. Both FlashSystem 7200 and 9200 also deliver latency as low as 70μs. All this comes at a lower list price than our previous generation.

◉ IBM FlashSystem 9200R: Designed for clients needing an IBM-built, IBM-tested complete storage system delivered assembled, with installation and configuration completed by IBM. The system includes 2-4 IBM FlashSystem 9200 control enclosures, Brocade or Cisco switches for clustering interconnect and optional expansion enclosures for additional capacity.

These systems are joined in the family by the entry-level enterprise FlashSystem 5010, 5030, and 5100. All members of the IBM FlashSystem family, except the all-flash FlashSystem 9200 and 9200R, are available in both all-flash and hybrid flash models.

When modernizing heterogeneous storage, the trail-blazing IBM SAN Volume Controller (SVC) provides virtual storage consolidation of over 500 heterogeneous storage systems for simplicity, modernization, hybrid cloud capability and more. SVC uses the same IBM Spectrum Virtualize software and platform as IBM FlashSystem but supports only external storage. Two models are available: one delivers a maximum of 40 percent more IOPS than our previous system at the same list price; the other, designed for more price-conscious deployments, offers slightly better performance than our previous model at a 25 percent lower list price.

“IBM’s recent announcement adds new enterprise-class features in a storage solution that spans our workload needs from bare metal to virtualized, from container to hybrid multicloud,” says Dave Anderson-Ward, Server and Storage Technical Team Lead at the UK Ordnance Survey. “We’re looking forward to the new end-to-end NVMe generation of the FlashSystem family that is engineered to deliver lower latency and twice the capacity in only 2RU. IBM’s FlashCore Module technology provides us the leading-edge features, such as compression and data-at-rest encryption with no performance penalty, that we need to keep our data center and cloud environments ahead of our competitors.”

Together these IBM FlashSystem models deliver a single enterprise-class platform for simplicity and consistency across all deployments, from the smallest entry configurations to the largest high-end enterprise systems to hybrid multicloud. They are designed to work with the servers you have today and are ready for containerized deployments using Red Hat OpenShift, Kubernetes, and CRI-O through their support for the Container Storage Interface (CSI).
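
As an illustration of what CSI support means in practice, a CSI-backed array is consumed from Kubernetes like any other dynamically provisioned storage. This minimal sketch uses the official Kubernetes Python client; the StorageClass name “ibm-flashsystem-block” is a hypothetical placeholder that would come from your actual CSI driver deployment:

    # Minimal sketch: requesting a volume from a CSI-backed StorageClass.
    from kubernetes import client, config

    config.load_kube_config()
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="ibm-flashsystem-block",  # hypothetical name
            resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)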

With IBM FlashSystem 5100, 7200, 9200, and 9200R, IBM delivers end-to-end NVMe storage across the range of deployments, with the efficiency and cyber resiliency benefits of IBM FlashCore Modules and the ultra-low latency of storage-class memory, all backed by IBM Storage Insights management and streamlined support.

If you are considering new storage or are simply trying to modernize the storage you already have, you’ll find the new IBM FlashSystem family the right choice for your next storage acquisition.