Saturday 29 August 2020

Innovation that helps to accelerate the world of business


With over 3,000 patents issued or in process, IBM z15 represents the result of 4 years of technological development. This week, IBM Z is announcing enhancements in cloud native development and deployment, encryption everywhere, cyber resilience and flexible compute. This powerful current of innovation carries through to IBM DS8900F and IBM TS7770 and is reflected in the new enhancements within both storage families announced today.

“The basic mainframe approach to data processing fits exceptionally well with modern workloads such as artificial intelligence and real-time analytics,” notes Mark Tansey, Storage Solutions Sales Director for long-time IBM Business Partner Vicom Infinity. “We see a strong future for mainframe deployment and sales. This means that we must provide our clients with complementary data storage solutions that offer equal levels of performance, reliability, and security. IBM DS8900F and TS7700 VTL systems fill the bill perfectly.”

Built with advanced storage technologies, the IBM DS8900F family benefits from years of research and deep collaboration between the IBM Storage and IBM Z teams to deliver business value for mainframe deployments. Announced today, the memory cache in the IBM DS8950F storage solution is increased by 70 percent to a whopping 3.4 terabytes.

This enhancement enables users to consolidate more IBM Z and IBM LinuxONE mission-critical workloads into a single storage solution. For example, the consolidation of 8 IBM DS8870 systems into a single IBM DS8950F solution can potentially reduce CAPEX by 35 percent and OPEX over a 3-year period by 91 percent while continuing to meet performance service-level agreements (SLAs).


In addition, IBM continues to provide high levels of synergy with the mainframe by enabling the integration of IBM DS8910F within the IBM z15 model T02 and LinuxONE III model LT2 solutions, announced on April 14. This mainframe-storage integration can help to maximize the value of your mainframe environments in a single 19-inch industry standard rack.

Along with increases in performance and efficiency within the DS8900F family itself, today IBM Storage is also announcing multiple enhancements to the way IBM DS8900F and TS7770 systems work together with the goal of accelerating mainframe-driven enterprises. Transparent cloud tiering (TCT) seamlessly automates the movement of data to and from the cloud and offloads much of the management workload to the DS8900F and TS7770 storage systems, reducing mainframe CPU utilization by 50 percent for these tasks.

Additionally, the IBM DS8900F has added the capability to compress data prior to transfer across TCP/IP connections to TS7770 systems configured as object storage targets. This new feature offers multiple potential benefits:

◉ The compression engine — and TCT itself — does not affect overall system IOPS
◉ No additional servers or gateways are needed
◉ Less network bandwidth is required to move data, helping to increase performance across multicloud environments
◉ Three times more data can be stored at the target system, potentially reducing CAPEX by 55 percent and OPEX over a 3-year period by 44 percent
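
As a rough, hypothetical sketch of the compression-before-transfer idea (the real feature runs inside the DS8900F itself, not on a host), the Python snippet below compresses a payload before shipping it to an object-storage target. The bucket name and the upload helper are invented for illustration only.

import zlib

def upload_object(bucket: str, key: str, payload: bytes) -> None:
    """Stand-in for an object-storage PUT to a system configured as an object target."""
    print(f"PUT {bucket}/{key}: {len(payload)} bytes sent over the wire")

def send_compressed(bucket: str, key: str, data: bytes) -> float:
    """Compress before transfer and report the fraction of bandwidth avoided."""
    compressed = zlib.compress(data, level=6)
    upload_object(bucket, key + ".z", compressed)
    return 1 - len(compressed) / len(data)

if __name__ == "__main__":
    sample = b"mainframe backup record " * 10_000   # highly repetitive data compresses well
    saved = send_compressed("ts7770-target", "volume0001", sample)
    print(f"Network bandwidth reduced by {saved:.0%}")

With a roughly 3:1 ratio, the same physical capacity on the target holds about three times the logical data, which is where the CAPEX and OPEX projections above come from.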

Data security is another area of enhancement within the Storage for IBM Z ecosystem. By adding NIST SP 800-131A-compliant encryption, in-flight encryption can be extended between DS8900F arrays and all individual VTL systems in a TS7770 grid. With no need to configure key groups, key managers, or other configurable items, encryption is now even easier within your mission-critical hybrid multicloud environments. Also, IBM TS7770 solutions now offer dual control security authentication with a “maker” and “checker” approach designed for multi-tenancy environments.

The Data Facility Storage Management Subsystem (DFSMS™) leverages TCT to create full volume backups to the cloud which can be restored to any DS8900F system. This dovetails nicely with new cloud-based disaster recovery capabilities for IBM VTL and tape solutions. Data sets can now be restored to any empty TS7770 system outside TS7770 grids using Cloud Connect technology. As volumes are created in a grid, TS7770 Cloud Connect copies them to the assigned cloud pools, where they can be managed by DFSMS. Version retention is enabled within each cloud pool, allowing previous versions to be retained long enough to meet any requirement. These new capabilities are supported in IBM Cloud, AWS S3, IBM Cloud Object Storage on premises, and RStor.

IBM DS8900F and TS7770 are not the only IBM Storage families that continue to deliver extensive innovation. Today we are announcing SAS interface and Ethernet enhancements to IBM TS1160 tape drives, plus new support of TS1160 drives by IBM TS4500 tape libraries.

“The deep integration between IBM DS8900F, IBM TS7770 and IBM Z offers a number of advantages to mainframe users,” states David Hill, Senior Analyst at Mesabi Group, “but perhaps the most important benefit is simplification. IBM has done a good job in providing a comprehensive portfolio of storage for IBM Z solutions to address every enterprise data storage requirement and meet the demands of a modern, agile and secure-to-the-core infrastructure. This is important to customers since it decreases complexity and reduces costs, and they tend to receive better support.”

These new announcements are a sign of constant technological innovation to further strengthen the mission-critical capabilities of the IBM DS8900F and IBM TS7700, enabling the storage foundation for your mission-critical multicloud environments.

Source: ibm.com

Thursday 27 August 2020


Your journey to AIOps includes IBM Z


The challenges of a digital business


At the heart of today’s digital business is the quality of the experience it delivers to customers, partners and employees.  But today, delivering the level of service your customers have come to expect is more complicated. In the midst of rising costs, skills shortages and ever-growing security threats, you also have to adapt quickly to shifts in demand patterns brought on by an all-digital workforce and rapidly changing buyer behavior. And that requires putting extra emphasis on the resiliency and performance of your business processes and supporting applications.

For larger IT organizations with increasingly hybrid and complex application landscapes that often include IBM Z, it’s essential to take a comprehensive approach to IT operations. The challenge becomes: How do you effectively sift through terabytes of data in real time to identify an issue before it becomes an outage? That’s why organizations are drawn to the promise of AIOps to leverage AI-driven intelligence and automation to make quick and accurate decisions to maintain resiliency.

A holistic approach to AIOps that includes IBM Z


You cannot successfully adopt AIOps in pieces or silos. It requires a holistic approach focused on business processes and workflows. The resiliency of a workflow depends on the health of every link in the chain. To succeed, you need visibility and insight throughout the entirety of the workflow.

Once you understand that you must apply AIOps holistically to reach its full potential, it becomes clear why leaving the mainframe out of the equation creates a significant gap.

“The mainframe must be central to an AIOps adoption effort not because it is somehow more important than the rest of the stack, but simply because it is an essential element of most business-critical workflows.”

Achieving a holistic approach to AIOps requires intelligent tools and processes that provide hybrid cloud visibility, leverage AI and machine learning in a simple, explainable way, and automate timely actions to avoid customer impact.

Accelerate your journey to AIOps


Over the years, IBM has worked with hundreds of organizations to help them mature how they run their data centers. To make the lessons learned from these client interactions more consumable, IBM has produced a framework that can be used as an aid to accelerate your journey to AIOps. This is a pragmatic framework, intended to prompt a fact-based discussion about where you are and where it makes sense to go based on your business drivers and your pain points. Let’s have a look.

The four stages in the journey to AIOps are:

◉ Firefighting: The IT organization isn’t prepared for an issue and has to throw a lot of highly skilled people at the problem to resolve it. In this stage, consumers experience outages for long periods, and the mean time between failures (MTBF) is very low.

◉ Reactive: The IT organization knows about the potential issues and has procedures in place to resolve them. In this stage, consumers experience outages, but the outages are shorter thanks to faster resolution times.

◉ Proactive: Teams leverage tools to proactively look for issues or anomalies in the system and stage failure scenarios to test response times. In this stage, consumers experience a low number of outages, and the mean time between failures is high.

◉ Intelligent: Teams adopt AI pervasively and in a fully explainable way, applying machine learning to identify non-trivial anomalies, find trends, forecast problems, and remediate them before they become a service disruption.

To help you accelerate through the stages of AIOps, you need to integrate a broad set of practices and capabilities. We have divided these practices into three areas.

◉ Detect: Identify potential issues as soon as possible, ideally before they disrupt your business. To accomplish this, we need to focus on three areas: monitor your complete infrastructure and end-to-end application performance, generate alerts for incidents, and apply analytics for early detection of anomalies (a minimal sketch of this last practice follows this list).

◉ Decide: Rapidly isolate the problem, do root cause analysis, and decide on the right actions. Numerous practices and technologies are used to reach this goal, including artificial intelligence to aid in the analysis and decision making, and ChatOps to collaborate across frequently siloed teams or team members.

◉ Act: Apply automation to enable teams to respond rapidly and preempt disruptions. This includes automating runbooks, increasing the level of automation so systems can reduce the need for manual intervention while taking self-correcting actions for more and more issues, and delivering an integrated orchestration and automation solution across our hybrid cloud infrastructure.
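
To make the Detect practice concrete, here is a minimal, hypothetical sketch of one common technique: flagging samples in a stream of performance metrics whose rolling z-score deviates sharply from recent behavior. It illustrates the idea only and is not IBM's AIOps tooling; the window size and threshold are arbitrary choices.

from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Yield (index, value) pairs that deviate sharply from the recent window."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

if __name__ == "__main__":
    # Simulated CPU-utilization stream with a sudden spike at the end.
    metrics = [40 + (i % 5) for i in range(60)] + [95]
    for idx, val in detect_anomalies(metrics):
        print(f"possible incident at sample {idx}: value {val}")

In a full AIOps workflow, this per-metric signal would feed the Decide stage, where correlation across metrics, logs and topology narrows the anomaly to a probable root cause.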

The illustration below summarizes how businesses evolve as they progress through this journey.


Assess where you are with AIOps


The journey to AIOps is incremental, and each customer will take a slightly different path. Based on our work with many customers, we have captured a set of best practices that can help accelerate that journey. By organizing these practices into a well-defined descriptive framework, we enable a meaningful, fact-based discussion that can help your organization assess where you are on this journey and determine a plan for where you want to go.

Monday 24 August 2020

Inventory management and your omnichannel retail dream


Omnichannel retail, omnichannel commerce – it’s all the same. Providing your customers with a consistent, seamless experience and allowing them to move between channels to order, edit, collect or return is now standard.

With separate systems across organizations, how do you merge everything so orders, inventory and fulfillment options are shared? This exact challenge created the traditional order management system. Today, most of the largest retailers are fully omnichannel. They treat their warehouse and store inventory as one. Orders can be placed and managed across all channels. And fulfillment options are limited only by your imagination (buy online, pick up in store; ship from store; curbside pickup; and more).

Getting started


If you are not one of these early adopters of omnichannel retail, you’re probably wondering where to start. Combining multiple and often disparate systems can be difficult, and many internally designed systems weren’t built for omnichannel options. That means they can be difficult and costly to maintain as customers demand more fulfillment options, which your competitors may already provide.

Inventory is the first step for all omnichannel retailers. You can’t make delivery commitments to your customers if you don’t know where all your inventory lives. Let’s walk through a few initial steps that will guide you in your journey to omnichannel inventory visibility.

Step 1: Determine where your inventory exists across your network and across the systems that track it. This gives you a clear picture of the scope of potential issues, the effort involved, and the cost to your organization.
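
As a simple, hypothetical illustration of what Step 1 produces, the snippet below merges inventory records held in separate warehouse and store systems into one network-wide view per SKU. The system names and record layout are invented for the example.

from collections import defaultdict

# Invented sample records from two separate systems of record.
warehouse_system = [
    {"sku": "SHIRT-M-BLUE", "location": "DC-EAST", "on_hand": 120},
    {"sku": "SHIRT-M-BLUE", "location": "DC-WEST", "on_hand": 80},
]
store_system = [
    {"sku": "SHIRT-M-BLUE", "location": "STORE-0042", "on_hand": 7},
    {"sku": "JEANS-32-BLK", "location": "STORE-0042", "on_hand": 3},
]

def network_inventory(*sources):
    """Return total on-hand units per SKU across every source system."""
    totals = defaultdict(int)
    for source in sources:
        for record in source:
            totals[record["sku"]] += record["on_hand"]
    return dict(totals)

if __name__ == "__main__":
    for sku, units in network_inventory(warehouse_system, store_system).items():
        print(f"{sku}: {units} units available network-wide")

That consolidated view is the raw material for the real-time inventory visibility discussed in Step 3.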

Step 2: Ask yourself – do you build or buy? Some companies feel they have more control over their order management system if they build it in-house. This way, they control all aspects of the system and have complete flexibility. However, other companies prefer to focus on innovating and delivering great customer experiences and don’t want to tie up resources building such a tool. In these cases, they prefer to purchase an off-the-shelf product.

One thing to keep in mind is how much customization you’ll need. You may have relatively simple plans initially – but think long-term and be able to expand your fulfillment options as new ideas are developed.

Step 3: Decide if you are going to update your entire order management system or start with getting an accurate real-time view of your inventory. Figuring this out early on is critical to your success. If your company is ready to jump all in and update your order management system, you are set. However, if your company wants to take a more conservative approach, you could start with just implementing an inventory visibility solution so you have real-time, accurate inventory data across all channels.

Step 4: Build a business case to justify the investment. To do this, read this recent report from Forrester. Additionally, familiarize yourself with the benefits that other retailers have achieved with an intelligent fulfillment platform:

◉ Eileen Fisher increased accuracy and trust in inventory data, helping associates serve customers faster.
◉ Fossil can now reallocate stock in a dynamic way when inventory for an in-demand product is running low on one of its channels.
◉ REI uses artificial intelligence to factor in various goals throughout the year, such as product margin, shipping speed and fulfillment costs, and matches that to its inventory in its three distribution centers and 155 stores.

Step 5: Start assessing different vendors to see which is the best fit for your needs. This is a very important point in the journey, as the choice you make in this step will most likely be part of your company for the next 10+ years.

If you’ve made it past step 5, that is when the hard work really begins. Now is the time to maximize your efforts and test, test and test again in order to optimize and glean value from your inventory management journey.

Source: ibm.com

Friday 21 August 2020

IBM Power Systems Announces POWER10 Processor


The accelerating shift to hybrid cloud models worldwide requires new tools to provide greater flexibility, efficiency and security across the systems enterprises use every day. Today, IBM announced the POWER10 processor at the Hot Chips 2020 conference, bringing an innovative set of capabilities to address these needs.

The IBM POWER10 processor underscores IBM’s belief in the fourth platform of IT: hybrid cloud. With hardware co-optimized for Red Hat software, IBM POWER10-based servers will deliver the future of the hybrid cloud when they become available in the second half of 2021.

The IBM POWER10 processor is equipped with enhancements to meet enterprise demands around capacity, security, energy efficiency, elasticity and scalability. In addition, the POWER10 processor can integrate AI into enterprise business applications to drive the future of enterprise computing.

POWER10 processor innovations


Some of the new innovations of the POWER10 processor include:

◉ IBM’s first commercialized 7nm processor, expected to deliver up to a 3x improvement in capacity and processor energy efficiency within the same power envelope as IBM POWER9, allowing for greater performance.

◉ Support for multi-petabyte memory clusters with a breakthrough new technology called memory inception, designed to improve cloud capacity and economics for memory-intensive workloads from ISVs like SAP, the SAS Institute and others as well as large-model AI inference.

◉ New hardware-enabled security capabilities including transparent memory encryption designed to support end-to-end security. The IBM POWER10 processor is engineered to achieve significantly faster encryption performance with quadruple the number of AES encryption engines. In comparison to IBM POWER9, POWER10 is updated for today’s most demanding standards and anticipated future cryptographic standards like post-quantum and fully homomorphic encryption, and brings new enhancements to container security.

◉ New processor core architectures in the IBM POWER10 processor with an embedded matrix math accelerator which is extrapolated to provide 10x, 15x, and 20x faster AI inference for FP32, BFloat16, and INT8 calculations, respectively, per socket than the IBM POWER9 processor to infuse AI into business applications and drive greater insights.

Designed over five years with hundreds of new and pending patents, the IBM POWER10 processor is an important evolution in IBM’s roadmap for POWER. Systems taking advantage of IBM POWER10 are expected to be available in the second half of 2021.

Samsung will manufacture the IBM POWER10 processor, combining Samsung’s industry-leading semiconductor manufacturing with IBM’s CPU designs.

Tuesday 18 August 2020

IBM Cognos Controller Developer Role: Job Responsibilities and Description

IBM Cognos Controller supports the close, consolidation, and reporting process with the agility and affordability of a cloud-based solution. It enables finance teams to automate and accelerate the financial close with minimal IT support. It also helps finance teams deliver business results, create informative financial and management reports, and provide the chief financial officer (CFO) with an enterprise view of key financial ratios and metrics.

The IBM Cognos Controller Developer is responsible for setting up a Controller application by building account and company structures and setting up consolidation methods such as currency conversion, intercompany transactions, and investments in subsidiaries. The developer must also be prepared to design and generate financial reports used for economic analysis. This individual will be able to serve as an active team member on implementation projects.

IBM Cognos Controller requires neither large development resources nor tedious, costly programming. Run directly by finance, it allows users to make their own amendments to entities, account details, and organizational structures as needs arise.

Deploy Cognos Controller on Cloud Today

By doing so, you will be ready to lower costs and accelerate time to value. After deploying on the cloud, you can add users as required while minimizing capital expenses. You also get dependable uptime and keep your data secure. So, invest in Cognos Controller today for better financial management tomorrow.

IBM Cognos Controller is a complete platform for financial and management reporting. It combines power and flexibility to guarantee streamlined, best-practice financial consolidation and reporting. Its full suite of capabilities puts power in the hands of finance stakeholders, managers, line-of-business executives, and regulatory bodies. You can consider it the de facto starting point for planning, forecasting, budgeting, and other processes. The on-cloud deployment option will help your organization meet individual IT requirements without worrying about scalability.

The IBM Cognos Controller Developer is accountable for the design, development, and deployment of multi-dimensional reports using the Cognos Business Intelligence solution.

Additional responsibilities include:
  • Creating, improving, and supporting Framework Manager models to support business requirements.
  • Working with business users to create and develop Cognos BI 10.1 reports using Report Studio with data drill-down and slice-and-dice options.
  • Developing and implementing test plans to guarantee the successful delivery of a project.

IBM Cognos Controller Developer Responsibilities and Duties

  • Lead the business planning solution design, build, and deployment on TM1.
  • Lead iterative configuration and testing of the budgeting tool and guarantee the successful global implementation of the solution.
  • Design and develop the TM1 solution.
  • Evaluate the client’s current capabilities and give perspective on how to align the solution with best practices.
  • Interpret, understand, and design TM1 control objects, cubes, cube views, rules, attributes, dimensions, subsets, processes, and chores.
  • Help client IT set up the environment and interfaces.
  • Build data flows and interaction points on the tool.
  • Develop the governance and security matrix in the solution.

Skills:
  • At least 8+ years of experience with TM1; must be a subject matter expert in financial planning solutions, with hands-on expertise in designing and customizing TM1 and experience supporting the environment.
  • Develop and perform tests on different enterprises and provide dashboard solutions with the help of Cognos software and assist all end users.
  • Coordinate with data architecture and modeling teams to design databases and prepare reports for the same.
  • Analyze all projects and prepare corporate plans for the same and provide support to all Cognos applications.
  • Prepare all reports for administration with the aid of Cognos Report Studio.
  • Design all needed Excel and PowerPoint templates and prepare all metadata models in coordination with Framework Manager.
  • Investigate and resolve all data issues and analyze all business obligations and conduct interviews with all stakeholders.
  • Coordinate with all customers, analyze test queries, analyze client issues, and assist all application analysts in doing all workflow requirements into efficient data points.
  • Prepare all designs and codes for various Cognos reporting objects and ensure compliance with all the best business systems in the industry.
  • Schedule and ensure compliance to all design code and test activities, prepare plans for the same, design various dashboards, and maintain the required structure for all processes.
  • Monitor work, ensure optimal utilization of all SQL scripts, document all database elements, analyze data anomalies, and prepare reports for the cycle.
  • Prepare all documents for reporting objects and monitor and resolve all desk ticket issues and perform troubleshoot on all Cognos objects to prepare reports and ensure agreement to all schedules.
  • Perform troubleshoot on all Cognos helpdesk tickets, resolve all queries for content, and prepare standard reports and ensure accuracy.
  • Provide support to all Cognos issues and maintain knowledge on all new techniques and prepare reports for all warehouse functions.

Summary

IBM Cognos Controller is a comprehensive, web-based solution that provides the power and adaptability needed for streamlined, best-practice financial reporting and consolidation. It offers built-in financial intelligence and advanced analytics to deliver timely, accurate information for enhanced decision support and regulatory compliance.

Thursday 13 August 2020

How predictive maintenance improves efficiencies across five industries


New technologies—including the rise of the Internet of Things (IoT)—and market pressures to reduce costs are pushing companies to move from reactive, condition-based maintenance and analytics to predictive maintenance. MarketsandMarkets forecasts the global predictive maintenance market to grow from USD 3.0 billion in 2019 to USD 10.7 billion by 2024.

Predictive maintenance is generally thought to be most applicable to the manufacturing industry. While manufacturing certainly benefits from proactive maintenance, which encompasses predictive and preventative efforts, predictive maintenance can be applied to and benefit a wide range of industry sectors.
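
Before turning to the client examples, here is a minimal, hypothetical sketch of the core idea: compare recent sensor readings against a baseline learned from healthy operation and raise a maintenance flag when readings trend outside the expected band. The vibration values and thresholds are invented for illustration.

from statistics import mean, stdev

# Invented vibration readings (mm/s RMS): a healthy baseline and a recent upward trend.
healthy_baseline = [0.42, 0.45, 0.43, 0.44, 0.41, 0.46, 0.44, 0.43]
recent_readings  = [0.47, 0.52, 0.58, 0.61, 0.66]

def needs_maintenance(baseline, recent, sigmas=3.0, min_hits=3):
    """Flag the asset if several recent readings exceed the healthy band."""
    upper = mean(baseline) + sigmas * stdev(baseline)
    hits = sum(1 for r in recent if r > upper)
    return hits >= min_hits, upper

if __name__ == "__main__":
    flag, limit = needs_maintenance(healthy_baseline, recent_readings)
    print(f"healthy upper bound: {limit:.3f} mm/s")
    print("schedule maintenance" if flag else "asset operating normally")

Production systems replace this simple band check with models trained on large volumes of IoT telemetry, but the goal is the same: act before the asset fails.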

Predictive maintenance: Five client case studies from five industries


IBM is helping companies across industries apply predictive maintenance to improve business performance. Below are five IBM client examples demonstrating how predictive maintenance in the cloud is helping businesses from five different industries excel.

Waste management


Government of Jersey cleans up waste management. The Government of Jersey is moving from reactive to proactive maintenance to better serve the approximately 100,000 residents of Jersey, the largest of the Channel Islands located off the coast of France. Maintenance had previously been done largely reactively and documentation sometimes took hours to find. Now, the Government of Jersey solid waste department is deploying solutions from IBM Business Partners Ennovia and CrazyLog on the IBM Cloud to address these challenges. The CrazyLog Quickbrain solution provides modules for maintenance management, including preventative maintenance scheduling, inventory management and a record of reactive maintenance. The government-run waste department now has greater visibility into its equipment and can more easily access and find relevant information from its 5,000 pieces of documentation.

Manufacturing


EcoPlant helps Israeli food companies improve efficiencies. Air compression systems are used by the food and beverage sector to package food, cut and shape food products and clean machinery. However, they’re also quite expensive to run, using as much as 30 percent of plant electricity, according to the US Department of Energy (DOE). Israeli startup EcoPlant is changing the landscape by helping food and agricultural manufacturing plants cut energy use, reduce costs and improve maintenance and visibility, all with predictive maintenance on IBM Cloud.

Building services


KONE keeps elevators running smoothly. KONE is in the business of keeping people in motion. Traditionally, elevators and escalators have been maintained on a calendar basis or when a problem occurred, but KONE recently launched its 24/7 Connected Services offering on IBM Cloud to provide predictive maintenance for its elevators. The Connected Services offering uses IBM Watson IoT and analytics to help reduce equipment downtime, minimize faults and provide more detailed information about equipment performance and usage.

Renewable energy


Performance for Assets increases windfarm efficiencies and output. Wind energy is on the rise globally, according to data from Wind Energy International, but windfarm owners have typically had limited or no insight into the condition of their machines. To address this gap, Performance for Assets (P4A) teamed with the IBM Garage to develop an advanced monitoring system for wind turbines in the IBM Cloud. Their solution is designed to help windfarm owners gain insights that’ll help them maintain wind turbines, thereby increasing energy output and profits.

Mining


Sandvik Mining and Rock Technology improves mining output and safety. Sandvik Mining and Rock Technology is bringing advanced predictive analytics to the mining industry. A common industry challenge is maintaining equipment; without properly functioning machinery, mining operations will slow drastically or cease altogether. Sandvik worked with IBM to enhance OptiMine, its information and process management solution. Running on IBM Cloud, the solution uses IBM Watson IoT and IBM Maximo Asset Management to analyze vast amounts of data and predict maintenance needs. Now, mining operators can better act on insights to improve production efficiency.

Source: ibm.com

Monday 10 August 2020

IBM Z — the digital reinvention continues


In today’s digital world, you must strike a balance between technical and business needs — addressing service delivery, availability, flexibility, skills and time to market — while optimizing your digital transformation to hybrid cloud. The platform underpinning your hybrid cloud strategy must be reliable, flexible and agile to meet your current business needs while helping you to prepare with confidence for the future.

Today, we’re making a series of exciting announcements to help our clients become even more flexible, fast and secure. We are extending the capabilities of IBM z15™ and LinuxONE III across cloud native development, data protection, flexible configuration and resiliency.

Accelerate your journey to cloud 


We’ve made a variety of enhancements enabling our clients to have a common developer experience on IBM Z and LinuxONE, including:

◉ Red Hat OpenShift Container Platform is generally available for IBM Z and IBM LinuxONE and recently Red Hat released OpenShift 4.5 on the platform. This brings together the cloud-native world of containers and Kubernetes with the security, scalability and reliability features of IBM enterprise servers.

◉ Cloud Pak for Applications 4.2: The latest cloud-native application development tools and languages are available on IBM Z, designed to simplify life for developers, operations and architects. Now you can bring new applications to market quicker, leveraging IBM Z’s scale, security, and resiliency.

◉ Expanded access to IBM z/OS Container Extensions (zCX) enables clients to deploy a large ecosystem of open source and Linux on IBM Z applications on their native z/OS environment without requiring a separate Linux server (IFL). The latest open source tools, NoSQL databases, analytics frameworks, and application servers are all now easily accessible.

◉ IBM Secure Execution for Linux extends confidential computing to new heights through the implementation of a scalable trusted execution environment (TEE) on LinuxONE, which allows organizations to further protect and isolate virtual machines.

◉ Red Hat Ansible Certified Content for IBM Z is designed to automate z/OS applications and IT infrastructure as part of a consistent overall automation strategy across different environments, using developer-friendly Ansible tools familiar to your teams.

Protect and keep your data private


Customers need to protect not only the security of their data but also its confidentiality as it travels throughout the enterprise. Pervasive encryption was the first step towards enabling extensive encryption of data in-flight and at-rest, simplifying data protection while helping to reduce the costs associated with compliance. In addition, IBM offers protection of data privacy as data travels from your system of record to distributed and hybrid cloud environments.

◉ IBM Data Privacy Passports V1.0.1 now supports additional enforcement techniques, providing users with more options to access protected data.

◉ Cryptographic scale is a must-have in a cloud service environment to support a high volume of tenants. IBM z15 and LinuxONE III can now support up to 60 crypto hardware security modules (HSMs) and more domains, allowing for over 5100 virtual highly secured HSMs for ultimate scalability. Similarly, IBM z15 T02 and LinuxONE III LT2 can support up to 40 HSMs, for 1600 virtual HSMs.

Cyber resiliency lets you run with confidence


Clients realize that to protect their business, they must often fend off increasingly sophisticated threats, recover quickly from downtime, and meet unforeseen spikes in demand — all while delivering competitive service levels. IT resiliency gives clients the ability to adapt to planned or unplanned events while keeping services running continuously. System Recovery Boost now delivers new recovery process boosts to address a range of sysplex recovery processes, including sysplex partitioning, coupling facility structure and coupling facility data sharing member recovery, and HyperSwap recovery. With these enhancements, clients can expedite the return to normal operations and catch up on workload backlog after a sysplex event.

Flexible computing to make your life easier


New enhancements accelerate critical workloads and provide additional data center efficiencies:

◉ A new hardware accelerator for sort functions using a CPU co-processor called the Integrated Accelerator for Z Sort is designed to reduce elapsed time for critical sort workloads during batch processing and help improve Db2 database reorganization time.

◉ New IBM z15 T02 flexible physical configuration options set aside reserved space in the rack to integrate select storage devices such as IBM DS8910F and switches. Storage integration can help save space, which is perfect for clients who have smaller I/O configurations and can take advantage of running a smaller footprint.

The z15 and LinuxONE III continuous delivery process extends the innovations delivered earlier in the generation, enabling clients to exploit their IBM Z investments as they continue on their journey to the cloud. The security, resiliency and cloud-native capabilities of IBM z15 and LinuxONE III help clients to leverage their secure and reliable foundation for hybrid cloud, while also allowing them to respond to evolving business pressures — setting them up for success now and in the future.

Source: ibm.com

Thursday 6 August 2020

Pedal to the metal with System Recovery Boost


Driving a car in stop-and-go traffic requires the ability to recover from a standstill by quickly regaining speed to stay with the flow. Similarly, in IT, the ability to pivot resources and workloads to meet business objectives and fluctuating demand is critical. With a continually increasing focus on services as a differentiator, and changing regulations requiring companies to prove they can recover swiftly in the event of an outage, availability cannot take a back seat. Even planned maintenance can mean service-level disruption, which can have an impact on customer experience. Simply put, your systems cannot afford to be down at all.

The latest enhancements introduced with the new IBM z15 follow on IBM’s longstanding commitment to high availability and are designed to help organizations strengthen their resiliency. System Recovery Boost is one of these enhancements, and it’s a key capability for those customers who cannot afford to miss service levels, especially in an increasingly competitive market. Let’s take a closer look at the possibilities it brings to your business.

Speed your processing  


Customers aiming to deliver exceptional response times often worry about outages impacting online service delivery. IBM System Recovery Boost is designed to provide temporary processing capacity boosts in z/OS partitions after an episode of downtime, speeding up both shutdown and startup processing. For a fixed time period, it also delivers additional capacity to process the workload backlog that typically builds up during recovery processing. With System Recovery Boost, customers can apply that capacity to help achieve their pre-outage service levels up to twice as fast as would otherwise be the case. After the boost period ends, the z15 image reverts to its normal capacity.

System Recovery Boost uses the computing power of general purpose and zIIP processors to deliver additional capacity, pooling these resources to drive the additional capacity to help you recover your pre-shutdown, steady state processing. It also helps GDPS enable more efficient automation to reconfigure and recover your environment quickly, which can help clients when they are especially vulnerable to downtime.

For z15 T02 customers who often use sub-capacity IBM Z machines, System Recovery Boost can be particularly beneficial, as these processors will then run at full capacity speeds during the boost period–without requiring manual intervention.

Speed Sysplex Recovery


When an entire sysplex is recovering from disruption, there can be a short-term impact to service levels as the sysplex as a whole tries to achieve steady state. System Recovery Boost has recently been enhanced to deliver short-term “recovery process boost” acceleration during sysplex recovery events. This boosted processor capacity can help restore normal sysplex operations as quickly as possible and can also help provide capacity for workload “catch-up” after the recovery event. So, if a Db2 instance were to fail, for example, the surviving members of the sysplex would have to “pitch in” to perform recovery processing, which could impact the service delivery of the entire sysplex.

This is where System Recovery Boost steps in to add capacity to sysplex rebuilds, including sysplex partitioning, coupling facility structure recovery or data sharing member recovery, and HyperSwap recovery to help automate the restoration of the sysplex.

Try it out


Flexible consumption models and financing are key to maintaining business continuity. With z15, IBM introduced System Recovery Boost Upgrade, an optional subscription feature that provides z15 T01 clients the option of turning on additional zIIP engines for boost purposes. Most recently, IBM is offering System Recovery Boost Upgrade as a no-charge 90-day trial for new or existing z15 T01 customers who have not purchased System Recovery Boost Upgrade. The trial is available as a self-service download through Resource Link (feature codes and completed paperwork are required). System Recovery Boost can help reduce the impact of downtime as experienced by your internal and external customers. And in today’s 24/7 environment, there is just no time for IBM Z to take a rest.

Source: ibm.com

Tuesday 4 August 2020

IBM device paves the way for 5G to reach full potential

5G, the next evolution of wireless communication standards, is already here – but it’s not possible to use it all the time just yet. A portable device and software stack developed by IBM that works with the millimeter-wave band of 5G can change that. The development is a huge step towards enabling 5G-smartphone users to continuously enjoy ultra-high data rates of 1.5 Gbps, and telecommunication service providers to have a much greater overall network throughput.


The IBM Research SDPAR is 20x20cm and weighs just 2kg

To have 5G today, it’s not enough to just buy a 5G-compatible mobile device. Your provider must support the new standard, and you have to be in one of the cities, and in the parts of those cities, that have 5G coverage. And even if you do all that, as soon as you turn a corner, chances are you’ll lose it again – and you’re back to slower 4G.

As coverage expands, 5G-enabled IoT and personal devices will crowd urban spaces. This may result in network congestion due to the interference between devices. To enable multiple connections to coexist, earlier wireless technologies separated connections into two domains: time and frequency.

But millimeter-wave 5G allows separation of wireless links in a third domain – space – enabling the protection, or ‘shading’ of a device from unwanted signals (even those present at the same time and at the same frequency). Making the most out of this third domain is key to realizing the full throughput potential of 5G. This is why we have developed a programmable spatial filter to block all but one of the in-band signals that permeate our wireless space, thus steadily keeping the desired 5G connection.

Looking for a candle amidst bright city lights


Transmitting and receiving radio waves in all directions is how cellular communication has always worked, from 1G of the 1980s to today’s much faster 4G. Cellphones and base stations communicate much as light bulbs provide illumination, radiating all around, with no or limited directional capabilities. A mobile phone transmits wireless signals everywhere, with the cellular base station catching just a fraction of the energy the phone sends. And when the phone is receiving, the base station radiates in all directions too, to your entire neighborhood. The result: you receive just a tiny bit of that radiation, and everyone around you receives unwanted interference. It’s like talking in a crowded bar – there’s no way to communicate with only one person without others around hearing you.

Millimeter-wave 5G technology, though, is different. 5G-enabled devices and towers send energy in a specific direction, like a flashlight directs light. And it’s possible to control the beam electronically, with algorithms pointing it exactly where it needs to point. When receiving, 5G phones should be able to block out unwanted signals, electronically shading the device – a process called nulling.
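
To make beam steering and shading concrete, here is a small, textbook-style sketch (not the SDPAR's algorithm) for a uniform linear array: per-element phase weights point the main beam toward a desired angle, and evaluating the response toward an interferer's angle shows how strongly that direction is attenuated. The element count, spacing and angles are illustrative assumptions.

import numpy as np

N = 16                 # number of antenna elements
d = 0.5                # element spacing in wavelengths (half-wavelength)

def steering_vector(theta_deg):
    """Per-element phase progression for a plane wave arriving from angle theta."""
    theta = np.radians(theta_deg)
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * d * n * np.sin(theta))

def array_gain(weights, theta_deg):
    """Normalized power the array collects from direction theta."""
    response = np.vdot(weights, steering_vector(theta_deg))
    return np.abs(response) ** 2 / N ** 2

if __name__ == "__main__":
    weights = steering_vector(20.0)            # steer the beam toward +20 degrees
    for angle in (20.0, -35.0):                # desired signal vs. an interferer
        print(f"gain toward {angle:+.0f} deg: {array_gain(weights, angle):.3f}")

Running it shows near-unity gain toward the steered direction and only a small fraction of that toward the off-beam angle, which is the "shading" described here.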

In 2017, together with Ericsson, we developed award-winning chips and antenna-in-package designs (28-GHz phased array antenna module, an industry first) – 5G phased array hardware that produces beams and shades. Ericsson has since integrated the system in its latest base stations, currently on the market. And Verizon is deploying such directional base stations in the US.

Still, these systems don’t block the unwanted interference perfectly. It’s similar to looking up and shading your face, say, only on the left side. There are typically just a few dozen different options for beam directions or shading settings to sift through to pick the right one – typically 32 or 64 – not enough to achieve a perfect result. It’s like being in the city at night and looking for a candle far away amidst all the bright lights nearby. You’d have to go through about 10^180 options to account for every single shading scenario – more than the number of atoms in the universe.

IBM SDPAR to the rescue



We wanted to enable 5G research to go further. Enter SDPAR – a software defined phased array radio, a small portable device that can emulate a 5G-enabled base station or smartphone for research purposes. It weighs just 2kg, consumes less than 100W and can be powered with a small 12V adapter similar to those used for laptops. The device is easy to reconfigure and able to work with more than 10^180 beam settings – moving the beam in thousands of directions in a fraction of a second. It can also evaluate link quality in each of these directions, and automatically choose beam settings that provide good connectivity. It needs only two data connections, for beam control and data input/output, and can be operated from a standard laptop.

We have shown that it’s possible to create a very large number of shading options with ease through a high-level application program interface (API). SDPAR enables the development and real-world evaluation of algorithms to navigate through this vast configuration space. In a matter of seconds, the device determines the best directions to shade, rapidly creating specific shading and brightening of areas.

The device can be used for other tests in future, for example to explore and develop 5G mm-wave beamforming and beam-steering algorithms and hybrid beamforming algorithms (paper # Mo4A-2 in https://rfic-ieee.org/technical-program/technical-sessions?date=2020-06-22), perform 28-GHz over the air testing as well as 28-GHz channel characterization, and develop and test custom digital basebands. And it’s possible to use SDPAR beyond 5G, too – for example, to explore and develop radar algorithms for sensing and imaging (paper # Mo3A-1 in https://rfic-ieee.org/technical-program/technical-sessions?date=2020-06-22).

But looking in one direction and trying to shade everything else out is only one way of configuring the array. We can shade better, in specific directions where bright interferers are located. The same applies to forming a beam. SDPAR can access 10^180 different settings for beams and shades, creating complex shading patterns – to navigate this vast catalog of choices, we aim to use machine learning-based algorithms to quickly learn a scene and arrive at the best solution. Such AI-assisted beamforming techniques will enable us to take another leap toward fully harnessing the power of millimeter wave 5G – and allow smartphone users to consistently enjoy extremely fast data rates.

The new algorithms and AI-based techniques for 5G that can be developed with SDPAR are key examples of the emerging era of ‘acting on data at the source’ enabled by IBM’s solutions for 5G and edge computing.

Experiment


We set up an experiment at the IBM T. J. Watson Research Center to show how SDPAR and beamforming algorithms can overcome communication impediments in an interference-limited scenario.


SDPAR experimental setup in the IBM Research ThinkLab

We used three SDPARs: one configured as a transmitter, another as a receiver, and a third as an interferer. First, the interferer was turned off so that we could measure received energy (Fig. 1, left) and link quality (Fig. 1, right) for multiple choices of receiver beam direction.

With the interferer still off, we found that if we shade all directions but one, the energy comes primarily from the left of center, where the transmitter is radiating – shown by the region of brighter dots that represent higher received energy in Fig. 1. The link quality in different receiver pointing directions is shown on the right side of Fig. 1. The link was good not only when the receiver was pointed in the direction of the radiating transmitter but also when it was pointed in other directions; the receiver was able to decode the data from the signal bouncing off the walls and ceiling. In this test, we only looked in 70 directions to emulate how the best 5G solutions today might work. We then set up the interferer to shine a bright signal towards the receiver, making it difficult for the receiver to decode the transmitted data.


Fig. 1: It is very easy to form a beam indoors when there are no interferers

If we turn on the interferer, the situation changes. We now see energy (Fig. 2, left) coming both from the transmitter located towards the left and from the interferer located towards the right. The interferer is a lot stronger than the transmitter and overpowers the receiver. This is clear when we look at the link quality (Fig. 2, right), which is poor in all 70 directions that the SDPAR was configured to try in this case. Unfortunately, current 5G solutions are still vulnerable to some of the interference challenges they were designed to overcome.


Fig. 2: It is difficult to form a beam in the presence of interferers if we can only select from a small catalog of beam configuration options.

Next, we relied on the large control space of our SDPAR to evaluate the link quality in thousands of directions and try to see if there are some directions that allow us to form a link even in the presence of a strong interferer. The results for more than 20,000 directions are shown in Fig. 3.

On the right side of Fig. 3, one can see a few directions where we are able to achieve good link quality. These directions were completely missed earlier in the search through only 70 catalog options.


Fig. 3: Good beam directions exist if we can search through a larger catalog

Is it possible to find the best direction without having to search through every single option? Our early algorithms look promising (Fig. 4): we use an optimizer to find a direction, among tens of thousands, that achieves a good link while looking in as few directions as possible.


Fig. 4: Using an optimizer, it is possible to find the best direction to look at without needing to go through all the directions.
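
As a hedged sketch of what such an optimizer could look like, the snippet below performs a coarse-to-fine search over candidate beam directions: it probes a sparse grid first, then refines around the best coarse result, so it needs far fewer measurements than an exhaustive sweep. The link-quality function is a synthetic stand-in for a real over-the-air measurement, not SDPAR firmware, and all numbers are illustrative.

import math

def measure_link_quality(angle_deg):
    """Synthetic link quality: a lobe near +14 degrees plus weak reflections."""
    return math.exp(-((angle_deg - 14.0) / 6.0) ** 2) + 0.1 * math.cos(angle_deg / 7.0)

def coarse_to_fine(lo=-60.0, hi=60.0, coarse_step=10.0, fine_step=1.0):
    probes = 0
    best_angle, best_q = lo, float("-inf")
    # Pass 1: sparse sweep across the whole field of view.
    angle = lo
    while angle <= hi:
        q = measure_link_quality(angle); probes += 1
        if q > best_q:
            best_angle, best_q = angle, q
        angle += coarse_step
    # Pass 2: dense sweep only in the neighborhood of the coarse winner.
    angle = best_angle - coarse_step
    while angle <= best_angle + coarse_step:
        q = measure_link_quality(angle); probes += 1
        if q > best_q:
            best_angle, best_q = angle, q
        angle += fine_step
    return best_angle, best_q, probes

if __name__ == "__main__":
    angle, quality, probes = coarse_to_fine()
    print(f"best direction ~{angle:.0f} deg (quality {quality:.2f}) after {probes} probes")

More sophisticated approaches, such as the machine-learning-assisted beamforming mentioned above, would replace this fixed refinement schedule with a model that learns where to probe next.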

Source: ibm.com

Saturday 1 August 2020

What an unexpected crisis teaches us about the role and future of IT


Our mission in the IBM Systems Client Experience Centers — to help organizations with their pressing IT challenges — gives us a rare window into the immediate needs clients are facing today. When the global pandemic hit, business and IT leaders were asking us a lot of questions: How do I ensure business continuity? How can I secure my IT with the increase of remote services? What will happen after the crisis, and how can we be prepared? What if this happens again? Do we have to work on a new IT model? What can we do better?

In the Montpellier Client Experience Center, we’re used to leading with empathy and genuinely addressing our clients’ needs with technical expertise. That has always been the case, but more than ever, we’re learning from each other, as many organizations face unprecedented challenges with supply chain resiliency and business continuity, cybersecurity, and maintaining agility and flexibility in the face of the global pandemic and its effects. Here are some of the lessons we’re learning from recent discussions with clients across industries as they pivot to address a new reality.

Preparing for the unexpected


Pressures on business continuity have intensified this year. If you operate in an essential industry — such as healthcare or food supply — you’re facing unprecedented demand while striving to protect your workforce. And all sectors are grappling with the difficulties of keeping business running while transitioning to digital revenue streams. If COVID-19 is teaching us anything, it’s that unexpected crises can happen, and being prepared for unlikely events makes good business sense.

Companies must also be ready to adapt to changes in people’s behavior — both in the short and long term. In some ways, unexpected events can accelerate changes that were already in motion, and in others they can rewrite the rules completely. For example, the pandemic is fast-tracking the move to cashless payments. People are shopping online more, and the convenience may encourage them to continue to do so long after COVID. As a result, enabling continuity for digital services is critical for organizations.

Preparing for the unexpected also means having a reliable and automated security strategy. Unfortunately, cybercriminals take advantage during times of crisis to attack companies when they’re vulnerable. Consider the central administration for a hospital in Paris, which was targeted while it was dealing with emergencies. Building in security features at every layer of your infrastructure is essential.

Feeling the pinch


Businesses are likely to see their IT budgets come under greater scrutiny than before, with an emphasis on cost-cutting and efficiency. It’s important, though, to think of IT economics over the long term rather than just the upfront costs of a technology.

Business resilience includes processes, logistics, provisioning and people. Technology that’s able to hyper-consolidate workloads would allow you to save on data center resource consumption and footprint, both of which lead to cost savings. When thinking about the cost of your server technology, it’s important to consider consolidation, operability and reduced complexity. Is this server able to deliver large-scale Linux environments and operate millions of containers through an open, secured environment that can sustain hybrid multicloud architectures?

Many companies were already moving towards more automation, more control and more options for empowering system administrators to react to a problem. The pandemic will convince those who were delaying such a move to make resilience a higher priority. No two crises are identical, and the best plan is to have multiple alternatives to address multiple potential risk scenarios.

Looking to the future


One thing is for sure: IT will play a key role in shaping the world, just as it has for decades. AI is placing new demands on computing power, since it requires huge amounts of processing capacity to build and train models. Enter quantum, which is ideal for tackling multi-parameter problems. Quantum computing will likely contribute to overcoming some of the most pressing industry challenges in the future. For example, in the pharmaceutical industry, quantum could enable chemical simulations that help with the discovery of new drugs, better predict protein structures and identify the risks of a disease or its spread. The resolution of optimization problems could help optimize the chains of distribution of drugs, and the use of AI could speed up diagnoses and help us analyze genetic data more precisely.

Data centers of the future will be equipped with binary, biologically inspired and quantum accelerators. Like an orchestra conductor, hybrid cloud will make it possible for these systems to operate harmoniously thanks to a layer of security and intelligent automation.

The IT industry has been in a significant transformation driven by data, AI and cloud, and recent events have demonstrated how essential IT is to keeping businesses running today and designing architectures for tomorrow. The unexpected teaches us that by bringing our talents and technologies together, we can find a faster route to IT innovation and navigate toward the future.

In the IBM Systems Client Experience Centers, we’re grateful to engage with clients and partners across a wide range of industries. We roll up our sleeves together to solve challenges by designing and delivering creative and timely solutions. Our mission gives us a window into what’s happening with IT across industries. We’d be pleased to help you innovate and elevate the value of your IT infrastructure.

Source: ibm.com