Tuesday 30 June 2020

3 ways to eliminate the pain of manual transactions with your smaller partners

The global pandemic has shown us that supplier collaboration and innovation have never been more critical – as is the need for trading partner networks to be more fluid. Companies are augmenting and switching suppliers at a faster pace in response to geographic boundaries imposed by the virus, workforce disruptions and dramatic demand mix changes. When agility and resiliency are key to survival during extended periods of volatility, you can’t continue to live with manual processes to onboard and transact with trading partners. The pain of error-prone manual data processing and inefficient, paper-based communications – fax, email and physical documents – is too great, and in some jurisdictions these methods are becoming obsolete.

To help you digitize and automate B2B transactions with every trading partner – including smaller partners that lack the technology and expertise to support EDI transactions – we offer the following solutions.

◉ IBM Sterling Supplier Transaction Manager, formerly IBM Sterling Web Forms Supplier Community

◉ IBM Sterling Customer Transaction Manager, formerly IBM Sterling Web Forms Customer Community

◉ IBM Sterling Catalog Manager, formerly IBM Sterling Web Forms Supplier Catalog and Customer Catalog

Your partners only need internet access to start connecting and collaborating efficiently and cost-effectively with you. Here are three ways these solutions help you and your smaller suppliers and customers work better together:

1. Automate the onboarding process. Automate registration and information gathering for suppliers and customers through the powerful IBM Sterling Transaction Manager web portal. There’s no need to manually contact partners, form partner agreements and collect information.

2. Digitize and automate transactions. Create, exchange and view documents without asking your partners to change their way of doing business. Sterling Supplier Transaction Manager allows you to submit documents, such as a Purchase Order (P.O.) in EDI format, via the web portal to your suppliers, and converts them into a format they can view and use. They can create and send back P.O. acknowledgements, advanced shipping notices (ASNs) and other documents, either by uploading documents to the web portal or by entering the data directly into an online form. The solution ensures compliance with EDI and business rules and handles the document conversion so that supplier documents can be brought into your systems for viewing and processing without manual intervention. Use Sterling Customer Transaction Manager to adopt the same processes in reverse with your smaller customers and gain similar efficiencies. Smaller retailers can generate a P.O. more easily in a format your systems can use, which streamlines your sales process. A simplified sketch of this kind of EDI-to-web conversion appears after this list.

3. Share product information easily. With Sterling Catalog Manager, smaller suppliers can share their product catalogs and updates with you, uploading detailed product information such as models, configurations and prices through a web portal. The solution generates an EDI product catalog or other structured file for use by your enterprise’s ERP system. You can also make it easier for your smaller, non-EDI customers to order products from you by distributing curated product catalogs via the web portal, including business rules and the ordering process.
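
To make the conversion step concrete, here is a minimal, illustrative Python sketch of turning a fragment of an EDI 850 purchase order into a structure a web portal could render for a non-EDI supplier. The segment content is trimmed and the function is our own invention; a production translator such as Sterling Transaction Manager handles far more of the standard, including envelopes and compliance rules.

```python
# Illustrative only: a highly simplified EDI 850 (purchase order) parser.
# Real EDI translation covers many more segments and validation rules.

RAW_850 = (
    "ST*850*0001~"
    "BEG*00*SA*PO12345**20200630~"
    "PO1*1*100*EA*9.95**VP*SKU-001~"
)

def parse_850(raw):
    """Convert raw 850 segments into a dict a web form could display."""
    po = {"type": None, "number": None, "date": None, "lines": []}
    for segment in raw.strip("~").split("~"):
        fields = segment.split("*")
        if fields[0] == "ST":                  # transaction set header
            po["type"] = fields[1]
        elif fields[0] == "BEG":               # beginning segment: PO number and date
            po["number"], po["date"] = fields[3], fields[5]
        elif fields[0] == "PO1":               # line item: qty, unit, price, SKU
            po["lines"].append({
                "qty": int(fields[2]),
                "unit": fields[3],
                "price": float(fields[4]),
                "sku": fields[7],
            })
    return po

print(parse_850(RAW_850))
# {'type': '850', 'number': 'PO12345', 'date': '20200630',
#  'lines': [{'qty': 100, 'unit': 'EA', 'price': 9.95, 'sku': 'SKU-001'}]}
```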

As we move to the “next normal,” the ability to adapt to change, including churn and shifts in trading partner networks, will be part of every business’ strategy. Our focus at IBM is on supporting you as you accelerate your path to resiliency and agility. To help, we are offering eligible organizations special discounts on 12-month contracts for Sterling Transaction Manager.

Schedule a consultation to explore whether these solutions are a good fit for your organization and how to get started quickly.

Monday 29 June 2020

Navigate supply chain disruption with an inventory control tower

The past few months have shown us that what was considered “good enough” inventory management is no longer sufficient. Even as businesses strive to reopen and emerge smarter, inventory remains uncertain because supply, demand and transportation capacity are in flux. The challenges that supply chain and fulfillment leaders lived with and worked around every day are now amplified. For example:

◉ Inventory data is scattered across siloed systems in and outside of the organization, so teams don’t have the information they need, when they need it.

◉ Without tools to connect and correlate all inventory data and see the impact external events can have, it’s impossible to proactively address disruptions to inbound supply or a slowdown in inventory turnover, which amplifies the likelihood of losses from markdowns or leftover inventory.

◉ Manual, error-prone processes, like phone calls and email chains with teams across your ecosystem, aren’t efficient when you need to make the best inventory-related decisions quickly. Valuable employees can spend unnecessary cycles tracking down available inventory when their time could be better spent closing the sale, managing a customer relationship or providing care to a sick patient.

To help you more effectively manage inventory and build supply chain resilience, we are announcing the IBM Sterling Inventory Control Tower, a purpose-built control tower that you can tailor to meet your business needs. It correlates inventory data across siloed systems to provide insights into supply-demand imbalances and stock shortages. With key technologies like AI and machine learning, it can provide you with insights into the impact of external events that might cause disruptions and help you find alternate sources.

See a single, near real-time view of inventory.


Inventory is often siloed and distributed across your supply chain ecosystem. It may be stored in different internal systems, held by channel partners, in transit, or already committed. Sterling Inventory Control Tower functions as an integrated layer on top of these silos, so you don’t have to spend time and money unifying information from different systems and keeping it up to date. A personalized dashboard gives you a near real-time picture of inventory wherever it resides, so you can say “yes” to more customers. The more inventory you allow your customers to see, the more you can sell.

With accurate, scalable inventory views, you can meet peak-period demand and avoid over promising, losing sales, disappointing customers or consuming valuable employee time. Embedded AI enables natural language search, so you can ask questions like “How many face masks do I have across my hospital storerooms?” or “How many days of supply of ground beef do I have in stores in the northwest?” and get answers quickly. You’re not just managing inventory, but maximizing ROI with the insights you need to make decisions that reduce safety stock and carrying costs, and increase inventory turns.

Connect the data to predict disruptions.


In most supply chains it’s common to discover disruptions that cause inventory shortages or excesses after they’ve happened, leaving you with little time to mitigate business impact. AI-assisted insights connect and correlate internal and external data faster, alerting you to potential trouble spots before they happen, so you can plan and manage exceptions.

With accurate, near real-time available-to-promise inventory data, you gain confidence in your ability to meet customer promises. Out-of-the-box connectors for various data sources let you see the bigger picture and better predict the future – for example, if traffic congestion or a weather event today will impact inventory availability five days from now.
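
To illustrate the available-to-promise idea, here is a minimal Python sketch using the textbook formula: on-hand stock, plus inbound supply arriving within the promise window, minus demand already committed to open orders. The locations, quantities and function are hypothetical; a control tower computes this continuously across live data feeds rather than static dictionaries.

```python
# Minimal sketch of an available-to-promise (ATP) roll-up, assuming the
# standard formula: on-hand + inbound supply arriving within the promise
# window - demand already committed. All records below are illustrative.

from datetime import date

on_hand = {"store-nw-01": 120, "dc-west": 400}          # current stock by location
inbound = [("dc-west", 250, date(2020, 7, 3))]          # (location, qty, arrival date)
committed = {"store-nw-01": 30, "dc-west": 180}         # allocated to open orders

def atp(locations, promise_by):
    """ATP for one SKU across locations, honoring a promise date."""
    total = 0
    for loc in locations:
        supply = on_hand.get(loc, 0)
        supply += sum(qty for l, qty, eta in inbound
                      if l == loc and eta <= promise_by)
        total += supply - committed.get(loc, 0)
    return total

# 120 - 30 at the store, plus 400 + 250 - 180 at the DC = 560 promisable units
print(atp(["store-nw-01", "dc-west"], promise_by=date(2020, 7, 5)))   # 560
```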

Collaborate across teams and partners to improve outcomes.


Sometimes meeting customer expectations is not as simple as shifting inventory around – conversations have to happen, and trade-offs need to be made. Resolution rooms bring together all the right experts from across departments and trading partners to agree upon one version of the truth and use best practices from the past. AI brings clarity to help you better respond to unplanned events, and with embedded machine learning, best practices are refined and brought forward for everyone in the organization to use.

With insights and priorities informed by downstream impact, you can make better decisions based on financial outcomes with sensitivity to how they will affect the customer. For example, you can allocate face mask inventory based on customer type – a healthcare provider versus a retailer. When all relevant team members can see the same big picture to understand current inventory availability – what’s coming and when, what you can expedite and what you can’t – it’s easier to determine with confidence the best course of action and then take steps to solve problems.

Supply chain inventory control towers can be used by organizations across industries to gain real-time insights that help them see, manage and act on inventory more effectively to meet actual and forecasted demand. A few of my favorite examples:

◉ A mid-sized healthcare provider is using an inventory control tower solution to provide a single view into supply and demand gaps. They can forecast depletion rates by SKU and stock location and predict consumable item usage based on scheduled procedures. They’ve enabled better collaboration across teams and partners to help respond to unplanned events. As a result, hospitals can save millions in waste and expedited replenishment charges. What’s more, by digitizing inventory management to reduce manual tasks, they’re able to free up healthcare workers’ time to focus on providing better patient care.

◉ A national grocery chain is under pressure to provide shoppers what they need — where, when, and in the quantity they need – while achieving cost goals. This is especially challenging as more shoppers move toward online purchasing. Using the Inventory Control Tower, the grocery chain is able to get near real-time, SKU-level visibility into store inventory to ensure the availability of goods, prevent ‘holes’ on shelves, and keep walk-in customers satisfied.

◉ An automotive manufacturer operates two repair centers and works closely with a network of authorized service providers. One of the manufacturer’s toughest challenges is to manage the demand for short-cycle order parts and resolve discrepancies between parts requests and parts availability. With accurate, near real-time inventory visibility into service parts by SKU and stocking locations across ERP and other systems, they’re able to help ensure critical parts are in stock to meet customer expectations.

IBM Sterling Inventory Control Tower allows you to say goodbye to “good enough” inventory visibility and start delivering on more customer promises.

Saturday 27 June 2020

Four ways to get ready for Power Systems Virtual Servers in IBM Cloud

Are you investigating running your AIX or IBM i workloads in the cloud, but don’t know where to start? IBM Power Systems Virtual Servers in IBM Cloud provide significant value as an addition to your on-premises IBM Power Systems environment. Running AIX or IBM i workloads in the cloud makes it easy to use a pay-as-you-go model, handle seasonal bursts in computing demand without standing up hardware first, and transition from old hardware requiring expensive maintenance contracts.

If you’re intrigued by Power Systems Virtual Servers and wondering how to get ready for it, here are four tips for a smooth start with this new technology.

1. Know the solution


Some of the most common questions I get from clients are:

◉ How is IBM i or AIX running in the cloud?
◉ What does it look like?
◉ What is included, and what do I have access to?
◉ Where’s the documentation?

How is IBM i or AIX running in the cloud? AIX and IBM i workloads in Power Systems Virtual Servers are running in VMs (LPARs) on POWER9 processor-based hardware in IBM Cloud (existing POWER8 servers in some locations will be upgraded to POWER9). Those POWER9 servers are managed by the PowerVM hypervisor, virtualized with dual Virtual I/O Servers and NPIV-connected to Fibre Channel-attached storage. In other words, they’re using the same best practices as Power servers on-premises, but with the latest technologies (as of the time of this writing).

What does it look like? The best way to get an introduction is in this video. I recommend watching the whole recording, but the demo starts about 30 minutes in.

What’s included, and what do I have access to? Power Systems Virtual Servers are an infrastructure-as-a-service offering. What’s included is uptime of the underlying infrastructure and the operating system (IBM i or AIX) in each VM/LPAR with certain licensed program products. What you have access to is the IBM Cloud UI and APIs for managing the VMs and storage, as well as full access to the operating system within each VM. There’s no direct access to the Hardware Management Console (HMC), Virtual I/O Server or storage array.

Where’s the documentation? Start here and peruse all the topics in the “Learn” section on the left. Then proceed to the AIX- or IBM i-specific sections or the FAQ further down for specific answers.

2. Know your workloads


Consider the business criticality of your workloads, legal requirements and recovery point and time objectives (RPO and RTO). You want to start with applications that score lower in those categories. But let’s dig a little deeper. Power Systems Virtual Servers have minimum operating system (OS) levels for both IBM i and AIX (details here). You should create an inventory of likely migration candidates from a business standpoint and then compare them against the OS requirements. For those VMs/LPARs that don’t meet the requirements, it’s worth investigating the effect an OS upgrade would have on the applications running in those VMs. Last, consider the downtime available for each VM, as some downtime will be required to save it on premises, transfer it to the cloud and restore it there.
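
As a simple illustration of that inventory exercise, the Python sketch below cross-checks a list of LPARs against minimum OS levels. The MIN_LEVELS values are placeholders, not the actual requirements; always confirm the current minimums in the official documentation referenced above before planning a migration.

```python
# Illustrative sketch: cross-check an LPAR inventory against minimum OS
# levels for Power Systems Virtual Servers. MIN_LEVELS below are
# placeholders - confirm the real requirements in the official docs.

MIN_LEVELS = {"AIX": (7, 1), "IBM i": (7, 2)}   # placeholder minimums

lpars = [
    {"name": "prod-erp", "os": "IBM i", "level": (7, 3), "criticality": "high"},
    {"name": "dev-web",  "os": "AIX",   "level": (6, 1), "criticality": "low"},
    {"name": "test-db",  "os": "AIX",   "level": (7, 2), "criticality": "low"},
]

def migration_candidates(lpars):
    """Split LPARs into cloud-ready and needs-OS-upgrade buckets."""
    ready, needs_upgrade = [], []
    for lpar in lpars:
        minimum = MIN_LEVELS[lpar["os"]]
        (ready if lpar["level"] >= minimum else needs_upgrade).append(lpar["name"])
    return ready, needs_upgrade

ready, upgrade = migration_candidates(lpars)
print("Ready to migrate:", ready)        # ['prod-erp', 'test-db']
print("Needs OS upgrade:", upgrade)      # ['dev-web']
```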

3. What about backups and HA/DR?


Appropriate backup and high availability/disaster recovery (HA/DR) options are must-haves for most AIX and IBM i workloads. Because backups are usually performed from within the OS, most of the options that exist on-premises are present in Power Systems Virtual Servers as well. However, the overall save and restore process has some differences and involves additional configuration and a bit of a learning curve. Keep in mind that physical tape drives or libraries are not available in the cloud. From an HA/DR standpoint, a key difference is that storage-based replication, which is the norm for many Power Systems clients, is not available with Power Systems Virtual Servers. HA/DR options for both IBM i and AIX involve logical or OS-based replication.

4. We have to talk about the network


How will these AIX or IBM i applications in the cloud be accessed? Do you need console-, interactive user- or application-level communication between your on-premises and Power Systems Virtual Server environments? Are you planning on using replication between your data center and the cloud for DR? These are all questions that will influence the network design and cost of your Power Systems Virtual Server solution. Multiple connectivity options exist. Finally, consider any non-Power Systems Virtual Server services or workloads you might want to use or have running in IBM Cloud. Having a complete multiplatform business solution in the cloud is one of the great benefits of Power Systems Virtual Servers, but it should also factor into your network planning.

Running IBM i and AIX in the IBM Cloud enables significant business opportunities for additional agility and growth. At the same time, it’s a new environment that carries a learning curve and requires thoughtful planning to use optimally.

Source: ibm.com

Thursday 25 June 2020

IBM and Verum Capital help businesses leverage blockchain

IBM and Verum Capital, a Swiss-based blockchain advisory boutique, are collaborating to help businesses accelerate the use of digital assets by scaling and securing blockchain solutions.

The rise of the digital economy


According to Research and Markets, the global digital asset management market is growing at 16.5 percent to reach $8.5 billion by 2025. In Switzerland, two banks are now fully licensed to offer blockchain-based financial instruments and traditional banks are increasingly offering crypto services. In the DACH region, more than 50 banks have applied for a license to offer similar services.

More and more, leading financial institutions are bringing traditional investment products onto blockchain. This shift toward decentralized finance is important for the future of the digital economy. By offering blockchain-based financial products, financial institutions can grow business through inclusion. At the individual level, blockchain-based services can support the unbanked but also the micro-investor who can gain access to the rate of return typically reserved for institutional investors with large sums of money. At the organizational level, financial products that are built using the blockchain become affordable and flexible enough to serve the SME segment. Now small businesses can also benefit from bond issuance on blockchain, for example.

Digitizing assets


Traditional company shares, bonds, loans or non-traditional and non-bankable assets such as artworks, real estate properties, private securities, even gold can be digitized, tokenized, and traded on a variety of platforms. An advantage of tokenizing assets is that it creates a fractional ownership scheme for physical assets that had previously been considered indivisible. The underlying asset can then be offered to a larger number of buyers, who benefit from increased liquidity on the secondary market. Tokenization allows the digital economy to become much more efficient, transparent and liquid, while it also becomes a more inclusive marketplace.
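
As a toy illustration of the fractional-ownership arithmetic, the Python sketch below splits a hypothetical asset into tokens and records a micro-investor’s purchase. Real tokenization is implemented in smart contracts on a blockchain; the class, names and figures here are invented purely to show the idea.

```python
# Toy illustration of fractional ownership via tokenization. A real
# implementation lives in a smart contract on a blockchain; this sketch
# only shows how an indivisible asset becomes tradable fractions.
# All figures are hypothetical.

class TokenizedAsset:
    def __init__(self, name, value, token_supply):
        self.name = name
        self.value = value                    # total asset value
        self.token_supply = token_supply      # number of fractions issued
        self.holdings = {}                    # owner -> token count

    @property
    def token_price(self):
        return self.value / self.token_supply

    def buy(self, owner, tokens):
        """Record a purchase and return the cost."""
        issued = sum(self.holdings.values())
        if issued + tokens > self.token_supply:
            raise ValueError("not enough tokens remaining")
        self.holdings[owner] = self.holdings.get(owner, 0) + tokens
        return tokens * self.token_price

# A $2M property split into 100,000 tokens: a micro-investor can now
# hold a $200 stake in an asset that was previously indivisible.
building = TokenizedAsset("Zurich office building", 2_000_000, 100_000)
print(building.buy("micro-investor", 10))   # 200.0
```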

Key success factors


Storing and trading digital assets securely using blockchain is a step forward in unleashing the transformative potential of the digital economy.

There are two key success factors for the widespread adoption of digital assets: hypercritical availability and scalability, combined with industry-leading security.

Until now, this trade-off has presented a significant obstacle. Users had to choose between availability and security when handling their digital assets. IBM is solving this trade-off for financial services providers through cutting-edge digital asset custody that makes the physical location of the asset irrelevant to its availability. The technology has incredible potential to improve digital asset availability without sacrificing security.

Verum Capital and IBM — strong synergies


IBM’s digital asset custody solutions are the basis for the collaboration with Verum Capital. Together, Verum Capital and IBM can offer clients highly secure and scalable digital asset infrastructure solutions, as well as the essential advisory services that will enable them to develop new business opportunities using blockchain.

As the leading blockchain advisory, Verum Capital is joining IBM’s unique ecosystem to ensure that these institutional-grade digital asset services can best be used in the financial services sector.

Since 1977, tens of trillions of dollars of wealth have been secured using hardware security modules (HSMs) invented by IBM. Featuring hardware security modules, IBM LinuxONE servers, for example, enable pervasive encryption of all application data in-flight and at-rest. They run IBM Hyper Protect Virtual Servers, a solution that provides a secure computing environment for highly sensitive data.

Once again, IBM is at the forefront of a technological solution for wealth storage and transaction, and together IBM and Verum Capital are the perfect fit for the positioning of this next-generation technology.

Based on our extensive experience working on strategic blockchain projects within the financial sector, we know how incredibly valuable it is for clients to have the choice to deploy a solution on-premises, as part of a private cloud environment, or as a service, allowing digital asset and blockchain firms to scale with the growing demand.

The essential infrastructure that IBM continues to develop is a very valuable building block for our team of blockchain consultants. When we advise our clients, develop pilot projects, and manage full-scale implementations of blockchain technology, we choose to work with IBM’s infrastructure and products because businesses are very comfortable relying on IBM.

Source: ibm.com

Monday 22 June 2020

Engineering Ecosystems Architectures

Looking back, it’s possible to remember the rise in interest just before Objects and Object-Oriented Design were formalised. At the time many professionals were thinking along similar lines and one or two were applying their ideas under other names, but, in the final analysis, the breakthrough came down to a single act of clarity and courage, when the key concepts were summarised and labelled in an acceptable way. The same was true with Enterprise Architecture. By the time the discipline had its name, its core ideas were already in circulation, but practice was not truly established until the name was used in anger. So, in many ways, naming was the breakthrough and not any major change in ideas or substance.

With hindsight it’s now clear that such naming events have little to do with changes in approaches to problem solving, but lots to do with the increasing scale and complexity of the challenges that drive them. Both are side effects of general technical progress. As we become more proficient with a tool or technology, we generally learn how to build bigger and better things with it. Be they bridges, skyscrapers or IT solutions, the outcomes are largely accepted as business as usual. What really matters is that the tools and techniques used for problem solving evolve in kind as demand moves onward and upward. For that reason, when a step change in demand comes along it is normally accompanied by an equivalent advance in practice, followed by some name change in recognition. For instance, IT Architects talk of “components”, whereas Enterprise Architects talk of “systems”. Both are comparable in terms of architectural practice but differ in terms of the scale and abstraction levels they address. In this way, IT Architecture focuses on solutioning at the systems level, whereas Enterprise Architecture is all about systems of systems.

Interestingly, the increases in scale and complexity responsible for Enterprise Architecture were themselves a consequence of advances in network technology, as platforms like the Internet and World Wide Web catalysed progress. This trend has continued with movements like social media having a profound impact on business. So much so that today conventional definitions of “the enterprise” are being challenged, as businesses enhance their capabilities with resources from outside their organisation. This is driving a need for solutions that reach beyond traditional enterprise networks and out into the broader digital ether. We are therefore seeing the next step change to face IT Architecture and again the profession is awash with emergent practice. We even have a name for it already. When networks spread beyond the influence of any single entity or system we generally call them an “ecosystem”. All that remains is for the realisation to dawn: the era of Ecosystems Architecture is upon us.

Enterprise Architects have been familiar with the idea of ecosystems for some time and the pressing need to describe networks of systems working towards shared goals. This is no longer a matter of debate. What is still up for discussion, though, is how we establish credibility in this new era. Past lessons point to some likely answers, and common sense reminds us that tried and tested approaches tend to hold value when modified to embrace change. Nevertheless, this time around the change involved brings with it at least one significant difference. Whereas previously, building things bigger and better typically involved more cost, time and effort, advances in information technology have produced the opposite effect. Today, users expect their IT to be delivered ever quicker and at lower cost, and in response professional practice has evolved to become ever more agile and cost efficient. So, given that previous generations of problem-solving techniques have relied less on being reactive to changing demand, perhaps the jump to ecosystems architecture presents more of a challenge than at first sight?

A lightweight framework of pragmatic architectural best practices is therefore considered fit for purpose. At its core it provides a surprisingly elegant set of tenets that allow multiple delivery styles to mix and match without conflict:

1. Coverage: Ensure that the portfolio of connected parts is sufficiently covered and understood across all design and delivery activities. Depth is not as critical as breadth, but all parts (networks, systems, components, etc) must be identified and adequately positioned within any broader architecture.

2. Skills & Leadership: Ensure that adequate skill levels are available to address all identified needs. This applies across all known contexts and abstraction levels, and should safeguard technical direction across the entire systems lifecycle. Coverage should be at the significant part level (every justifiable part deserves a capable technical owner across its life).

3. Semantics: Create a scheme for naming parts at different abstraction levels and stick to it. Established convention, such as “network:arbitraryName”, “system:arbitraryName”, “component:arbitraryName”, etc, is perfectly acceptable (a minimal validation sketch follows this list).

4. Architectural Decisions: Ensure that an accurate record of technical choices is maintained and accessible. This is to ensure the viability of future decisions and help build a body of knowledge in support of future practice.

5. Operation: Ensure that justifiable method(s) and tooling are applied. These should provide guidance when needed and support essential feedback loops, such as decision making and knowledge harvesting.

6. Building Codes & Regulations: Ensure that architectural decision makers follow relevant guidelines, rules, and standards. These must comply with all applicable constraints, like external legislation, ethical consideration, security pressures and so on.
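
As promised under tenet 3, here is a minimal Python sketch of enforcing a “level:arbitraryName” convention. The set of permitted abstraction levels is an assumption; extend it to match your own ecosystem’s vocabulary.

```python
# Minimal sketch of enforcing the "level:arbitraryName" convention from
# tenet 3. The permitted abstraction levels below are an assumption -
# extend the set to match your own ecosystem's vocabulary.

import re

LEVELS = {"ecosystem", "network", "system", "component"}   # assumed levels
NAME_RE = re.compile(r"^([a-z]+):([A-Za-z][A-Za-z0-9]*)$")

def parse_part_name(qualified):
    """Validate a qualified part name and return (level, name)."""
    match = NAME_RE.match(qualified)
    if not match or match.group(1) not in LEVELS:
        raise ValueError("not a valid part name: %r" % qualified)
    return match.group(1), match.group(2)

print(parse_part_name("system:paymentGateway"))   # ('system', 'paymentGateway')
print(parse_part_name("network:partnerEDI"))      # ('network', 'partnerEDI')
```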

To understand the value these tenets might offer going forward, it’s perhaps worthwhile finally pausing to consider what IT architectures at the ecosystems level actually look like. The following list came out of some recent work at IBM and is not intended to be exhaustive. Nevertheless, it hopefully provides a taster of what lies ahead:

1. Functionality: Participating parts must be able to do something that contributes to all, or part, of the ecosystem

2. Communication: Participating parts must be able to communicate and co-ordinate resources with other parts, either within their immediate context (ecosystem, sub-network, system, etc) or outside of it (system, wider-network, complete-ecosystem, environment, etc). Parts must possess amplification and attenuation capabilities to suit

3. Controls: It must be possible to control the ways in which participating parts group (structure), communicate and consume resources in a common or mutually agreed manner

4. Awareness: An ecosystem must support instrumentation and observation at both participating part and higher-group levels. This includes implicit awareness of its environment, given that this can be considered as both a unit and group participant in the ecosystem itself

5. Regulation: An ecosystem must support constraints at the group level that guide it to set policy and direction

6. Adaptability: Participating parts and higher-level groups must be able to adapt to variance within the ecosystem and its environment

7. Support and Influence: Participating elements must be able to use resources supplied by other participating elements, groups and/or the environment itself

8. Self-Organisation: Structures must allow self-similarity (recursion/fractal) at all scales and abstractions

Source: ibm.com

Saturday 20 June 2020

Data protection for the modern world

The need for effective modern data protection and cyber resilience has never been greater.

In the past few months there has been an escalation of cybercrime and cyber intrusion attempts. Plus, cybersecurity priorities changed overnight and traditional flows of data have been dramatically altered, opening new avenues for cyber threats and increasing the challenges for data security and business resilience professionals.

These reasons highlight the importance of modern data protection from IBM. Today, IBM Storage is announcing several new enhancements to its modern data protection portfolio. These new data protection capabilities enhance workload support in hybrid multicloud environments, including expanded support for cloud-based workloads, greater Kubernetes awareness and improved data retention on tape and object storage to improve cyber resiliency.

In particular, this release of IBM Spectrum® Protect Plus offers a wide range of new benefits:

◉ Enhanced data protection for Amazon Web Services (AWS). In hybrid cloud environments, enterprises can use IBM Spectrum Protect Plus deployed on premises to manage AWS EC2 EBS snapshots, or customers can deploy IBM Spectrum Protect Plus on AWS for an “all-in-the-cloud experience”. AWS workload support now includes EC2 instances, VMware virtual machines, databases, and Microsoft® Exchange.

◉ Enhanced protection for containers. This release adds advanced SLA management, enabling policy definition and assignment from the IBM Spectrum Protect Plus user interface. Developers use policies to schedule persistent volume snapshots, replication to secondary sites and copying of data to object storage or IBM Spectrum Protect for secure long-term data retention.

The new release also enables developers to back up and recover logical persistent volume groupings using Kubernetes labels. This capability is key because applications built using containers are logical groups of multiple components. For example, an application may have a MongoDB container, a web service container and middleware containers. If these application components share a common label, users can use the Kubernetes label feature to select the logical application grouping instead of picking individual volumes (persistent volumes) that make up the application. Developers can also back up and recover logical persistent volumes associated with Kubernetes namespaces.
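
To make the label-based grouping idea concrete, here is an illustrative sketch using the official Kubernetes Python client to list the persistent volume claims that share an application label. This is not the Spectrum Protect Plus API; it only shows how one label can select a logical application’s volumes instead of naming each volume individually. The label value is hypothetical.

```python
# Illustrative sketch with the official Kubernetes Python client: select
# every persistent volume claim that shares an application label - the
# same grouping idea described above. Not the Spectrum Protect Plus API.

from kubernetes import client, config

def pvcs_for_app(label_selector):
    """Return (namespace, name) of every PVC matching the label selector."""
    config.load_kube_config()                 # uses your local kubeconfig
    core = client.CoreV1Api()
    claims = core.list_persistent_volume_claim_for_all_namespaces(
        label_selector=label_selector)
    return [(c.metadata.namespace, c.metadata.name) for c in claims.items]

# One label covers the MongoDB, web service and middleware volumes that
# together make up the application.
print(pvcs_for_app("app=storefront"))
```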

◉ Modern data protection for Windows file systems. With the Spectrum Protect Plus agentless technology, users can now back up, recover, reuse, and retain data in Windows file systems, including file-level backup and recovery of file systems on physical or virtualized servers.

In addition to the update to IBM Spectrum Protect Plus, IBM is also announcing enhancements to IBM Spectrum Protect and IBM Spectrum Copy Data Management:

◉ The latest IBM Spectrum Protect release includes retention set to tape, which is the ability to efficiently copy and store data on tape for cost-effective and secure “air-gapped” long-term data retention. Spectrum Protect also now enables users to back up the IBM Spectrum Protect database directly to object storage, including IBM Cloud Object Storage, AWS S3 and Microsoft Azure, as well as other supported S3 targets.

◉ IBM Spectrum Copy Data Management’s new release helps users simplify and improve SAP HANA Point in Time (PIT) recovery with native log backups using IBM Spectrum CDM and IBM Spectrum Protect for ERP. Prior to this release, users could recover data using hourly snapshots. With the new log support, recoveries can be much more granular.

Additionally, IBM is announcing a new licensing option for IBM Spectrum Protect Suite: Committed Term License. Like monthly subscription licenses, committed term licenses grant clients the right to use licenses in a cloud-centric model. Committed term licenses can be procured for a minimum of 12 months and up to a 5-year term. This new model will provide all of our clients with a lower-entry cost option, a flat price per TB and increased flexibility with no vendor lock-in after the term.

More than ever before, data protection and cyber resilience for vital data assets is on the minds and the agendas of business and IT professionals. As more applications and workloads move to containers and the cloud, enterprises need data protection and storage efficiency solutions with the technology and sophistication needed to enable the future of modern business.

Thursday 18 June 2020

How to manage complexity and realize the value of big data

What would the world look like if every business decision was well documented and driven by big data? Rapid technical advancements in advanced analytics, AI and blockchain suggest we might not have to wait too long to find out.

With more than 150 zettabytes (150 trillion gigabytes) of data that will require analysis by 2025, it is a critical time for businesses to adopt and enhance big data solutions in order to meet the challenge with a competitive edge.

Studies show that more than 95% of businesses face some kind of need to manage unstructured data. The term “big data” refers to the processing of massive amounts of data and applying analytics to deliver actionable insights.

Advancements in artificial intelligence have helped big data technology progress beyond simply performing traditional hypothesis and query analytics. Now, the technology can actually explore the data, detect trends, make predictions and turn unconscious inferences into explicit knowledge that businesses can leverage to make better decisions.

AI opportunities


Big data presents an opportunity for machine learning and other AI disciplines. A few years ago, a Forbes study found that 2.5 quintillion bytes of data were created each day. Digital innovations such as the Internet of Things have contributed to this unprecedented surge of data in recent years.

Since AI thrives on an abundance of data, it can help organizations gain new insights and personalized recommendations derived from unbiased IT data. For example, a leading open-source framework like TensorFlow can help improve the abilities of virtual agents by analyzing interactions in real time and helping virtual agents answer queries quickly and conversationally.

Big data + open source


Open-source software, which is available for free and highly customizable, plays an important role in big data. The technologies have been connected for years, used together to build customer behavior models for retail, anti-money-laundering initiatives for financial enterprises, fraud-detection protocols for insurance companies and even predictive maintenance for utilities providers.

The framework most commonly associated with big data is Apache Hadoop. For years, Apache Hadoop has made it possible for businesses to build big data infrastructures and perform parallel processing, using commodity hardware and lowering costs. That said, big data is far more than Hadoop alone.

As the age of digital transformation continues to surge, velocity and real-time capabilities have emerged as prerequisites for business success. To meet these new requirements, Apache Spark—a highly versatile, open-source cluster-computing framework—is often implemented alongside Hadoop to improve performance and speed.
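
As a brief illustration of that Spark-alongside-Hadoop pattern, the PySpark sketch below reads data that already lives in HDFS and aggregates it in memory. The path and column names are illustrative.

```python
# Minimal PySpark sketch of the Spark-alongside-Hadoop pattern: Spark
# provides fast, in-memory processing over data that already lives in
# HDFS. The path and column names are illustrative.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("order-analytics")
         .getOrCreate())

# Read data stored on the Hadoop cluster, then aggregate in memory.
orders = spark.read.parquet("hdfs:///data/retail/orders")
daily = (orders
         .groupBy("order_date")
         .agg(F.sum("amount").alias("revenue"),
              F.count("*").alias("orders")))
daily.orderBy("order_date").show()

spark.stop()
```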

Changing consumer habits are also driving a shift in the data mix, increasing the amount of unstructured data such as text, audio, video, weather, geographic data and more. The traditional data warehouse is evolving into data lakes and integrating data from Structured Query Language (SQL) and non-SQL databases, as well as multiple data types.

Don’t face the complexity alone


Open-source technology continues to dominate the IT ecosystem, largely due to its ability to innovate and quickly solve problems. This doesn’t mean that there’s no room for proprietary software or commercial offerings derived from open-source software. IT environments are growing more complex, and building big data solutions often requires the integration of multiple pieces of software. As such, a number of companies have begun to test, certify and create distribution-like solutions in this space. Still, big data has become a mature market, sufficiently proven by recent acquisitions that have considerably reduced choices for customers.

Keeping everything up and running in a successful environment requires you to deal with multiple pieces of software while also integrating new data sources. Because of this, many companies are embracing support solutions for open-source technology to reduce the complexity of their IT ecosystems with a single point of contact and accountability across the infrastructure.

A single source of support for community and commercial open-source software, running on cloud, hybrid cloud, multicloud or locally deployed systems, can help you meet complex support challenges, predict and resolve problems even before they occur, and realize the full value of big data technology.

Source: ibm.com

Wednesday 17 June 2020

Everything should go cloud now, right?

From time to time, we invite industry thought leaders to share their opinions and insights on current technology trends on the IBM Systems IT Infrastructure blog. The opinions in these posts are their own, and do not necessarily reflect the views of IBM.

The state of cloud today


We are now at the stage where the majority of easy workloads have been shifted, representing an estimated 20 percent of compute moved across. We now enter the next phase, in which the harder workloads will be migrated – those likely to bring the most advantage to the business and the customer in moving to the cloud. This shift represents more than a simple technology change and will lead to new ways of working, where we shall reap the optimal benefits of cloud compute.

However, this will be the harder phase, and it remains the phase in which many applications may be found too hard or inappropriate to port, despite the potential benefits. New applications that do not fit the cloud will still be provisioned and will remain on premises.

Digital transformation projects are often equated with everything cloud. And yet, according to an Everest Group survey, 73 percent of enterprises failed to derive any business value whatsoever from their digital transformation efforts.

A transformation takes something from one state to another to gain benefit and upside. A digital transformation, read literally, is purely a technical change: you change form factor from one technology approach to another, but this does not necessarily improve or transform the business process.

Too many assume a digital transformation is the process of moving from legacy on-premises to a cloud-based solution. It should mean a review of processes and technical approaches to gain maximum business advantage for both users and customers, and this may include both hosted and on-premises cloud as appropriate.

IDC stated that by 2020, 55 percent of organizations will be digitally determined, and if they are not, they shall be digitally distressed. We have seen a continued disruption of traditional organizations, with many large brand names struggling, restructuring, being acquired or simply going out of business through digital distress. They simply have not adjusted to the persona needs of their customers in delivering an omni-channel digital experience. Those lacking effective digitization with speed will risk their business legacy being marginalized or totally disrupted. Technology and its affordability are no longer the barrier; the willingness, capability and effectiveness of the digital change journey will be the determining factors for future success. The challenge facing most businesses now comes from selecting the right form factor or cloud type for the right compute need and reducing the risk that a selection now will become a restrictive lock-in commitment in the future!

Applications are everything


This need is the driver of application-level compute demands, with application selections driving the underlying platform, much like we experienced in the good old days when organizations found themselves with mixed UNIX, Novell NetWare, LAN Manager, NT and VM environments, driven by the applications selected and the operating systems required to run them. Mixed hybrid and multicloud environments have become the norm, not by design, but by osmosis. We must accept that we will exist in a multicloud world, and that the luxury of selecting one singular public cloud platform, now and for the future, is no more than an assumed expectation.

One cloud doesn’t fit all


The right cloud for a specific application is determined by individual, discrete metrics for that app, with different app vendors offering varying integration levels for different platforms with different capabilities. This makes it nearly impossible to utilize one cloud platform across all application needs without restricting your future flexibility and freedom in other application choices.

Multicloud is a must as we progress forward in a world where digital and first to market will increasingly distance the “haves” and “have nots”. The digital customer is demanding more of all providers, and the consumer will expect agility from their provider or simply have freedom of choice from those who deliver. Multiclouds have the ability to offer great flexibility. However, challenges of compliance, skill sets, development specifics, monitoring and security still remain factors to overcome. The cloud, the network and IT services are converging. Hybrid and multicloud are becoming the norm and a high percentage of workloads are still on-premises. With these in mind it is critical that any cloud transformation accommodates co-existence, flexibility and portability.

The right platform for the right need, the ability to co-exist more effectively and easily, and to deliver competitive business advantage is key in today’s economy; however, based on the recent Forrester study, 39 percent have suffered a loss of competitive edge as an IT organization, not a gain! What is also clear from the study, and from my engagement with leading business clients, is that delaying decisions and investment in refreshes and upgrades is not a saving but a hindrance, with far greater cost and negative impact on business performance.

The Forrester study from IBM cited that key drivers behind the multicloud strategy of clients included higher performance, flexibility, improved customer experience and the ability to support change. In today’s pressurized world, with customers who demand more, employees who expect more, and a business likely built in the past, the ability to transform, adjust and become agile and open to more rapid change is critical.

Achieving this is not a one-size-fits-all fix, and it is certainly not an easy task solved by a single platform or vendor. Hybrid cloud strategies have developed quickly, become the norm, and been accepted as the appropriate model for the forward-thinking organization.

Businesses from the largest to the smallest firms are finding that, despite the promised land of single vendor-sourced cloud offerings, in reality the mixing of cloud platforms across SaaS, PaaS and IaaS from multiple vendors is quickly becoming the norm. In order to deliver the maximum upside for the business, there is a growing understanding and receptiveness that the true cloud world will be a hybrid and multicloud world.

Tuesday 16 June 2020

IBM Big Data Architect | The Art of Handling Big Data

An IBM Big Data Architect is an increasingly critical role. It is a natural evolution from Data Analyst and Database Designer and reflects the emergence of internet-scale websites that need to integrate data from disparate, unrelated data sources.

Successful data architecture provides clarity about every phase of the data, enabling data scientists to work with trustworthy data efficiently and solve complex business problems. It also prepares an organization to quickly take advantage of new business opportunities by leveraging emerging technologies, and increases operational efficiency by managing complex data and information delivery throughout the enterprise.

IBM Big Data Architect: Thinking Outside the Box

On a practical level, an IBM Big Data Architect will be involved in the entire lifecycle of a solution, from analysis of the requirements to the design of the solution, and then the development, testing, deployment, and governance of that solution. They must also stay on top of any upgrade and maintenance requirements. But through it all, they must be creative problem solvers.

A love of data is a requirement for a role as a Big Data Architect, for sure, but so is the capacity to think outside the box. Research the skills required to be a Big Data Architect and you will see many references to the value of creative and innovative thinking, mainly because a Big Data Architect is accountable for coming up with new ways to tackle new problems.

No textbook or user manual provides the answers, because the world of data we now live in, and the competitive environment that requires businesses to put that data to use in real time, demands new solutions. A typical day involves working with data, but often in an innovative, analytical way.

An IBM Certified Data Architect - Big Data must be able to recognize and assess business requirements and then translate them into specific database solutions. This includes being responsible for physical data storage locations, such as data centers, and the way data is organized into databases. It is also about maintaining the health and security of those databases.

Leadership skills are required for data architects to establish and document data models while working with systems and database administration staff to implement, coordinate, and support enterprise-wide data architecture. Data architects also can be responsible for managing data design models, database architecture, and data repository design, in addition to creating and testing different database prototypes.

What Does It Take to be an IBM Big Data Architect?

Below are a few critical qualifications for an IBM Big Data Architect:

  • High level of analytical and creative skills.
  • In-depth understanding of the methodology, knowledge, and modeling of databases and data systems.
  • Excellent communication skills.
  • Capacity to plan and organize teams of data specialists efficiently.
  • Working knowledge of network management, shared databases and processing, application architecture, and performance management.
  • Bachelor’s degree in a computer-related field.
  • Experience with Oracle, Microsoft SQL Server, or other databases in various operating system environments, such as Unix, Linux, Solaris, and Microsoft Windows.

IBM Big Data Architect Job Description

Data architects create databases based on structural requirements and in-depth analysis. They also thoroughly test and maintain those databases to ensure their longevity and overall efficiency. This is a full-time role that requires excellent skills and education. Data architects often work at a computer in a traditional office setting, and they typically report directly to a project manager. Successful data architects are analytical and consistent in everything they do.

Like most technology jobs, technical experience is helpful, if not required. Data architects must also be business-minded, so experience in an essential nontechnical role could boost your marketability for this in-demand position.

IBM Big Data Architect Duties and Responsibilities

The data architect’s duties and responsibilities may differ depending on the industry in which they work. Based on our research of current job listings, most data architects perform the following core tasks:
Assess Current Data Systems:
  • Data architects are responsible for assessing the current state of the company’s databases and other data systems. They analyze these databases and identify new solutions to enhance efficiency and production.
Define Database Structure:
  • Data architects define the overall database structure, including recovery, security, backup, and capacity specifications. This definition enables data architects to manage the overall requirements for database structure.
Propose Solutions:
  • After analyzing, evaluating, and explaining current database structures, data architects create and offer solutions to upper management and stakeholders. This often involves designing a new database and presenting it to affected parties.
Implement Database Solutions:
  • Data architects are responsible for implementing the database solutions they propose. This process includes developing process flows, coordinating with data engineers and analysts, and documenting the installation process.
Train New Users:
  • Since data architects are the experts in the new database solutions they design and implement, they make the ideal trainers for new users. They may also be responsible for mentoring new data analysts and data engineers.

IBM Big Data Architect Salary and Outlook

The median annual salary for data architects is $112,825. This salary increases with experience and advanced education. The top 10 percent of data architects make as much as $153,000 per year, while the bottom 10 percent make as little as $74,463. Data architects are also often eligible for great benefits through their employers, such as insurance with premiums fully paid and unlimited time off. They may also receive incentive-based bonuses.

Database administration, a field very similar to data architecture, is expected to experience 11 percent growth over the next decade. Database-as-a-service is growing in popularity, which only raises the need for more data architects as companies expand.

Rewards and Challenges of a Data Architect Position

A career as an IBM Big Data Architect can be gratifying. It is a well-paying position and is in high demand as information management becomes more critical to the business. Data architects see the fruits of their labor as they manage data management systems that they helped design and develop. The ongoing support of data systems remains dynamic as data needs within the company change, creating exciting and current challenges for the data architect.

The position has its difficulties as well. The data architect works as part of a team. Since data is such a fundamental part of the operation of the organization, any problems with the data management system can become critical. Problems must be resolved quickly and under pressure, creating a stressful environment.

Summary

A career as an IBM Big Data Architect is a rewarding one, requiring ongoing professional development and a wide range of essential skills. Though becoming a data architect can take some time, the career opportunities are well worth the effort.

Cost and value transformation in an economic downturn

On March 15, the Dow dropped nearly 3,000 points in a single day – the worst day of trading since 1987. Since then, the global economy has come to a near standstill and has been on a steep economic downturn after years of strength. What can companies do today to focus on cost transformation while many industries face upheaval?

In this quick Q&A, two leaders in IBM Services, Eric Pilkington and Lucas Manganaro, articulate how companies should improve and automate workflows, which can lead to an upskilled workforce and better efficiency, and how prioritizing customer needs and transforming the way companies generate value can have positive impacts on overall company cost transformation strategies.

Question #1: In response to the economic downturn, there has been a lot of discussion on enterprise “cost transformation.” What does this term mean to you?


Eric: Every business wants to strike a balance between cost, price, efficiency, and elasticity. The ability to automate certain tasks that will enable a company to deploy less human capital holds value for companies. When it comes to true cost transformation, we need to discuss cost and value transformation – a way to cut costs without negatively impacting the customer experience. In this case, value is defined as value to the business and value to the customer. Often, I have seen leaders engage first on cost cutting and second on value while undergoing a digital transformation. For example, if a company over-automates and solely relies on technologies and data without accounting for upskilling or reskilling their employees, the value to the business and customer experience could actually decrease in the long run.

This downturn is already causing layoffs, which will drive further investment in automation, which will reinforce new ways of working, all in an effort to cut costs. As we embark on these cost cutting exercises so necessary to many companies today, we should remember to strike the balance between value and cost transformation.

Question #2: What is top-of-mind for clients today with regards to applying cost transformation and intelligent workflows in an economic downturn?


Lucas: In the wake of this economic downturn, companies have changed the way they do business, with most employees working from home. Despite this, most clients that I’m speaking with are focused on maintaining business continuity as much as possible. We should remember to lead with Emotional Intelligence and then follow with IQ, since individuals are now packing additional work into a less efficient working situation.

In this rapidly changing environment, CEOs are shifting their attention to prepare for revenue contractions, and therefore focusing on using data to understand the pieces of their business that remain core vs. not core to their bottom line and customer value. Leaders want to use data from their digitization initiatives to inform their decision-making. However, they’re quickly finding that while the data exists, it is not readily available to drive real-time action. This only further stresses the need to intelligently digitize data in a way that can be impactful and available to business leaders across the organization.

Question #3: What steps can companies take to reduce cost and improve process efficiency during this new economic environment?


Eric: First, as a leader, ensure that you have created and communicated an organization-wide “North Star” and confirm that all parts of the business effectively map back to that instead of perpetuating existing siloes. If this is done effectively, we can then look across the organization, understand how different pieces interconnect, and identify gross inefficiencies to improve end-to-end processes and ultimately begin the cost transformation. Equally importantly, we will also be able to identify opportunities to improve the customer experience.

Lucas: The first thing I advise: conduct an initial analysis to understand where your company spends money and what is currently in your contracts. From there, you can decide what work is critical vs. not critical. You can then determine secondary and tertiary levels of criticality when it comes to delivering your product from a value and cost perspective.

Some companies are currently facing a spike in demand in the wake of this pandemic (think delivery, Amazon, food retail). These companies face challenges in meeting that demand with the right mix of inventory. In this example, leaders should focus attention on understanding the pivot required in inventory planning and look further upstream and downstream in the supply chain to ensure that customer expectations are met.

Alternatively, many other companies are seeing a complete drop-off in demand, and therefore should take a different approach in the immediate term. Leaders must make the difficult decisions about which pieces of the business can be shut down to minimize costs, and they must ensure that the quality of finished inventory remains excellent to maintain customer value. Access to intelligent data is critical during this exercise to ensure that leaders create some financial stability for their companies and their suppliers.

Bottom line: What do you need to do today?

Recognize that as a business leader or as an employee, you do not need to solve for everything tomorrow. We are in unprecedented times, and while we may be doing internal ‘rocket science,’ we do not have to build the entire rocket at once; we still build it one bolt at a time. So, the next step is to pick up the hammer and start chipping away, beginning with understanding your data and making tough decisions.

Source: ibm.com

Monday 15 June 2020

Hybrid multicloud infrastructure provides agility to adapt to a crisis

IBM Exam Prep, IBM Guides, IBM Tutorial and Material, IBM Certification, IBM Prep

Today, more companies recognize the unquestionable value that cloud can deliver for mission-critical applications — the flexibility to quickly test and implement solutions. The shift was inevitable, but challenges like the current pandemic are accelerating the change. An open hybrid multicloud platform provides the level of agility needed today, as well as the operational gains for the future.

Many organizations now realize that there’s more risk to holding back on cloud than to actually making the move. The shift to public cloud and cloud-native predicted to occur over the next decade or two may now take place over the next two or three years.

Remote working is no longer a complementary support strategy. Telecommuting is mainstream, and organizations with cloud services and infrastructure have experienced a relatively seamless transition to this new way of working.


Additionally, these enterprises are able to continually re-examine workflows and identify opportunities to implement collaborative tools, reimagine processes to meet customers’ urgent needs and automate workflows for efficiency and speed. The resulting changes can positively impact an organization’s bottom line.

Leading by example: GEICO


As the second largest auto insurer in the United States, GEICO has a mission to offer the most affordable automobile insurance product while providing outstanding customer service. At THINK 2020, GEICO Director of Cloud Products and Services Fikri Larguet talked about the company’s move to cloud.

“If I look back on our cloud journey, it started with the quick realization that in order to achieve agility, availability and innovation, we could not continue investing in the management and operation of our own data centers,” said Larguet. “The need for scale and continued expansion of our infrastructure footprint dictated that we move to a more consumable and intelligent cloud model. This move has allowed GEICO to focus on competitive advantage, which is essentially the business of developing great insurance products.”

GEICO has moved significant portions of its workloads to various public clouds in recent years. By focusing on building self-service capabilities for its DevOps teams, the company is harnessing the power of open standards to maximize reuse and efficiency.

“To put it simply, cloud adoption has offered GEICO the possibility to transform our business in delivering product and services faster and meeting our customers’ demand for an effortless and secure experience when engaging with us,” said Larguet.

When asked how COVID-19 has impacted GEICO’s move to cloud, Larguet’s perspective was clear. “This crisis only reinforced and validated GEICO’s cloud strategy,” he said. “It was interesting to note that those companies that have navigated this crisis successfully were the ones that achieved a solid maturity in their cloud presence and invested in growing a team with strong cloud expertise. GEICO was up for the challenge.”

Source: ibm.com

Saturday 13 June 2020

To help plan for a return to work, businesses should consider leveraging IoT insights at the edge

IBM Exam Prep, IBM Tutorial and Material, IBM Prep, IBM Certification

In the current climate, many businesses have pivoted to a remote workforce, but this option may not be equally accessible across industries. Organizations that operate by serving customers and citizens – like manufacturing, retail, healthcare, and government – are less likely to be able to maintain full operations remotely.

Businesses in these industries are likely to be eager to quickly and, above all else, safely plan for employees to return to the workplace following the guidance from national and local governments. In 2018, an estimated 12.7 million workers were employed in the US manufacturing sector and about 1.14 million in the US warehouse and storage industry, with more employed across retail, healthcare, and government.

How can organizations support a return to the workplace given the COVID-19 crisis?


In addition to following the return-to-work guidelines issued by the CDC and federal, state, and local governments, businesses should consider leveraging technology solutions – like edge computing, 5G and IoT – to gain insights that can help protect employee health and promote workplace safety. Edge computing can be a strong alternative to cloud processing where near real-time data access is needed.

Employee Health


Employers will need to rethink their strategy to help protect their employees’ health and safety while at work. One option to consider is to leverage technology and systems such as:

◉ Infrared (IR) cameras set up at key entry points can screen individuals for elevated body temperature in order to quickly detect possible fevers.

◉ Connected wearable devices can monitor employee health factors such as oxygen level, heart rate, blood pressure changes, and respiration rate.


Workplace Environment Safety


Operations-intensive areas such as manufacturing shop floors, warehouses, distribution centers, and commercial office spaces can often be congested, so leveraging technology to glean insights about the status of the work environment can be helpful.

◉ Optical cameras can help identify increases in crowd density in certain areas and notify when a preset business rule, such as the maximum number of people allowed in each zone, is breached.

◉ Bluetooth beacons can help detect when employees come within the social distancing threshold of one another on company premises (see the sketch after this list).

◉ Leveraging the insights from multiple locations can help identify areas needing focus, as well as propagate best practices back across the enterprise.
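
To make the beacon idea concrete, here is a minimal sketch of proximity detection, assuming RSSI readings are converted to distance with a log-distance path loss model. The badge IDs, calibration constants and 2-meter threshold are illustrative assumptions, not part of any specific IBM offering.

```python
# Minimal sketch: flag badge pairs closer than a distance threshold,
# estimating distance from Bluetooth RSSI. All values are illustrative.
import math

TX_POWER_DBM = -59        # assumed RSSI at 1 m (beacon calibration value)
PATH_LOSS_EXPONENT = 2.4  # ~2.0 in free space, higher indoors
DISTANCE_LIMIT_M = 2.0    # assumed social-distancing threshold

def estimate_distance_m(rssi_dbm: float) -> float:
    """Convert an RSSI reading into an approximate distance in meters."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def proximity_alerts(readings):
    """readings: iterable of (badge_a, badge_b, rssi_dbm) tuples."""
    for badge_a, badge_b, rssi in readings:
        distance = estimate_distance_m(rssi)
        if distance < DISTANCE_LIMIT_M:
            yield (badge_a, badge_b, round(distance, 1))

sample = [("badge-17", "badge-42", -54), ("badge-17", "badge-08", -75)]
for a, b, d in proximity_alerts(sample):
    print(f"ALERT: {a} and {b} estimated {d} m apart")
```

RSSI-to-distance estimates are noisy in practice, so a production system would smooth readings over time before alerting; the model above is only the core calculation.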


Edge computing is a strong option


Real-time data access is key to quickly identifying potential safety concerns in the workplace. In this scenario, edge computing can be a stronger option than cloud processing. If data must be sent to a central cloud, analyzed there and returned to the edge, there can be a lag in data processing, high data costs and a dependency on an always-on internet connection. Businesses that utilize edge computing can realize several potential advantages (a minimal sketch follows the list below):

◉ Capture and analyze data locally

◉ Eliminate the need to store large volumes of data in the cloud, which can result in lower operating costs

◉ Reduce the need for heavy real-time event processing in the cloud

◉ Monitor sensor data continuously without storing it, triggering an alarm only when an anomaly is detected
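
Here is a minimal sketch of that last point: a bounded in-memory window of sensor readings analyzed locally, with an alert raised only when a statistical outlier appears. The window size, z-score threshold and temperature values are illustrative assumptions, not part of any specific IBM offering.

```python
# Minimal edge-side monitor: readings are analyzed locally in a bounded
# buffer (nothing persisted); only anomalies are forwarded upstream.
from collections import deque

class EdgeAnomalyMonitor:
    """Keep a short in-memory window of readings; alert on outliers."""

    def __init__(self, window=60, z_threshold=3.0):
        self.window = deque(maxlen=window)  # bounded buffer, no storage
        self.z_threshold = z_threshold

    def observe(self, value):
        """Return an alert dict if the reading is anomalous, else None."""
        alert = None
        if len(self.window) >= 10:  # need a baseline before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5 or 1e-9  # guard against zero variance
            z = abs(value - mean) / std
            if z > self.z_threshold:
                alert = {"reading": value, "z_score": round(z, 2)}
        self.window.append(value)
        return alert

# Example: body-temperature readings from an entry-point IR camera.
monitor = EdgeAnomalyMonitor()
for reading in [36.6, 36.7, 36.5, 36.8, 36.6, 36.7, 36.5,
                36.6, 36.7, 36.6, 38.9]:
    alert = monitor.observe(reading)
    if alert:
        print(f"Local alert raised and sent upstream: {alert}")
```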

For organizations where remote operations are not an option and that are planning to bring workers back under the proper guidance, technology should be a core consideration when developing the return-to-work strategy.

Friday 12 June 2020

Six recommendations for launching or expanding your virtual agent in this crisis

IBM Exam Prep, IBM Tutorial and Material, IBM Prep, IBM Certification

As we work together to respond to our current crisis, artificial intelligence (AI) is a force multiplier, helping us effectively and efficiently navigate this overwhelming storm.

AI virtual agents are nothing new. Today they are being reimagined to help citizens and workers access trusted data sources and get the answers they need. How many infectious cases are in my area? How do I get tested? What should I do if someone in my household is sick? When should I go to the hospital?

Both public sector agencies and private sector organizations are building on these capabilities to rapidly train AI on policy. What is my organization doing to respond to this crisis? What happens if I need to take an unscheduled day off? How much sick leave can I take? Organizational policy is rapidly evolving; each week is a new frontier as we learn, together, how to navigate an unprecedented global pandemic. As organizations amend their business policies, the AI can be quickly updated to reflect the changes, becoming the single source of truth for an organization and its entire workforce.

AI virtual agents are simple yet powerful solutions. In early deployments around the world, we are seeing a significant decline in the volume of inbound inquiries to call centers, which means shorter waiting times for the critical cases. When workers get answers from AI, they have more time to connect with managers and colleagues on mission-critical initiatives. By analyzing the AI interactions, leaders gain deeper insight into their workforce and can use the data to help prioritize their crisis response strategy.

Whether you are launching a new virtual agent or expanding the capabilities of an existing one, here are six recommendations:

1. Keep your employee personas at the heart of your design. Understand the needs of your employees and prioritize those when building AI capabilities.

2. Balance security requirements with access and leverage the cloud. Take a hard look at the need to access confidential information and balance that with the need for the AI to be available anytime, anywhere, on any device.

3. Start small and iterate quickly. Consider a prototype with limited functionality, deployed to a smaller group of users. Scale quickly to a broader set of users and expand the AI’s capabilities through frequent updates.

4. Consider integrating publicly available data from trusted sources. Public health organizations, government agencies, and educational institutions are making large amounts of information available. Train your AI to use these data sources or provide links to additional sources of information (see the first sketch after this list).

5. Make one of your senior leaders the solution owner. Your chatbot team needs near-real-time insight into changing organizational policies in order to adapt quickly. Select someone who is a focal part of the leadership team as solution owner, helping minimize the lag between policy decisions and retraining the virtual agent.

6. Use the AI data. Take your cues from your workforce: the questions asked most often are the topics of greatest interest or concern. Adjust your crisis response strategy accordingly (see the second sketch below).
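
First, a minimal sketch of recommendation 4: when the agent cannot answer confidently from its trained content, it falls back to a curated link from a trusted public source. The classify() stub, intents, confidence floor and URL are illustrative assumptions, not the API of any particular assistant platform.

```python
# Minimal fallback sketch: answer from trained content when confident,
# otherwise point the user at a curated trusted source. Illustrative only.
TRUSTED_SOURCES = {
    # Illustrative link to a trusted public-health source.
    "testing": "https://www.cdc.gov/coronavirus/2019-ncov/",
}

CONFIDENCE_FLOOR = 0.6  # below this, defer rather than risk a wrong answer

def classify(question: str):
    """Stub for the assistant's intent classifier (assumed interface:
    returns (intent, confidence, trained_answer), answer None if absent)."""
    if "tested" in question.lower():
        return ("testing", 0.45, None)  # low confidence: defer
    return ("sick_leave", 0.93,
            "Per current policy, you may take up to 10 paid sick days.")

def respond(question: str) -> str:
    intent, confidence, answer = classify(question)
    if confidence >= CONFIDENCE_FLOOR and answer:
        return answer
    link = TRUSTED_SOURCES.get(intent)
    if link:
        return f"I'm not fully certain. This trusted source may help: {link}"
    return "I'm not fully certain. Please contact HR for guidance."

print(respond("How do I get tested?"))
print(respond("How much sick leave can I take?"))
```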
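
Second, a minimal sketch of recommendation 6: tallying the intents in the agent's interaction log to surface the topics employees ask about most. The log format and intent labels are assumed for illustration; a production assistant would export these from its own analytics.

```python
# Minimal analytics sketch: rank intents by how often employees raise them.
from collections import Counter

interaction_log = [  # illustrative export of classified interactions
    {"intent": "sick_leave_policy", "confidence": 0.92},
    {"intent": "testing_locations", "confidence": 0.88},
    {"intent": "sick_leave_policy", "confidence": 0.95},
    {"intent": "remote_work_stipend", "confidence": 0.81},
    {"intent": "testing_locations", "confidence": 0.90},
    {"intent": "sick_leave_policy", "confidence": 0.87},
]

def top_concerns(log, n=3, min_confidence=0.7):
    """Count confidently classified intents and return the most common."""
    counts = Counter(
        entry["intent"] for entry in log
        if entry["confidence"] >= min_confidence
    )
    return counts.most_common(n)

for intent, count in top_concerns(interaction_log):
    print(f"{intent}: asked {count} times")
```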

Thursday 11 June 2020

Building operational resiliency for anytime, anywhere and any situation

IBM Exam Prep, IBM Guides, IBM Tutorial and Material, IBM Prep

History is riddled with unexpected challenges and, more importantly, the creative ways people rose to overcome them. Difficult times, like we face today, call for businesses to reconsider transformation of workflows, workforces and workplaces to both recover from current disruption and to ensure preparedness for the next challenge — whatever it may be.

Shared services organizations must continue to deliver productivity, controls, cost savings, user experience, business insights and operational resiliency. We work with our clients to deliver these business outcomes through a combination of intelligent workflows, smarter operations and a reimagining of how work gets done.

◉ Intelligent workflows reimagine the end-to-end workflow, interact with data trapped in disparate IT systems and apply innovative technologies like automation and artificial intelligence to boost productivity and deliver totally different experiences for stakeholders.

◉ Smarter operations start with agile, self-driven and motivated cross-functional teams. Teams hold virtual stand-up meetings to address daily work tasks, while technology teams work in parallel to implement solutions from the ideas those agile teams generate, driving continuous improvement. A control tower equips management and team leaders with prescriptive insights to monitor early-warning indicators, orchestrate change in real time, and develop iterative and proactive change management.

◉ Borderless workplaces embrace the flexible possibilities now available, ensuring safety and security for service providers and their employees while enabling them to be significantly more resilient, responsive and proactive in delivering services at even higher levels of productivity and effectiveness.

Together, these elements can help businesses be responsive to their employees, suppliers and customers, as well as adapt and respond to potential disruptions.

Supporting clients for immediate operational continuity

At IBM, we are proud of the way our teams around the world have stepped up to the challenges of COVID-19.

◉ In 10 days, our delivery teams in 60 centers across 40 countries shifted to working from home.

◉ Now, more than 99% of our teams have safely and effectively transitioned to working from home to support client operations.

◉ There has been zero degradation in the delivery of services, such as responding to employee queries, running payroll, recruiting critical staff, supporting month-end financial close, paying suppliers, and performing risk and compliance reviews.

Automatic Data Processing, Inc. (ADP) provides human resources management software and services and handles one in every five paychecks in the United States. Given the sensitive nature of this work, there was naturally some concern about security and the risk associated with working from home when the COVID-19 crisis hit. The IBM Services team in Naga, Philippines worked with ADP to implement daily agile standups and helped the company make changes and prepare new plans for business continuity. Within two weeks, 98% of the Naga City team was working remotely with portable laptops, physical desktops and Wi-Fi dongles.

Another client, a large telecommunications company, might have been hit hard by shelter-in-place orders, as 100% of the company’s employees worked in its delivery centers across multiple countries. With the support of the IBM Services team and a new end-to-end workflow orchestration program, the company shifted to 100% work-from-home and reported no interruption in service to presales. And, given the spike in demand for telecommunications, the company saw an 18% increase in volume. The company was able to accommodate the demand and bring backlogs to an all-time low — even with increased volume.

But supporting clients in a time of global emergency isn’t always a neat start-to-finish job. When one of North America’s largest financial institutions found its finance business processes suddenly threatened by a challenging environment and an overwhelmed provider in the wake of COVID-19, the IBM Services team quickly stepped in to help avoid service interruption. The team accelerated the ramp-up process and facilitated security access, identification credentials and knowledge transfer to ensure the client had the support it needed. The transition was complete within eight days, and the organization reported no service interruption.

Executing for the future


The key question for business now is: what’s next? What does the future look like? In a period where we look forward to recovery and stability, it is critical to think about how business process services teams can continue to meet expectations of productivity, controls and cost reductions, while providing experiences and insights with higher levels of resiliency.

Our approach with intelligent workflows, smarter operations and borderless workplaces can help shape the transition from immediate needs to sustainability and future planning. We are working with clients to improve cash flow right now, reduce operating expenses within months, drive ongoing transformation and enable resilient operations.

We will continue to support our teams and clients through this crisis, where everyone’s safety is paramount. IBM is looking to the future, planning for the significant changes and opportunities ahead when we return to the “new normal” in the coming weeks. Business continuity in a crisis doesn’t change the fact that there is a crisis, but a strong foundation of resiliency and business continuity planning can help to minimize the impact for employees and customers alike.

Source: ibm.com

Wednesday 10 June 2020

Cloud-based communications are here: Is your enterprise ready?

IBM Exam Prep, IBM Guides, IBM Tutorial and Materials, IBM Certification

These days, many of us depend on communications tools such as chat, video conferencing and softphones, and the global pandemic has put that into overdrive. Flexibility and remote connectivity are critical, and organizations need cloud-based communications to ensure both. In fact, over half of HR leaders in a recent Gartner snap poll indicated that poor technology or infrastructure is the largest barrier to remote working.

To overcome the technology challenges of supporting remote employees and help ensure business continuity, enterprises must create a digital workplace that runs on well-orchestrated networks and cloud-enabled communications tools. Moving network infrastructure and communications technologies from physical to virtual environments is key.

That’s where Unified Communications as a Service (UCaaS) comes in. It’s delivered through the cloud on a reliable, security-rich network and designed to provide a seamless experience across communications channels, making communication with colleagues even more personalized and collaborative.

UCaaS also supports that same fluid communications experience across devices: start a chat on your mobile phone, then switch to your laptop and continue the conversation without missing a message. A minimal sketch of how such continuity can work follows.
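
One common way to deliver that continuity is to keep conversation state server-side, keyed to the user's identity rather than to any device, so a newly signed-in device simply resumes the same session. The sketch below illustrates the general pattern only; it is not the internals of any particular UCaaS product, and all names are assumptions.

```python
# Minimal sketch of device-agnostic session continuity: the session key
# is the user, not the device, so any signed-in device sees the history.
from dataclasses import dataclass, field

@dataclass
class ChatSession:
    user_id: str
    messages: list = field(default_factory=list)

class SessionStore:
    """Server-side store; the device is irrelevant to the session key."""

    def __init__(self):
        self._sessions = {}

    def resume(self, user_id: str) -> ChatSession:
        """Fetch the user's session, creating it on first contact."""
        return self._sessions.setdefault(user_id, ChatSession(user_id))

store = SessionStore()

# The user starts chatting on a phone...
phone = store.resume("alice@example.com")
phone.messages.append("Draft agenda for Monday?")

# ...then opens a laptop: same key, same session, full history.
laptop = store.resume("alice@example.com")
print(laptop.messages)  # ['Draft agenda for Monday?']
```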

A shared, collaborative experience


Cloud-based unified communications go beyond delivering the latest technologies, providing employees — especially those of us working remotely — with a more cohesive experience. That can translate to enhanced collaboration and productivity that provides a better work experience overall.

At many organizations, IT and internal communications teams are joining forces to make UCaaS readily available. Remote deployment with no-touch provisioning gets teams up and running quickly.

“UCaaS adds tremendous business value. By putting one suite of tools on the cloud, employees across the enterprise have access to the same communications tools from any connected location or device and a much deeper collaboration experience,” said IBM Services Global Offering Manager Rick Key. “We see clients go from struggling with using different tools and not collaborating well internally to really transforming communication and productivity.”

Communication solutions for industries


UCaaS can be used in virtually any industry, including retail, healthcare and government. Take large retailers, which need a way to manage communications technology across digital, store and supply-chain operations.

“There’s an enormous amount of complexity involved with retail IT infrastructure that many don’t realize, including networking requirements, communications tools and a multitude of phone lines in the stores,” said Key. “Centralizing IT and communications in the cloud — ‘a store in the cloud’ — helps retailers streamline and eliminate much of that complexity. This can result in increased flexibility, scalability, quicker adoption of new functions and significant cost savings.”

Additionally, if a company has a large number of sites and wants to upgrade to a software-defined wide area network (SD-WAN) infrastructure, it can consider UCaaS for potentially substantial savings in deployment costs.

Scalable for the future


As enterprises grow and the current demand for work-from-home collaboration intensifies, cloud-based unified communications offer an effective, scalable solution.

“UCaaS in the cloud helps enterprises transform network and communication infrastructures from physical to virtual environments that are used in a consumption-based model,” said Key. “It offers the holistic communications experience that employees expect, and the cloud economics that companies need.”