Showing posts with label Security. Show all posts

Wednesday, 16 October 2024

How well do you know your hypervisor and firmware?

IBM Cloud Virtual Private Cloud (VPC) is designed for secure cloud computing, and several aspects of our platform planning, development and operations help uphold that design. However, because security in the cloud is typically a shared responsibility between the cloud service provider and the customer, it’s essential that you fully understand the layers of security your workloads run on. Here, we detail a few key security components of IBM Cloud VPC that aim to provide secured computing for our virtual server customers.

Let’s start with the hypervisor


The hypervisor, a critical component of any virtual server infrastructure, is designed to provide a secure environment on which customer workloads and a cloud’s native services can run. The entirety of its stack, from hardware and firmware to system software and configuration, must be protected from external tampering. Firmware and hypervisor software are the lowest layers of modifiable code and are prime targets of supply chain attacks and other privileged threats. Kernel-mode rootkits and bootkits, two classes of privileged threat, are difficult for endpoint protection systems such as antivirus and endpoint detection and response (EDR) software to uncover: they run before any protection system and can obscure their own presence. In short, securing the supply chain itself is crucial.

IBM Cloud VPC implements a range of controls to help address the quality, integrity and supply chain of the hardware, firmware and software we deploy, including qualification and testing before deployment.

IBM Cloud VPC’s 3rd generation solutions leverage pervasive code signing to protect the integrity of the platform. Through this process, firmware is digitally signed at the point of origin and signatures are authenticated before installation. At system start-up, a platform security module then verifies the integrity of the system firmware image before initialization of the system processor. The firmware, in turn, authenticates the hypervisor, including device software, thus establishing the system’s root of trust in the platform security module hardware.
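As a rough sketch of this idea (component names and images are hypothetical, and a digest lookup stands in for the certificate-based signature verification the platform security module actually performs), firmware authentication before installation might look like:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical approved manifest, standing in for signed firmware metadata
# that would itself be authenticated before use.
good_image = b"approved firmware build 1.2.3"
APPROVED = {"host-fw": sha256_hex(good_image)}

def verify_firmware_image(component: str, image: bytes, approved=APPROVED) -> bool:
    """Accept the image only if its digest matches the approved manifest."""
    return approved.get(component) == sha256_hex(image)
```

A tampered or unknown image simply fails the lookup, which is the point at which a real system would refuse to initialize the processor.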

Device configuration and verification


IBM Cloud Virtual Servers for VPC provide a wide variety of profile options (vCPU + RAM + bandwidth provisioning bundles) to help meet customers’ different workload requirements. Each profile type is managed through a set of product specifications. These product specifications outline the physical hardware’s composition, the firmware’s composition and the configuration for the server. The software includes, but is not limited to, the host firmware and component devices. These product profiles are developed and overseen by a hardware leadership team and are versioned for use across our fleet of servers.

As new hardware and software assets are brought into our IBM Cloud VPC environment, they’re also mapped to a product specification, which outlines their intended configuration. The intake verification process then validates that the server’s actual physical composition matches that of the specification before its entry into the fleet. If there’s a physical composition that doesn’t match the specification, the server is cordoned off for inspection and remediation. 
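The intake comparison amounts to a diff between a server's reported inventory and its product specification. This toy Python sketch (hypothetical component names and fields) flags any mismatch so the server can be cordoned off:

```python
# Hypothetical product specification: component -> expected attributes.
SPEC = {
    "nic": {"model": "ModelX", "count": 2},
    "ssd": {"model": "ModelY", "count": 8},
}

def intake_check(actual: dict, spec: dict) -> list:
    """Return mismatches; an empty list means the server may join the fleet."""
    issues = []
    for component, expected in spec.items():
        found = actual.get(component)
        if found != expected:
            issues.append((component, expected, found))
    # Components present on the server but absent from the spec are also flagged.
    for component in actual:
        if component not in spec:
            issues.append((component, None, actual[component]))
    return issues
```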

The intake verification process also verifies the firmware and hardware. 

There are two dimensions of this verification:

1. Firmware is signed by an approved supplier before it can be installed on an IBM Cloud Virtual Server for VPC system. This helps ensure only approved firmware is applied to the servers. IBM Cloud works with several suppliers to help ensure firmware is signed and components are configured to reject unauthorized firmware.

2. Only firmware that is approved through the IBM Cloud governed specification qualifies for installation. The governed specification is updated cyclically to add newly qualified firmware versions and remove obsolete versions. This type of firmware verification is also performed as part of the server intake process and before any firmware update.
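A minimal sketch of such a governed allowlist, with qualification and retirement modeling the cyclical updates described above (all names and versions hypothetical):

```python
class GovernedSpec:
    """Toy model of the governed specification's approved-firmware set."""

    def __init__(self, approved=None):
        self.approved = set(approved or [])

    def qualify(self, version: str) -> None:
        # A newly qualified firmware version is added on a review cycle.
        self.approved.add(version)

    def retire(self, version: str) -> None:
        # Obsolete versions are removed and no longer qualify for installation.
        self.approved.discard(version)

    def allows(self, version: str) -> bool:
        return version in self.approved
```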

Server configuration is also managed through the governed product specifications. Certain solutions might need custom Unified Extensible Firmware Interface (UEFI) configurations, certain features enabled or restrictions put in place. The product specification manages these configurations, which are applied to the servers through automation. Servers are scanned by IBM Cloud’s monitoring and compliance framework at run time.

Specification versioning and promotion


As mentioned earlier, product specifications are the core components of the IBM Cloud VPC virtual server management process. Product specifications are definition files that contain the configurations for all server profiles; they are maintained and reviewed by the corresponding IBM Cloud product leader and a governance-focused leadership team. Together, they control and manage the server’s approved components, configuration and firmware levels. The governance-focused leadership team strives for commonality where needed, whereas the product leaders focus on providing value and market differentiation.

It’s important to remember that specifications don’t stand still. These definition files are living documents that evolve as new firmware levels are released or the server hardware grows to support extra vendor devices. Because of this, the IBM Cloud VPC specification process is versioned to capture changes throughout the server’s lifecycle. Each server deployment captures the version of the specification that it was deployed with and provides identification of the intended versus actual state as well.
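In code terms, intended-versus-actual tracking reduces to comparing the spec version captured at deployment against the current one. A hypothetical sketch:

```python
def drift_report(server_record: dict, current_spec: dict) -> dict:
    """Compare the spec version a server was deployed with to the current spec.

    Both inputs are hypothetical records mirroring the intended-versus-actual
    tracking described above.
    """
    deployed = server_record["spec_version"]
    current = current_spec["version"]
    return {
        "deployed_version": deployed,
        "current_version": current,
        "up_to_date": deployed == current,
    }
```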

Promotion of specifications is also necessary. When a specification is updated, it doesn’t necessarily mean it’s immediately effective across the production environments. Instead, it moves through the appropriate development, integration and preproduction (staging) channels before moving to production. Depending on the types of devices or fixes being addressed, there might even be a varying schedule for how quickly the rollout occurs.

Figure 1: IBM Cloud VPC specification promotion process

Firmware on IBM Cloud VPC is typically updated in waves. Where possible, it might be updated live, although some updates require downtime. Generally, this is unseen by our customers due to live migration. However, as the firmware updates roll through production, they can take time to move customers around. So, when a specification update is promoted through the pipeline, it then starts the update through the various runtime systems. The velocity of the update is generally dictated by the severity of the change.

How IBM Cloud VPC virtual servers set up a hardware root of trust


IBM Cloud Virtual Servers for VPC include root of trust hardware known as the platform security module. Among other functions, the platform security module hardware is designed to verify the authenticity and integrity of the platform firmware image before the main processor can boot. It verifies the image authenticity and signature using an approved certificate. The platform security module also stores copies of the platform firmware image. If the platform security module finds that the firmware image installed on the host was not signed with the approved certificate, the platform security module replaces it with one of its images before initializing the main processor.

Upon initialization of the main processor and loading of the system firmware, the firmware is then responsible for authenticating the hypervisor’s bootloader as part of a process known as secure boot, which aims to establish the next link in a chain of trust. The firmware verifies that the bootloader was signed using an authorized key before it was loaded. Keys are authorized when their corresponding public counterparts are enrolled in the server’s key database. Once the bootloader is cleared and loaded, it validates the kernel before the latter can run. Finally, the kernel validates all modules before they’re loaded onto the kernel. Any component that fails the validation is rejected, causing the system boot to halt.
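The chain of trust can be illustrated with a toy model in which digest checks stand in for the signature verification each stage performs; any component that fails validation halts the boot, as described above (all names and image contents hypothetical):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Toy boot chain: bootloader, kernel, then kernel module.
bootloader, kernel, module = b"bootloader-v1", b"kernel-v1", b"module-v1"
TRUSTED = {name: sha256_hex(blob) for name, blob in
           [("bootloader", bootloader), ("kernel", kernel), ("module", module)]}

def secure_boot(images, trusted=TRUSTED):
    """Validate each link before 'loading' it; halt on the first failure."""
    loaded = []
    for name, blob in images:
        if trusted.get(name) != sha256_hex(blob):
            return loaded, False   # chain of trust broken: system boot halts
        loaded.append(name)
    return loaded, True
```

A tampered kernel stops the chain immediately: everything after the failed link is never loaded.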

The integration of secure boot with the platform security module aims to create a line of defense against the injection of unauthorized software through supply chain attacks or privileged operations on the server. Only approved firmware, bootloaders, kernels and kernel modules signed with IBM Cloud certificates and those of previously approved operating system suppliers can boot on IBM Cloud Virtual Servers for VPC.

The firmware configuration process described above includes the verification of firmware secure boot keys against the list of those initially approved. These consist of boot keys in the authorized keys database, the forbidden keys, the exchange key and the platform key.

Secure boot also includes a provision to enroll additional kernel and kernel-module signing keys into the first-stage bootloader (shim) through the Machine Owner Key (MOK) facility. Therefore, IBM Cloud’s operating system configuration process is also designed so that only approved keys are enrolled in the MOK facility.

Once a server passes all qualifications and is approved to boot, an audit chain is established that’s rooted in the hardware of the platform security module and extends to modules loaded into the kernel.

Figure 2: IBM Cloud VPC secure boot audit chain

How do I use verified hypervisors on IBM Cloud VPC virtual servers?


Good question. Hypervisor verification is on by default for supported IBM Cloud Virtual Servers for VPC. Choose a generation 3 virtual server profile (such as bx3d, cx3d, mx3d or gx3), as shown below, to help ensure your virtual server instances run on hypervisor-verified servers. These capabilities are readily available as part of existing offerings; customers can take advantage of them simply by deploying virtual servers with a generation 3 profile.

Figure 3: IBM Cloud Virtual Servers for VPC, Generation 3

IBM Cloud continues to evolve its security architecture and enhances it by introducing new features and capabilities to help support our customers.

Source: ibm.com

Thursday, 5 September 2024

When AI chatbots break bad

A new challenge has emerged in the rapidly evolving world of artificial intelligence. “AI whisperers” are probing the boundaries of AI ethics by convincing well-behaved chatbots to break their own rules.

Known as prompt injections or “jailbreaks,” these exploits expose vulnerabilities in AI systems and raise concerns about their security. Microsoft recently made waves with its “Skeleton Key” technique, a multi-step process designed to circumvent an AI’s ethical guardrails. But this approach isn’t as novel as it might seem.

“Skeleton Key is unique in that it requires multiple interactions with the AI,” explains Chenta Lee, IBM’s Chief Architect of Threat Intelligence. “Previously, most prompt injection attacks aimed to confuse the AI in one try. Skeleton Key takes multiple shots, which can increase the success rate.”

The art of AI manipulation


The world of AI jailbreaks is diverse and ever-evolving. Some attacks are surprisingly simple, while others involve elaborate scenarios that require the expertise of a sophisticated hacker. What unites them is a common goal: pushing these digital assistants beyond their programmed limits.

These exploits tap into the very nature of language models. AI chatbots are trained to be helpful and to understand context. Jailbreakers create scenarios where the AI believes ignoring its usual ethical guidelines is appropriate.

While multi-step attacks like Skeleton Key grab headlines, Lee argues that single-shot techniques remain a more pressing concern. “It’s easier to use one shot to attack a large language model,” he notes. “Imagine putting a prompt injection in your resume to confuse an AI-powered hiring system. That’s a one-shot attack with no chance for multiple interactions.”

According to cybersecurity experts, the potential consequences are alarming. “Malicious actors could use Skeleton Key to bypass AI safeguards and generate harmful content, spread disinformation or automate social engineering attacks at scale,” warns Stephen Kowski, Field CTO at SlashNext Email Security+.

While many of these attacks remain theoretical, real-world implications are starting to surface. Lee cites an example of researchers convincing a company’s AI-powered virtual agent to offer massive, unauthorized discounts. “You can confuse their virtual agent and get a good discount. That might not be what the company wants,” he says.

In his own research, Lee has developed proofs of concept to show how an LLM can be hypnotized to create vulnerable and malicious code and how live audio conversations can be intercepted and distorted in near real time.

Fortifying the digital frontier


Defending against these attacks is an ongoing challenge. Lee outlines two main approaches: improved AI training and building AI firewalls.

“We want to do better training so the model itself will know, ‘Oh, someone is trying to attack me,'” Lee explains. “We’re also going to inspect all the incoming queries to the language model and detect prompt injections.”
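A keyword filter is the crudest possible form of such inspection; real AI firewalls rely on trained classifiers rather than pattern lists, but a hypothetical sketch conveys the idea:

```python
import re

# A few hypothetical patterns typical of instruction-override attempts.
# This is an illustration only; production systems use trained detectors.
SUSPICIOUS = [
    r"ignore (all|any|previous) instructions",
    r"disregard (your|the) (rules|guidelines)",
    r"you are now .* without restrictions",
]

def looks_like_injection(prompt: str) -> bool:
    """Flag prompts that match known instruction-override phrasings."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS)
```

Flagged queries would be blocked or routed for review before ever reaching the language model.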

As generative AI becomes more integrated into our daily lives, understanding these vulnerabilities isn’t just a concern for tech experts. It’s increasingly crucial for anyone interacting with AI systems to be aware of their potential weaknesses.

Lee parallels the early days of SQL injection attacks on databases. “It took the industry 5-10 years to make everyone understand that when writing a SQL query, you need to parameterize all the inputs to be immune to injection attacks,” he says. “For AI, we’re beginning to utilize language models everywhere. People need to understand that you can’t just give simple instructions to an AI because that will make your software vulnerable.”
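The parameterization Lee describes is directly visible in code: the database driver treats user input strictly as data, never as SQL. A minimal sqlite3 example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # Parameterized query: `name` is bound as a value, so it can never
    # alter the statement's structure.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload is matched literally and returns no rows:
find_user("x' OR '1'='1")   # -> []
```

Building the query by string concatenation instead would let that same payload rewrite the WHERE clause and dump the table.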

The discovery of jailbreaking methods like Skeleton Key may dilute public trust in AI, potentially slowing the adoption of beneficial AI technologies. According to Narayana Pappu, CEO of Zendata, transparency and independent verification are essential to rebuild confidence.

“AI developers and organizations can strike a balance between creating powerful, versatile language models and ensuring robust safeguards against misuse,” he said. “They can do that via internal system transparency, understanding AI/data supply chain risks and building evaluation tools into each stage of the development process.”

Source: ibm.com

Thursday, 27 June 2024

Top 7 risks to your identity security posture

Detecting and remediating identity misconfigurations and blind spots is critical to an organization’s identity security posture especially as identity has become the new perimeter and a key pillar of an identity fabric. Let’s explore what identity blind spots and misconfigurations are, detail why finding them is essential, and lay out the top seven to avoid.

What are the most critical risks to identity security? Identity misconfigurations and identity blind spots stand out as critical concerns that undermine an organization’s identity security posture.

An identity misconfiguration occurs when identity infrastructure and systems are not configured correctly. This can result from administrative error, or from configuration drift, which is the gradual divergence of an organization’s identity and access controls from their intended state, often due to unsanctioned changes or updates.

Identity blind spots are risks that are overlooked or not monitored by an organization’s existing identity controls, leaving undetected risks that threat actors might exploit.

Why is finding these risks important?


Traditionally, security measures focus on fortifying an organization’s network perimeter by building higher “walls” around its IT resources. However, the network perimeter has become less relevant with the adoption of cloud computing, SaaS services and hybrid work. In this new landscape, full visibility and control of the activities of both human and machine identities is crucial for mitigating cyberthreats.

Both research and real-world incidents where a compromised identity served as the attacker’s initial entry point validate the need to secure identities. The Identity Defined Security Alliance’s most recent research found that 90% of organizations surveyed have experienced at least one identity-based attack in the past year.

Meanwhile, the latest Threat Intelligence Index Report validated what many of us in the industry already knew: Identity has become the leading attack vector. The 2024 report showed a 71% increase in valid identities used in cyberattacks year-over-year. Organizations are just as likely to have a valid identity used in a cyberattack as they are to see a phishing attack. This is despite significant investments in infrastructure security and identity and access management solutions. Hackers don’t hack in; they log in.

One notable recent example of an identity-based attack is the Midnight Blizzard attack disclosed in January 2024. Based on what has been published about the attack, the malicious actors carried out a password spray attack to compromise a legacy nonproduction test tenant account. Once they gained a foothold through a valid account, they used its permissions to access a small percentage of the company’s corporate email accounts. From there, they were able to exfiltrate sensitive information, including emails and attached documents.

What are the top seven risks to an organization’s identity security posture to avoid?


To stay one step ahead of identity-related attacks, identity and security teams should proactively improve their identity security posture by finding and remediating these common identity misconfigurations and blind spots. These are the key risks organizations should take steps to avoid:

Missing multi-factor authentication (MFA)

The US Cybersecurity and Infrastructure Security Agency (CISA) consistently urges organizations to implement MFA for all users and all services to prevent unauthorized access. Yet, achieving this goal can prove challenging in the real world. The complexity lies in configuring multiple identity systems, such as an organization’s Identity Provider and MFA system, along with hundreds of applications’ settings, to enforce MFA for thousands of users and groups. When these are not configured correctly, MFA may go unenforced due to accidental omission or gaps in session management.
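Auditing for such gaps can start with a simple cross-reference of each user's application access against per-application MFA enforcement. A toy sketch with hypothetical data:

```python
# Hypothetical per-application settings: is MFA enforced for sign-in?
APP_MFA = {"vpn": True, "mail": True, "legacy-crm": False}

def mfa_gaps(user_apps: dict, app_mfa=APP_MFA) -> list:
    """Return (user, app) pairs where a user can sign in without MFA."""
    return [(user, app)
            for user, apps in user_apps.items()
            for app in apps
            if not app_mfa.get(app, False)]   # unknown apps are treated as gaps
```

Each returned pair is a concrete remediation item: enforce MFA on the application or remove the user's access.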

Password hygiene

Effective password hygiene is crucial to an organization’s identity security posture, but common identity misconfigurations frequently undermine password quality and increase the risk of data breaches. Allowing weak or commonly used passwords facilitates unauthorized access through simple guessing or brute force attacks.

Strong but default passwords can make password spray attacks easier. Using outdated algorithms such as the SHA-1, MD4 and MD5 hashes or the RC2 and RC4 ciphers, all of which can be broken quickly, further exposes user credentials. Inadequate salting of passwords also weakens their defense against dictionary and rainbow table attacks, making them easier to compromise.
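By contrast, a modern approach pairs a unique random salt with a deliberately slow key-derivation function. A sketch using Python's standard library (the iteration count here is illustrative only; follow current guidance for real systems):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000   # illustrative; tune to current recommendations

def hash_password(password: str, salt: bytes = None):
    # A unique random salt per password defeats rainbow-table precomputation;
    # PBKDF2's iteration count slows brute force in a way a single pass of
    # MD5 or SHA-1 does not.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)
```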

Bypass of critical identity and security systems

Organizations deploy Privileged Access Management (PAM) systems to control and monitor access to privileged accounts, such as domain administrator and admin-level application accounts. PAM systems provide an extra layer of security by storing the credentials to privileged accounts in a secure vault and brokering access to protected systems via a proxy server or bastion host.

Unfortunately, PAM controls can be bypassed by resourceful admins or threat actors if not configured correctly, significantly reducing the protection they should provide. A similar problem can occur when users bypass zero trust network access (ZTNA) systems due to initial configuration issues or configuration drift over time.

Shadow access

Shadow access is a common blind spot in an organization’s identity security posture that can be difficult for organizations to discover and correct. Shadow access is when a user retains unmanaged access via a local account to an application or service for convenience or to speed up troubleshooting. Local accounts typically rely on static credentials, lack proper documentation and are at higher risk of unauthorized access. A local account with high privileges such as a super admin account is especially problematic.

Shadow assets

Shadow assets are a subset of shadow IT and represent a significant blind spot in identity security. Shadow assets are applications or services within the network that are “unknown” to Active Directory or any other Identity Provider. This means that their existence and access are not documented or controlled by an organization’s identity systems, and these assets are only accessed by local accounts. Without integration into Active Directory or any other Identity Provider, these assets do not adhere to an organization’s established authentication and authorization frameworks. This makes enforcing security measures such as access controls, user authentication and compliance checks challenging. Therefore, shadow assets can inadvertently become gateways for unauthorized access.

Shadow identity systems

Shadow identity systems are unauthorized identity systems that might fall under shadow assets but are called out separately given the risk they pose to an organization’s identity security posture. The most common shadow identity system is the use of unapproved password managers.

Given the scope of their role, software development teams can take things further by implementing unsanctioned secret management tools to secure application credentials and even standing up their own Identity Providers. Another risky behavior is when developers duplicate Active Directory for testing or migration purposes but neglect proper disposal, exposing sensitive employee information, group policies and password hashes.

Forgotten service accounts

A service account is a type of machine identity that can perform various actions depending on its permissions. This might include running applications, automating services, managing virtual machine instances, making authorized API calls and accessing resources. When service accounts are no longer in active use but remain unmonitored with permissions intact, they become prime targets for exploitation. Attackers can use these forgotten service accounts to gain unauthorized access, potentially leading to data breaches, service disruptions and compromised systems, all under the radar of traditional identity security measures.
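A periodic sweep for idle service accounts is one simple mitigation. This hypothetical sketch flags accounts whose last recorded use exceeds a threshold, so they can be reviewed and disabled:

```python
from datetime import date, timedelta

def stale_service_accounts(last_used_by_account: dict, today: date,
                           max_idle_days: int = 90) -> list:
    """Flag service accounts idle longer than the threshold for review."""
    cutoff = today - timedelta(days=max_idle_days)
    return sorted(name for name, last_used in last_used_by_account.items()
                  if last_used < cutoff)
```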

Adopt identity security posture management (ISPM) to reduce risk


Identity and access management (IAM) systems such as Active Directory, Identity Providers and PAM typically offer limited capabilities to find the identity misconfigurations and blind spots that lead to a poor identity security posture, because they typically don’t collect the necessary telemetry. Identifying these issues requires collecting and correlating data from multiple sources, including identity system log data, network traffic, cloud traffic and remote access logs.

That is why identity and security teams implement ISPM solutions such as IBM® Verify Identity Protection to discover and remediate identity exposures before an attacker can exploit them. IBM can help protect all your identities and identity fabric by using logs already in your security information and event management (SIEM) solutions or deploying IBM Verify Identity Protection sensors. IBM delivers fast time to value with unmatched visibility into identity activities in the first hours after deployment.

Source: ibm.com

Friday, 17 May 2024

Enhancing data security and compliance in the XaaS Era

Recent research from IDC found that 85% of CEOs who were surveyed cited digital capabilities as strategic differentiators that are crucial to accelerating revenue growth. However, IT decision makers remain concerned about the risks associated with their digital infrastructure and the impact they might have on business outcomes, with data breaches and security concerns being the biggest threats.

With the rapid growth of XaaS consumption models and the integration of AI and data at the forefront of every business plan, we believe that protecting data security is pivotal to success. It can also help simplify clients’ data compliance requirements as they fuel their AI and data-intensive workloads.

Automation for efficiency and security 


Data is central to all AI applications. The ability to access and process the necessary data yields optimal results from AI models. IBM® remains committed to working diligently with partners and clients to introduce a set of automation blueprints called deployable architectures. 

These blueprints are designed to streamline the deployment process for customers. We aim to let organizations effortlessly select and deploy their cloud workloads in a way that is tailor-made to align with preset, reviewable security requirements and that helps enable a seamless integration of AI and XaaS. This commitment to the fusion of AI and XaaS is further exemplified by the AI and data platform we introduced this past year, which is designed to enable enterprises to effectively train, validate, fine-tune and deploy AI models while scaling workloads and building responsible data and AI workflows. 

Protecting data in multicloud environments 


Business leaders need to take note of the importance of hybrid cloud support, while acknowledging the reality that modern enterprises often require a mix of cloud and on-premises environments to support their data storage and applications. The fact is that different workloads have different needs to operate efficiently.

This means that you cannot have all your workloads in one place, whether that is on premises, in a public or private cloud, or at the edge. One example is our work with CrushBank. The company uses watsonx to streamline desk operations with AI by arming its IT staff with improved information. This has led to improved productivity, which ultimately enhances the customer experience. A custom hybrid cloud strategy manages security, data latency and performance, so your people can get out of the business of IT and into their business. 

This all begins with building a hybrid cloud XaaS environment by increasing your data protection capabilities to support the privacy and security of application data, without the need to modify the application itself. At IBM, security and compliance are at the heart of everything we do.

We recently expanded the IBM Cloud Security and Compliance Center, a suite of modernized cloud security and compliance solutions designed to help enterprises mitigate risk and protect data across their hybrid, multicloud environments and workloads. In this XaaS era, where data is the lifeblood of digital transformation, investing in robust data protection is paramount for success. 

XaaS calls for strong data security


IBM continues to demonstrate its dedication to meeting the highest standards of security in an increasingly interconnected and data-dependent world. We can help support mission-critical workloads because our software, infrastructure and services offerings are designed to support our clients as they address their evolving security and data compliance requirements. Amidst the rise of XaaS and AI, prioritizing data security can help you protect your customers’ sensitive information. 

Source: ibm.com

Saturday, 11 May 2024

Empowering security excellence: The dynamic partnership between FreeDivision and IBM

In the ever-evolving landscape of cybersecurity, businesses are constantly seeking robust solutions to fortify their defenses and navigate the complex challenges posed by cyberthreats. FreeDivision, an IBM Business Partner, stands out in the field by understanding the local needs of its clients. Operating as a security service partner, FreeDivision leverages IBM’s endpoint detection and response (EDR) solution, IBM Security® QRadar® EDR, as part of its solution, freedivision.io, to address the unique security concerns of its clients.  

Clients look to FreeDivision for help in two key areas: Security audit and consultation, and incident response and recovery.  

Security audit and consultation


Many companies still underestimate the depth of security required, often relying solely on antivirus solutions. FreeDivision’s products and expertise distinctly stand out when conducting comprehensive security checks for clients. Its solution not only protects against threats, but also acts as a vigilant hunting tool.  

Through in-depth analysis of logs, FreeDivision guides clients toward fortified security postures, minimizing the risk of ransomware and other cyberthreats. Protection against ransomware is provided by IBM QRadar EDR’s adaptive system. With QRadar EDR, FreeDivision can resolve security incidents in seconds. The response procedures use artificial intelligence to prevent human error and enable rapid response to threats. 

“IBM Security QRadar EDR is like a powerful EDR–the built-in AI and automation make it fast, efficient and easy to use.”  —Sandro Huber, Chief Information Officer and Co-owner of FreeDivision

Incident response and recovery


For clients who seek assistance after falling victim to attacks, FreeDivision steps in to remediate the situation and fortify defenses. By engaging with clients who experienced a ransomware attack, FreeDivision not only resolves immediate threats, but also collaborates with them to establish resilient security measures for the future. To prepare its clients for any future attacks, FreeDivision has embedded IBM QRadar EDR to detect and block new and unknown threats, from ransomware to sophisticated file attacks, to memory-only attacks.  

A ransomware hacker attack 


When PeHtoo, a Czech manufacturer, was attacked by a ransomware hacker, it engaged FreeDivision to help it recover. Ivan Eminger, CEO of PeHtoo, tells the story: 

“By our standards, we have invested considerable resources in IT operations and security. Unfortunately, it turns out that this alone was not enough. We were attacked by a ransomware hacker. Our data was completely stolen and then encrypted. We had to start rebuilding the company from scratch. 

Fortunately, FreeDivision experts helped us set up new IT processes and security standards. Thanks to its MDR services, we have a constant overview of all user processes started in our company infrastructure and any deviation from normal user behavior is immediately addressed in an isolated environment outside of production operations. Combined with network security and a next-generation security gateway at the perimeter, we are now much better prepared to counter existing threats, allowing us to focus on the core activities of our business with greater peace of mind.” 

Why partner with IBM 


The choice of FreeDivision to build its solutions with IBM is rooted in the exceptional capabilities and support offered by IBM Security QRadar EDR, formerly known as ReaQta. It remediates known and unknown endpoint threats in near real-time with easy-to-use intelligent automation that requires little-to-no human interaction. You can make quick and informed decisions with attack visualization storyboards and leverage automated alert management and advanced continuous learning AI capabilities. 

FreeDivision shared 3 key features that made its decision to choose QRadar EDR easy: 

  • The console is intuitive, customizable and easy to use. 
  • The support from IBM is unparalleled. IBM not only delivers a solution but also stands by it when challenges arise. 
  • FreeDivision's customers appreciate the depth of the solution's investigation tools. 

“Our strength lies in the perfect blend of IBM’s global stature and our localized insights,” says Sandro Huber, Chief Information Officer and Co-owner of FreeDivision. “It’s the combination of IBM’s cutting-edge technology and our deep understanding of what matters in the local market. It’s a dynamic relationship built on trust, expertise, and a shared commitment to elevating cybersecurity standards.” 

Source: ibm.com

Thursday, 9 May 2024

Simplifying IAM through orchestration

Simplifying IAM through orchestration

The recent X-Force Threat Intelligence Index validated what many of us in the industry already knew: Identity has become the leading attack vector. The 2024 report showed a 71% increase in valid identities used in cyberattacks year-over-year. What really puts it into perspective is the realization that you are just as likely to have your valid identity used in a cyberattack as you are to see a phishing attack in your organization. Hackers don’t hack in; they log in.

The risk of valid identities being used as an entry point by bad actors is expected to grow as ever more applications and systems are added to today’s hybrid environments. We are finding that an overwhelming majority of organizations choose different identity vendors that offer the best capability for each use case, instead of consolidating with one vendor. This sprawl of identity tools is further compounded by the need to manage access to legacy application infrastructure and to integrate new users during mergers and acquisitions. The hybrid reality has also led to an inconsistent user experience for workers, partners and customers, an increased risk of identity-based attacks, and an additional burden on admins. 

To solve the identity challenges created by today’s hybrid environments, businesses need a versatile solution that complements existing identity solutions while effectively integrating various identity and access management (IAM) silos into a cohesive whole: a solution that helps create a consistent user experience for workers, partners and customers across all applications and systems. Organizations and industry analysts refer to this connected IAM infrastructure as an identity fabric, and organizations have begun to move toward connecting multiple IAM solutions through a common identity fabric.

Securing the digital journey


To protect the integrity of digital user journeys, organizations use a range of tools spanning bot mitigation, identity verification and affirmation, user authentication, authorization, fraud detection and adjacent capabilities such as risk analytics and access management. Building and maintaining these integrations is complex and carries operational overhead in time and resources. These various tools don’t easily interconnect and don’t generate standardized signals. As a result, the interpretation of the varied risk signals is siloed across different events along the digital user journey. This lack of an integrated approach to managing risk hinders the adoption of continuous adaptive trust principles and adds undue risk to the system. Disconnected identity tools prevent you from creating consistent user experiences and security controls. Orchestration solutions improve the efficacy and efficiency of risk management along digital user journeys.

Identity orchestration


Identity and access management projects are complex enough, with many taking 12-18 months. They require skilled staff to solve today’s identity challenges, such as integrating IAM silos and modernizing access to legacy applications. Many of the solutions on the market are not helpful and actually create more vendor lock-in. What is really needed is an open integration ecosystem that allows for flexibility, with integrations that are simple and require fewer skills to accomplish. This is where an identity fabric and identity orchestration come into play. Orchestration is the critical component and the integration glue of an identity fabric; without it, building an identity fabric would be resource-intensive and costly. Orchestration allows more intelligent decision-making, simplifies everything from onboarding to offboarding and enables you to build consistent security policies. Identity orchestration takes the burden off your administrators by quickly and easily automating processes at scale. This enables consistent, frictionless user experiences while improving identity risk posture and helping you avoid vendor lock-in. 

Benefits of identity orchestration


Design consistent, frictionless user experiences

Identity orchestration enables you to streamline consistent and frictionless experiences for your workers, partners and customers across the entire identity lifecycle. From account creation to login to passwordless authentication using passkeys to account management, orchestration makes it easy to coordinate identity journeys across your identity stack, facilitating a frictionless experience. IBM’s identity orchestration flow designer enables you to build consistent, secure authentication journeys for users regardless of the application. These journeys can be built effortlessly with low-code, no-code orchestration engines to reduce administrative burden.

Fraud and risk protection

Orchestration allows you to combine fraud signals, decisions and mitigation controls, such as various types of authenticators and identity verification technologies. You can clearly define how trusted individuals are granted access and how untrusted users are challenged with additional authentication. This approach overlays consistent and continuous risk and fraud context across the identity journey. IBM Security® Verify orchestration allows you to bring together fraud and risk signals to detect threats. It also provides native, modern and strong phishing-resistant risk-based authentication to all applications, including legacy apps, with drag-and-drop workflows.

Avoid vendor lock-in with identity-agnostic modernization

Organizations have invested in many existing tools and assets across their IAM stack, ranging from existing directories to legacy applications to existing fraud signals, to name a few. IBM Security Verify identity orchestration enables organizations to bring their existing tools and apply consistent, continuous and contextual orchestration across all identity journeys. It enables you to easily consolidate and unify directories, modernize legacy applications and streamline third-party integration for multifactor authentication (MFA), risk and notification systems.

Leverage IBM Security Verify


IBM Security Verify simplifies IAM with orchestration to reduce complexity, improve your identity risk posture and streamline the user journey, enabling you to easily integrate multiple identity providers (IdPs) across hybrid environments through low-code or no-code experiences.

IBM provides identity-agnostic modernization tools enabling you to manage, migrate and enforce consistent identity security from one IAM solution to another while complementing your existing identity tools. By consolidating user journeys and policies, you can maintain security consistency across all systems and applications, creating frictionless user experiences and security controls across your entire identity landscape.

Source: ibm.com

Saturday, 6 January 2024

A brief history of cryptography: Sending secret messages throughout time

A brief history of cryptography: Sending secret messages throughout time

Derived from the Greek words for “hidden writing,” cryptography is the science of obscuring transmitted information so that only the intended recipient can interpret it. Since the days of antiquity, the practice of sending secret messages has been common across almost all major civilizations. In modern times, cryptography has become a critical lynchpin of cybersecurity. From securing everyday personal messages and the authentication of digital signatures to protecting payment information for online shopping and even guarding top-secret government data and communications—cryptography makes digital privacy possible.

While the practice dates back thousands of years, the use of cryptography and the broader field of cryptanalysis are still considered relatively young, having made tremendous advancements in only the last 100 years. Coinciding with the invention of modern computing in the 20th century, the dawn of the digital age also heralded the birth of modern cryptography. With cryptography serving as a critical means of establishing digital trust, mathematicians, computer scientists and cryptographers began developing modern cryptographic techniques and cryptosystems to protect critical user data from hackers, cybercriminals and prying eyes. 

Most cryptosystems begin with an unencrypted message known as plaintext, which is then encrypted into an indecipherable code known as ciphertext using one or more encryption keys. This ciphertext is then transmitted to a recipient. If the ciphertext is intercepted and the encryption algorithm is strong, the ciphertext will be useless to any unauthorized eavesdroppers because they won’t be able to break the code. The intended recipient, however, will easily be able to decipher the text, assuming they have the correct decryption key.  
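This round trip can be sketched with a toy XOR cipher in Python. The function and key below are illustrative assumptions only, and XOR with a short repeating key is far too weak for real-world use:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; XOR is its own inverse,
    # so applying the same key again recovers the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_cipher(b"meet me at noon", b"secret")  # useless without the key
recovered = xor_cipher(ciphertext, b"secret")           # recipient applies the same key
```

Without the key, the ciphertext bytes look like noise; with it, decryption is the same operation as encryption.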

In this article, we’ll look back at the history and evolution of cryptography.

Ancient cryptography


1900 BC: One of the first implementations of cryptography was found in the use of non-standard hieroglyphs carved into the wall of a tomb from the Old Kingdom of Egypt. 

1500 BC: Clay tablets found in Mesopotamia contained enciphered writing believed to be secret recipes for ceramic glazes—what might be considered to be trade secrets in today’s parlance. 

650 BC: Ancient Spartans used an early transposition cipher to scramble the order of the letters in their military communications. The process works by writing a message on a piece of leather wrapped around a hexagonal staff of wood known as a scytale. When the strip is wound around a correctly sized scytale, the letters line up to form a coherent message; however, when the strip is unwound, the message is reduced to ciphertext. In the scytale system, the specific size of the scytale can be thought of as a private key. 
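The scytale's transposition can be modeled in a few lines of Python. This is a sketch under the simplifying assumption that the message length is a multiple of the staff's diameter (a real strip would be padded), and the function names are illustrative:

```python
def scytale_encrypt(message: str, diameter: int) -> str:
    # Unwinding the strip is equivalent to reading every `diameter`-th letter.
    return "".join(message[i::diameter] for i in range(diameter))

def scytale_decrypt(ciphertext: str, diameter: int) -> str:
    # Re-wrapping the strip around a staff of the same size reverses
    # the transposition.
    assert len(ciphertext) % diameter == 0  # sketch assumes padded input
    return scytale_encrypt(ciphertext, len(ciphertext) // diameter)

scrambled = scytale_encrypt("HELPMEIAMUNDERATTACK", 4)  # "HMMETEEURALINACPADTK"
original = scytale_decrypt(scrambled, 4)                # back to the plaintext
```

Only a reader who knows the diameter (the private key) can re-wrap the strip and line the letters up again.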

100-44 BC: To share secure communications within the Roman army, Julius Caesar is credited with using what has come to be called the Caesar Cipher, a substitution cipher wherein each letter of the plaintext is replaced by a different letter determined by moving a set number of letters either forward or backward within the Latin alphabet. In this symmetric key cryptosystem, the specific shift and direction of the letter substitution are the private key.
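A minimal Python sketch of the Caesar Cipher, where the shift value plays the role of the private key:

```python
def caesar(text: str, shift: int) -> str:
    # Shift each letter by `shift` positions, wrapping around the alphabet;
    # non-letters pass through unchanged.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)  # "DWWDFN DW GDZQ"
plaintext = caesar(ciphertext, -3)        # decrypting reverses the shift
```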

Medieval cryptography


800: Arab mathematician Al-Kindi invented the frequency analysis technique for cipher breaking, representing one of the most monumental breakthroughs in cryptanalysis. Frequency analysis uses linguistic data—such as the frequency of certain letters or letter pairings, parts of speech and sentence construction—to reverse engineer private decryption keys. Frequency analysis techniques can be used to expedite brute-force attacks, in which codebreakers attempt to methodically decrypt encoded messages by systematically applying potential keys in hopes of eventually finding the correct one. Monoalphabetic substitution ciphers that use only one alphabet are particularly susceptible to frequency analysis, especially if the private key is short and weak. Al-Kindi’s writings also covered cryptanalysis techniques for polyalphabetic ciphers, which replace plaintext with ciphertext from multiple alphabets for an added layer of security that is far less vulnerable to frequency analysis. 
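Frequency analysis against a Caesar-style monoalphabetic cipher can be sketched as follows. The helper names are illustrative, and the guess assumes English plaintext, in which 'E' usually dominates:

```python
from collections import Counter

def frequency_rank(ciphertext: str) -> str:
    # Letters of the ciphertext, ordered from most to least frequent.
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    return "".join(letter for letter, _ in Counter(letters).most_common())

def guess_caesar_shift(ciphertext: str) -> int:
    # Assume the most frequent ciphertext letter stands for plaintext 'E'.
    return (ord(frequency_rank(ciphertext)[0]) - ord("E")) % 26
```

On a long enough English ciphertext the guessed shift is usually correct; short or atypical texts defeat this simple version, which is exactly why polyalphabetic ciphers resist it.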

1467: Leon Battista Alberti, considered the father of modern cryptography, most clearly explored the use of ciphers incorporating multiple alphabets, known as polyalphabetic cryptosystems, the Middle Ages’ strongest form of encryption. 

1500: Although actually published by Giovan Battista Bellaso, the Vigenère Cipher was misattributed to French cryptologist Blaise de Vigenère and is considered the landmark polyalphabetic cipher of the 16th century. While Vigenère did not invent the Vigenère Cipher, he did create a stronger autokey cipher in 1586. 

Modern cryptography 


1914: The outbreak of World War I at the beginning of the 20th century saw a steep increase in both cryptology for military communications and cryptanalysis for codebreaking. The success of English cryptologists in deciphering German telegram codes led to pivotal victories for the Royal Navy.

1917: American Edward Hebern created the first cryptography rotor machine by combining electrical circuitry with mechanical typewriter parts to automatically scramble messages. Users could type a plaintext message into a standard typewriter keyboard and the machine would automatically create a substitution cipher, replacing each letter with a randomized new letter to output ciphertext. The ciphertext could in turn be decoded by manually reversing the circuit rotor and then typing the ciphertext back into the Hebern Rotor Machine, producing the original plaintext message.

1918: In the aftermath of World War I, German cryptologist Arthur Scherbius developed the Enigma Machine, an advanced version of Hebern’s rotor machine, which also used rotor circuits to both encode plaintext and decode ciphertext. Used heavily by the Germans before and during WWII, the Enigma Machine was considered suitable for the highest level of top-secret cryptography. However, like Hebern’s rotor machine, decoding a message encrypted with the Enigma Machine required the advance sharing of machine calibration settings and private keys that were susceptible to espionage and eventually led to the Enigma’s downfall.

1939-45: At the outbreak of World War II, Polish codebreakers fled Poland and joined notable British mathematicians—including the father of modern computing, Alan Turing—to crack the German Enigma cryptosystem, a critical breakthrough for the Allied Forces. Turing’s work also established much of the foundational theory for algorithmic computations. 

1975: Researchers working on block ciphers at IBM developed the Data Encryption Standard (DES)—the first cryptosystem certified by the National Institute of Standards and Technology (then known as the National Bureau of Standards) for use by the US government. While the DES was strong enough to stymie even the strongest computers of the 1970s, its short key length makes it insecure for modern applications. Its architecture, however, was and remains highly influential in the advancement of cryptography.

1976: Researchers Whitfield Diffie and Martin Hellman introduced the Diffie-Hellman key exchange method for securely sharing cryptographic keys. This enabled a new form of encryption called asymmetric key algorithms. These types of algorithms, also known as public key cryptography, offer an even higher level of privacy by no longer relying on a shared private key. In public key cryptosystems, each user has their own private secret key that works in tandem with a shared public key for added security.
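The exchange can be illustrated with deliberately tiny numbers; real deployments use moduli of 2048 bits or more, and the secret values below are arbitrary illustrative choices:

```python
p, g = 23, 5                      # public: a small prime modulus and a generator
alice_secret, bob_secret = 6, 15  # private keys, never transmitted

# Each party publishes g^secret mod p.
alice_public = pow(g, alice_secret, p)  # 8
bob_public = pow(g, bob_secret, p)      # 19

# Each party combines its own secret with the other's public value;
# both arrive at the same shared key without ever sending a secret.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)
```

An eavesdropper sees only p, g and the two public values; recovering the shared key from them is the discrete logarithm problem.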

1977: Ron Rivest, Adi Shamir and Leonard Adleman introduced the RSA public key cryptosystem, one of the oldest encryption techniques for secure data transmission still in use today. RSA public keys are created by multiplying large prime numbers, which are prohibitively difficult for even the most powerful computers to factor without prior knowledge of the private key used to create the public key.
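A toy RSA round trip with the small textbook primes 61 and 53 makes the mechanics concrete (actual keys use primes hundreds of digits long, plus padding schemes this sketch omits):

```python
p, q = 61, 53
n = p * q                # 3233: the public modulus, a product of two primes
phi = (p - 1) * (q - 1)  # 3120: Euler's totient of n
e = 17                   # public exponent, chosen coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (2753)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
```

Recovering d from (e, n) alone requires factoring n, which is what makes large-prime RSA keys hard to break. (The modular-inverse form of `pow` requires Python 3.8+.)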

2001: Responding to advancements in computing power, the DES was replaced by the more robust Advanced Encryption Standard (AES) encryption algorithm. Like the DES, the AES is a symmetric cryptosystem; however, it uses a much longer encryption key that cannot feasibly be brute-forced by modern hardware.

Quantum cryptography, post-quantum cryptography and the future of encryption


The field of cryptography continues to evolve to keep pace with advancing technology and increasingly sophisticated cyberattacks. Quantum cryptography (also known as quantum encryption) refers to the applied science of securely encrypting and transmitting data based on the naturally occurring and immutable laws of quantum mechanics for use in cybersecurity. While still in its early stages, quantum encryption has the potential to be far more secure than previous types of cryptographic algorithms and, theoretically, even unhackable. 

Not to be confused with quantum cryptography, which relies on the natural laws of physics to produce secure cryptosystems, post-quantum cryptographic (PQC) algorithms use different types of mathematical cryptography to create encryption that can withstand quantum computers.

According to the National Institute of Standards and Technology (NIST) (link resides outside ibm.com), the goal of post-quantum cryptography (also called quantum-resistant or quantum-safe) is to “develop cryptographic systems that are secure against both quantum and classical computers, and can interoperate with existing communications protocols and networks.”

Source: ibm.com

Saturday, 9 December 2023

How to build a successful risk mitigation strategy

How to build a successful risk mitigation strategy

As Benjamin Franklin once said, “If you fail to plan, you are planning to fail.” The same sentiment holds for a successful risk mitigation plan. The only way to achieve effective risk reduction is for an organization to use a step-by-step risk mitigation strategy to sort and manage risks, ensuring a business continuity plan is in place for unexpected events.

Building a strong risk mitigation strategy can set up an organization to have a strong response in the face of risk. This ultimately can reduce the negative effects of threats to the business, such as cyberattacks, natural disasters and other vulnerabilities the business operations may face.

What is risk mitigation?


Risk mitigation is the practice of putting an action plan in place to reduce the impact of, or eliminate, risks an organization might face. Once that plan has been developed and executed, it’s up to the organization to continue monitoring progress and make changes as the business grows and evolves over time. It’s important to address every aspect of the supply chain and tackle risk throughout the entire business.

Types of risk


While risks will vary greatly from one industry to the next, there are a few commonly identified risks worth noting.

Compliance risk: This is when an organization violates internal or external rules, putting its reputation or finances at risk.

Legal risk: This is a compliance risk that involves the organization breaking government rules, resulting in a risk of financial and reputational loss.

Operational risk: This is when there is a risk of loss from the organization’s normal daily business due to failed or flawed processes.

5 steps to a successful risk mitigation strategy


There are several tactics and techniques an organization could take to make a risk mitigation plan. Organizations need to be cautious, however, not to copy from another organization. In most cases, a business has unique needs and must make its own risk mitigation plan in order to be successful.

It’s important to take the time to build a strong risk mitigation team to strategize and put together a plan that works. This risk mitigation plan should weigh the impact of each risk and prioritize the risks based on severity. While plans will vary by necessity, here are five key steps to building a successful risk mitigation strategy:

Step 1: Identify

The first step in any risk mitigation plan is risk identification. The best approach for this first step is to heavily document each of the risks and continue the documentation throughout the risk mitigation process.

Bring in stakeholders from all aspects of the business to provide input and have a project management team in place. You want as many perspectives as possible when it comes to laying out risks and finding as many as possible.

It’s important to remember that all team members in the organization matter; taking them into consideration when identifying potential risks is vital.

Step 2: Perform a risk assessment

The next step is to quantify the level of risk for each risk identified during the first step. This is a key part of the risk mitigation plan since this step lays the groundwork for the entire plan.

In the assessment phase, you will measure the risks against one another and analyze the likelihood of each risk occurring. You will also analyze the degree of negative impact the organization would face if a risk, such as a cybersecurity or operational risk, were to occur.
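One common way to quantify this step is a simple likelihood-times-impact score. The sketch below is illustrative only: the 1-5 scales and example risks are assumptions, not a prescribed methodology:

```python
# Score each identified risk on 1-5 scales for likelihood and impact.
risks = [
    {"name": "ransomware attack", "likelihood": 3, "impact": 5},
    {"name": "vendor outage",     "likelihood": 4, "impact": 2},
    {"name": "compliance gap",    "likelihood": 2, "impact": 4},
]

for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]

# Highest score first: this ranking feeds directly into prioritization.
ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
```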

Step 3: Prioritize

The risks have been identified and analyzed. Now it’s time to rank them based on severity, which should have been determined in the previous step.

Part of prioritization might mean accepting an amount of risk in one part of an organization to protect another part. This tradeoff is likely to happen if your organization has multiple risks across different areas and establishes an acceptable level of risk.

Once an organization establishes this threshold, it can prepare the resources necessary for business continuity across the organization and implement the risk mitigation plan.

Step 4: Monitor

The groundwork has been laid and now it’s time to execute. By this stage a detailed risk mitigation and management plan should be in place. The only thing left to do is to let the risks play out and monitor them continuously.

An organization is always changing, and so are its business needs; therefore, it’s important that an organization has strong metrics for tracking each risk, its category and the corresponding mitigation strategy over time.

A good practice might be setting up a weekly meeting time to discuss the risks or to use a statistics tool for tracking any changes in the risk profile.

Step 5: Report

The last step of the risk mitigation strategy is to put the plan into action and then reevaluate it, based on monitoring and metrics, for efficacy. There is a constant need to assess the plan and adjust it as needed.

Analyzing the risk mitigation strategy is crucial to ensure it is up-to-date, adhering to the latest regulatory and compliance rules, and functioning appropriately for the business. Contingency plans should be in place if something drastic changes or risk events occur.

Types of risk mitigation strategies


The risk mitigation strategies listed below are the most commonly used and are often applied in tandem, depending on the business risks and their potential impact on the organization.

Risk acceptance: This strategy involves accepting the possibility of a reward outweighing the risk. It doesn’t have to be permanent, but for a given period it may be the best strategy to prioritize more severe risks and threats.

Risk avoidance: The risk avoidance strategy is a method for mitigating possible risk by taking measures to avoid the risk from occurring. This approach may require the organization to compromise other resources or strategies.

Risk monitoring: This approach would occur after an organization has completed its risk mitigation analysis and decided to take steps to reduce the chances of a risk happening or the impact it would have if it did occur. It doesn’t eliminate the risk; rather, it accepts the risk, focuses on containing losses and does what it can to prevent it from spreading.

Risk transfer: Risk transfer involves passing the risk to a third party. This strategy shifts the risk from the organization onto another party; in many cases, the risk shifts to an insurance company. An example of this is obtaining an insurance policy to cover property damage or personal injury.

Risk mitigation and IBM


Businesses face many challenges today, including combating financial crime and fraud, controlling financial risk, and mitigating risks in technology and business operations. You must develop and implement successful risk management strategies while enhancing your programs for conducting risk assessments, meeting regulations and achieving compliance.

We deliver services that combine integrated technology from IBM with deep regulatory expertise and managed services from Promontory, an IBM company. By using scalable operations and intelligent workflows, IBM helps clients achieve priorities, manage risk, fight financial crime and fraud, and meet changing customer demands while satisfying supervisory requirements.

Source: ibm.com

Saturday, 2 December 2023

Supercharge security operations: How to unlock analysts’ productivity

Supercharge security operations: How to unlock analysts’ productivity

Security analysts are all too familiar with the challenges of alert fatigue, swivel-chair analysis and “ghost chasing” spurred by false positives. Facing massive volumes of data coming from an expanding digital footprint and attack surfaces across hybrid multicloud environments, they must quickly discern real threats from all the noise without getting derailed by stale intelligence.


Many organizations have to juggle dozens of security tools, which creates scattered, contextless information that often weakens the foundational triad of cybersecurity: tools, processes and people. To help manage these inefficiencies that can delay crucial threat responses, security operations teams need to explore how to embrace AI and automation.

A day in the SOC 


A SOC analyst’s day often includes dealing with limited visibility due to expanding attack surfaces and responding to contextless alerts, which are challenging to decipher. As a result, they frequently spend up to one-third of their day investigating false positives.1 This not only impacts their productivity but also hinders their ability to address about half of the daily alerts, which might be indicators of an actual attack.

The biggest challenges faced by SOC analysts today include:

  • Poor visibility: Per The State of Attack Surface Management 2022 report, attack surfaces increased for two out of three organizations in 2022.
  • Alert fatigue and disconnected tools: According to the same attack surface management report, 80% of organizations use 10 or more tools (e.g., EDR, EPP, NDR, SIEM, threat intelligence, web traffic, email filtering, system, network and application logs, cloud logs, IAM tools).
  • Keeping up with cyberattacks: IBM’s Cost of a Data Breach report found that 51% of organizations struggle to detect and respond to advanced threats.
  • Outdated tools and manual methods: The same data breach report also shows that 32% of organizations lack security automation and orchestration.
  • Lack of standardization to fight organized cybercrime globally: The X-Force Threat Intelligence Index reveals signs of increased collaboration between cybercriminal groups.

Adding to these major challenges are other usual suspects, such as increasing complexity, limited resources amid rising costs, and the talent shortage (a.k.a. the skills gap).

As first responders, how SOC analysts prioritize, triage and investigate alerts and signs of suspicious activity defines the fate of attacks and their impact on the organization. When SOC analysts are slowed down by these challenges, a growing defense deficit and breach window emerge, which can expose the organization to higher risk.

Finding the needle

Threats hide in complexity and noise and thrive on the inability to keep up with the acceleration of attacks. Attacks can occur in minutes or seconds, while analysts, consumed by manual tasks, operate in hours or days. This disparity in speed is a real risk in itself.

Without comprehensive visibility, intelligent risk prioritization, effective detection, proactive threat hunting, and skills building, SOC analysts cannot improve their workflows and evolve with the threat landscape, perpetuating a vicious cycle.

Increasing security analysts’ productivity is fundamental to scaling cybersecurity in a rapidly evolving threat landscape. After hearing customers and security professionals describe their core challenges, IBM made this efficiency the goal and designed a purpose-built solution to deliver what is required to unlock analysts’ productivity.

Investigating and responding fast


QRadar Log Insights provides a simplified and unified analyst experience (UAX) that enables your security operations team to search and perform analytics, automatically investigate incidents and take recommended actions using all security-related data, regardless of the location or type of the data source.

With QRadar Log Insights’ UAX, you get:

◉ AI-based risk prioritization: As data flows in, logs and alerts are automatically checked against security rules and indicators of compromise (IoCs) from threat intelligence sources. After being enriched with business context, they’re processed by a self-learning engine that’s informed by past analyst actions. This engine identifies high-fidelity findings and filters out false positives, and AI-based risk scoring is then applied. Although the analyst didn’t have to do anything, all the steps and information about the events, threat intelligence and applied score are available for analysis.

◉ Automated investigation: A case is automatically created for incidents above a risk threshold calculated using a combined score from correlated events. Events in a case are arranged on a timeline for a quick view of attack steps. All identified artifacts are collected as evidence, such as IoCs, IP and DNS addresses, host names, user IDs and vulnerability CVEs. Additionally, findings continue to be correlated with artifacts collected on a sliding time window, providing continuous monitoring into the future.

◉ Recommended actions: Based on the identified artifacts and techniques from the attack, Log Insights suggests pointed mitigation actions, ensuring a quick response and speedy containment.

◉ Case management: Integrated case management streamlines collaboration and tracks progression toward resolution. Every piece of evidence is collected, appropriate action is recommended and those taken by peers are recorded.

◉ Insightful attack visualization: A comprehensive graphical visualization illustrates the attack path, highlighting the sequence and mapping attack stages to the impacted resources—known as the blast radius. This visualization empowers SOC analysts to gauge the impact, understand potential persistence techniques, and identify what areas are most important to address first.

Attack steps are also mapped to MITRE ATT&CK tactics and techniques (TTPs), offering detailed insights into adversarial actions and progress.

◉ Federated search: A high-performance search engine empowers threat hunting across all your data sources. From a single screen with a single query, you can search data from your security tools: EDRs, SIEMs, NDRs, log management, cloud, email security and more. This capability enables extended investigations into third-party sources, on premises and in other clouds, accommodating data not yet ingested into Log Insights. You can simultaneously query both the data within Log Insights and multiple external data sources, all at no additional cost.


◉ Integrated threat intelligence: X-Force and community-sourced threat intelligence are continuously updated, autonomously tracking threat activities. This dynamic system keeps pace with previously unseen threats, enhancing detection capabilities.


UAX, an integrated suite of capabilities powered by AI and automation, streamlines risk prioritization, threat investigation and visualization, federated search, and case management, enabling analysts to handle incidents with remarkable speed and efficiency.


Unlock analysts’ productivity with QRadar Log Insights


Disjointed information and fragmented workflows can significantly extend the amount of time security analysts spend on investigating and acting on security events. In cybersecurity, how your security team spends their time can mean the difference between simply analyzing a security event and dealing with a full-blown data breach incident. Every second counts.

To cope with the rising tide of data and alerts, organizations must transcend the limitations of manual processes. By integrating artificial intelligence and automation into their workflows, analysts are better equipped to keep pace with and respond to the rapidly intensifying landscape of cyber threats.

Unlock analysts’ productivity with a modern log management and security observability platform.

Saturday, 31 December 2022

A catalyst for security transformation: Modern security for hybrid cloud


Today it takes an organization an average of 252 days to identify and contain a breach across hybrid cloud environments, while ransomware attacks occur every 11 seconds. Traditional security can no longer keep up with our modern world. As most large businesses become multicloud, SaaS-heavy hybrid cloud users, enterprises must raise cyber awareness to protect a dramatically expanded attack surface.

Security is no longer an afterthought and must be embedded in everything we do. In an increasingly complex hybrid cloud environment, how do we secure it end to end and achieve a holistic security posture adequate to support business functions? It’s time to think of security at the enterprise level as industries shift to a new, Security First archetype: Transformative Security Programs.

Modernize security: Quality, velocity, affordability


80% or more of executives struggle to engage information security and operations disciplines early enough to prevent rework or security incidents. To adopt a Security First mindset, companies should consider policy compliance, security regulations and asset protection before they design their cloud strategy. To prevent costly rework, companies should also address complexities early in the strategy and design phase rather than dealing with security later.

A modernized security operation and management system should avoid the antiquated approach of treating security as a stand-alone function. Instead, run it as a true, integral business entity and invest accordingly to drive the cyber resiliency, quality, velocity and affordability needed to protect digital assets. With a Security First approach, not only will your vulnerabilities be reduced through secure architecture design and early, modern security testing, but your enterprise can also leverage automation, artificial intelligence (AI) and machine learning (ML) to shorten mean time to respond (MTTR) and offset cyber talent shortages.

Hybrid cloud mastery demands a whole-team approach to security


With 82% of security breaches caused by human error, a modern security program should include situational awareness through a single pane of glass and advanced cyber training, such as simulated cybersecurity attack-and-response exercises. These exercises combine the intensity of countering attacks with engaging elements to best educate your team and relate security to their day-to-day activities. Modern security awareness and education encourage people to exercise critical thinking and practice good cyber behavior during normal operations as well as disrupted, under-attack operations.

Though improving cybersecurity and reducing security risks are critical for the successful execution of digital initiatives in cloud portfolios, they’re not always directly linked in execution. Rather than merely running a security modernization program in parallel with a cloud adoption program, aim to explicitly integrate roadmaps and embed security into the hybrid cloud journey—with enterprise security and hybrid cloud security playing on the same team.

As an example, no matter who is leading a data fabric initiative, designing and implementing a secure data fabric requires the engagement of the whole team. Engaging the whole team means security becomes an explicitly shared responsibility, and this approach is easier and more effective when it’s grounded in a broader Security First and Security Always culture.

3 steps for overcoming the security challenge to hybrid cloud mastery


Step 1: Harmonize the security posture across the estate

Think holistically. Security posture is the sum of security policies, capabilities and procedures across the various components of a hybrid cloud estate. When we press the “start” button and ask specific clouds or components to interoperate productively, a lack of harmony among security postures can expose serious problems. Harmonizing the security posture across the entire hybrid cloud builds a fabric of protection that helps keep bad actors from entering through the weakest link. Top-down enterprise security management allows enterprises to achieve that consistency.
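One concrete way to think about harmonization is diffing each environment's settings against a common baseline so the weakest link becomes explicit. The baseline keys and environment snapshots below are invented examples, not any cloud's real policy schema:

```python
# Illustrative baseline: every environment should meet at least these settings.
BASELINE = {"mfa_required": True, "encryption_at_rest": True, "min_tls": "1.2"}

# Hypothetical posture snapshots per environment in a hybrid cloud estate.
postures = {
    "on_prem":      {"mfa_required": True,  "encryption_at_rest": True,  "min_tls": "1.2"},
    "public_cloud": {"mfa_required": True,  "encryption_at_rest": False, "min_tls": "1.2"},
    "edge":         {"mfa_required": False, "encryption_at_rest": True,  "min_tls": "1.0"},
}

def posture_gaps(postures, baseline):
    """Report every setting where an environment falls short of the baseline."""
    gaps = {}
    for env, settings in postures.items():
        missing = {k: v for k, v in baseline.items() if settings.get(k) != v}
        if missing:
            gaps[env] = missing
    return gaps
```

The output directly names the weakest links: any environment with a non-empty gap list is where an attacker would enter.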

Step 2: Create visibility through a single pane of glass

If hackers really want to attack you, they will probe your network on many different application ports and generate significant network activity. If your data is siloed, you might not notice this surge and could miss a leading indicator of a potential security attack.

Enclaves of data (apps, network, security) should be fused into a data lake to enable accurate security insights across the entire cloud estate. Your enterprise can apply AI and machine learning capabilities to the data lake, and IT Ops and AIOps data can become tools for making better business decisions. This aggregated visibility, known as a “single pane of glass,” enables detection, assessment and resolution of security anomalies with high velocity.
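In miniature, fusing enclaves into a data lake just means merging per-silo event feeds into one time-ordered stream, so cross-silo patterns (such as probing across many ports) become visible. The feeds and fields below are illustrative assumptions:

```python
# Hypothetical per-silo feeds: the same attack shows up partially in each.
app_events = [{"ts": 100, "src": "app", "port": 443},
              {"ts": 101, "src": "app", "port": 8080}]
net_events = [{"ts": 100, "src": "net", "port": 22},
              {"ts": 102, "src": "net", "port": 22}]

def fuse(*feeds):
    """Merge per-silo feeds into one time-ordered stream."""
    merged = [event for feed in feeds for event in feed]
    return sorted(merged, key=lambda e: e["ts"])

def ports_touched(stream):
    """Distinct ports seen across all silos: a crude breadth-of-probing signal."""
    return {e["port"] for e in stream}
```

Neither silo alone sees activity on three different ports; only the fused stream does, which is exactly the point of the single pane of glass.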

Remember, in a hybrid cloud ecosystem, security is more than just the security function: it’s central to your business. You also need the rights to harvest this data, secured through good terms and conditions with your cloud provider.

Step 3: Leverage AI to predict vulnerabilities

The single pane of glass is more powerful if we can also make better, faster sense of what we’re seeing. AI, machine learning and automation can ingest high volumes of complex security data, enabling near-real-time threat detection and prediction. AI tools can be “trained” to detect cyberattack patterns that have preceded incidents in the past. When those patterns recur, AI can trigger alerts or even initiate self-healing actions well before a human operator could detect and act on a potential incident.
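Production AI tooling trains models on historical incident patterns; as a deliberately simple stand-in for that idea, a z-score test can flag an event count that sits far outside its historical baseline. The threshold of 3 standard deviations is a conventional choice, not a product setting:

```python
import statistics

def is_anomalous(history, current, z_threshold=3.0):
    """Flag the current count if it sits far above the historical baseline.

    A toy stand-in for trained detection models: here the learned
    "pattern" is simply the typical event-count distribution.
    """
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean  # flat baseline: any change is anomalous
    return (current - mean) / stdev > z_threshold
```

In practice the alert from such a check would feed the correlation and case-creation machinery described earlier, rather than paging a human directly.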

With ongoing security talent challenges and 3.5 million unfilled security jobs, leveraging advanced automation, AI and machine learning allows enterprises to find new ways to put security first with skill and velocity.

It’s time to embrace the transformational power of security to keep up with the demands of the modern world. To master hybrid cloud, you need to develop a unified security program that steers business initiatives, optimizes security resources and transforms your operating culture to be Security First.

Source: ibm.com