GRCIE

Security in Space

Cybersecurity, GRC and Auditing Intelligent Systems

Every day we wake up, and before engaging with another human being, we either passively or actively engage with an algorithm connected to a sensing system. These ‘thinking systems’ acquire, interpret and measure our presence – our interaction with the physical world, our biological information and the way we feel. 

 

They regulate, in some fashion, almost every aspect of our daily lives and form a critical component of the ecosystems designed to deliver goods and services anytime, anywhere.

 

To meet a globalized demand for near-instant access to goods and services, organizations heavily instrument each component of their inordinately complex supply chains. They leverage machine learning (ML) models or trained algorithms to make decisions geared toward increasing efficiencies while maintaining quality and customer satisfaction. 

 

Humans are a part of these supply chain processes, and automated systems are increasingly being used to monitor and supervise workforces, often emulating the functions of human managers and instructing people on how to conduct themselves before, during and after their jobs. In the name of productivity, employee and workforce monitoring systems are collecting ever more biometric information to boost employee engagement and improve safety.  

 

US legal and regulatory frameworks have recently been established to help regulate the future of algorithmic work. For example, California just passed a law (AB 701) designed to govern transparency, fairness and safety around warehouse quota and monitoring systems. While this law deals specifically with warehouse distribution centers, as more algorithms are used to manage workforces, people’s day-to-day activities and their pay, there will be increased scrutiny across the board.

 

The rise of algorithmic work regulations is forcing enterprise GRC capabilities to rethink their ML models’ risk and control structures, starting with making them explainable.  

 

Explainable AI (XAI) is the concept that an ML model and its output must, at every single stage, be explained in a way that is interpretable or understandable to an average person. Making ML models explainable isn’t just a reaction to a regulatory requirement; XAI improves the end-user experience of a product or service, increases trust and ultimately improves quality.   
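To make the idea concrete, here is a minimal, purely illustrative sketch of a “rational explanation” for a decision system, assuming a hypothetical worker-monitoring score. The feature names, weights and threshold are all invented for the example; no real quota system is implied.

```python
# Hypothetical weights for a toy worker-review scoring model.
WEIGHTS = {"items_per_hour_below_quota": 0.6,
           "idle_minutes": 0.3,
           "error_rate": 0.1}

def explain(features, threshold=0.5):
    """Return the decision plus the per-feature contributions that drove it,
    ranked by influence -- the kind of plain-language rationale XAI asks for."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    decision = "flagged for review" if total >= threshold else "not flagged"
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    reasons = [f"{name} contributed {value:.2f}" for name, value in ranked]
    return {"decision": decision, "score": round(total, 2), "reasons": reasons}

result = explain({"items_per_hour_below_quota": 0.8,
                  "idle_minutes": 0.2,
                  "error_rate": 0.1})
```

The point of the sketch is that an average person can read the ranked reasons and see exactly which inputs drove the outcome; an unexplainable model offers no such trace.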

 

Making ML models explainable isn’t just a reaction to a regulatory requirement

Regulatory agencies increasingly require companies to disclose more about these systems to combat the perception that algorithms are black-box systems that cannot be explained.

The Information Commissioner’s Office (ICO), a UK-based independent authority for data privacy, and The Alan Turing Institute partnered to conduct extensive research around making AI explainable. They have devised a framework that describes six different types of explainability: 

 

  1. Rational explanation: What were the reasons that led the system to arrive at its decision?  
  2. Responsibility explanation: Who are all the team members involved in designing, developing, managing and implementing an AI system? This includes defining the contact information for requests to have a human review an AI-driven or assisted decision.
  3. Data explanation: This explanation is critical. Here we document what data was used in a decision and how. This includes a description of the training datasets.
  4. Fairness explanation: This is where we discuss bias. The fairness explanation looks at the design and implementation of an AI system to ensure that its decisions are unbiased and fair and the individual has been treated equitably.  
  5. Safety and performance explanation: What are the steps taken across the system that maximize the accuracy, reliability, security and safety around its decisions and behaviors?
  6. Impact explanation: What are the controls in place to both consider and monitor the impacts that the use of an AI system and its decisions has or may have on an individual, the workforce or even broader society?
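The six explanation types lend themselves to a structured record that a GRC review can check for completeness. A sketch, with field names following the framework; the completeness check and the sample content are our own illustration:

```python
from dataclasses import dataclass, fields

@dataclass
class ExplainabilityRecord:
    rationale: str           # reasons that led the system to its decision
    responsibility: str      # who designed, built, manages and operates it
    data: str                # what data was used and how, incl. training sets
    fairness: str            # bias analysis and equitable-treatment evidence
    safety_performance: str  # accuracy, reliability, security and safety steps
    impact: str              # effects on individuals, workforce and society

    def missing(self):
        """Names of explanation types that are still undocumented."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

record = ExplainabilityRecord(
    rationale="Quota score driven by scan-rate features",
    responsibility="ML platform team; appeals contact: HR review inbox",
    data="12 months of badge and scanner telemetry; no purchased data",
    fairness="Disparate-impact test run across shifts and sites",
    safety_performance="",   # not yet documented -- the check should catch it
    impact="Quarterly review of quota effects on injury rates",
)
```

An auditor (or a CI gate) can call `record.missing()` and refuse sign-off until every explanation type has real content behind it.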

 

While building an explainable system, it is vital that we adhere to a set of guiding principles. Therefore, ICO leveraged the principles outlined in GDPR as inspiration for the four principles for making an AI system explainable: 

 

  1. Be transparent: Fully and truthfully document the processes around your company’s use of AI-enabled decisions – when and why.
  2. Be accountable: Employees who oversee the “explainability” requirements of an AI decision system must ensure that those requirements are present in the design and deployment of the models.
  3. Consider context: When planning on using AI to help make decisions about your workforce, you should consider the setting in which you will do this, the sector and use case contexts and the contextual factors around how you would deliver explainability to the impacted users.
  4. Reflect on impacts: Build and deploy your ML system with consideration of its impacts – physical, emotional and sociological effects, impact on free will and privacy, and implications for future generations.

 

While understanding the principles and types of explainability is a good starting point, cybersecurity and GRC workforces must increase their competence in auditing, assessing, protecting and defending artificially intelligent systems. We have quite a way to go.

 

This year, ISACA issued cornerstone studies on the state of cybersecurity and GRC workforces. These studies highlighted challenges, including a widening skills gap, issues in gaining access to a pipeline of qualified applicants and budget reductions. Because security and GRC organizations must respond, we have turned to AI to help mitigate our risks.

 

Key Highlights 

  • The use of ML or robotic process automation (RPA) in security operations is increasing. Roughly 34% of respondents stated that they use AI in SecOps – up four percentage points from a year ago.
  • Over a fifth (22%) of the respondents have increased their reliance on artificial intelligence or automation to help decrease their cybersecurity skills gap. This compounds the issue as we lack the skills to govern, protect and defend these new AI systems.  
  • While these technologies are not yet replacing our human resources, they may instead shift the types of resources required. For example, AI may broadly decrease the number of analysts needed; however, human resources will be reallocated to designing, monitoring and auditing algorithms. 
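The kind of SecOps automation these respondents describe can be as simple as rule-based alert triage, scoring incoming alerts so analysts see the riskiest first. A sketch with invented rules and weights, not any vendor’s actual logic:

```python
def triage(alert):
    """Return a priority score for a normalized alert dict."""
    score = 0
    if alert.get("source") == "edr":
        score += 3                       # endpoint detections rank higher
    if alert.get("asset_criticality") == "high":
        score += 4
    # One point per ten failed logins, capped so it can't dominate the score.
    score += min(alert.get("failed_logins", 0) // 10, 3)
    return score

alerts = [
    {"id": 1, "source": "edr", "asset_criticality": "high", "failed_logins": 0},
    {"id": 2, "source": "proxy", "asset_criticality": "low", "failed_logins": 45},
]
queue = sorted(alerts, key=triage, reverse=True)
```

Even this trivial version illustrates the governance point: the scoring rules themselves now need the same documentation, review and audit trail as any other control.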
“We must begin building and training on AI risk, controls, and audit frameworks, and train our personnel in the field”

Given that GRC and cybersecurity organizations have the mandate to govern, assess, protect and defend both enterprise use of AI/ML systems and our own use in areas such as SecOps, we must begin building and training on AI risk, controls and audit frameworks, and train our personnel in the field.

 

Key steps to help GRC professionals establish a framework for auditing and assessing compliance around algorithms include: 

 

  1. Document all six explainability types for your algorithm. What are the processes that streamline the complete transparency and accountability of your AI model? Describe the setting and industry in which the AI model will be used and how this affects each of the six types of explainability.
  2. Document all data collection and preprocessing activities. How representative is the data about those impacted by the AI system? Where was the data obtained, how was it collected and is it aligned with the purpose for which it was initially collected? Is the system using any synthetic data? How was it created? Do any of the datasets involve any protected characteristics? Did the team detect any bias or determine if the involved data sets reflect past discrimination? How did they mitigate it?  
  3. Assessment of the entire AI team’s diversity. Was the team that was involved in the system design diverse? Did the team reflect the user population the algorithm serves? Was there anyone on the team who is neurodiverse? Was there an evaluation to determine if a more diverse team would design a system more resilient to bias?
  4. Assess all documentation, processes and technology in place to ensure the system is built to extract relevant information for explainability. Is the system explainable by design? In selecting the AI model, did the team consider the specific type of application and the impact of the model on decision recipients? What is the total cost of ownership for the AI system, and is it cheaper than the previous potentially more explainable system? For systems leveraging social, identity or biometric information, did the team seek to make interpretability a key requirement? If the organization has chosen to leverage a ‘black box’ system, did the team document the rationale? How are the models tested, and does the change management process include model updates and versioning records? Who is responsible for validating the explainability of our AI system?
  5. Document and validate the team’s rationale for the AI system’s results. How is the team visually representing the logic of the system’s output? What tools are being used to present the results in a way that is interpretable to our workforce? 
  6. Define how your organization prepared implementers to deploy the AI system. How has your organization trained your AI system implementers? Can they detect where bias may occur and how to mitigate it? 
  7. How did the organization incorporate security into the design? Did the organization perform a system-level risk assessment? Is there a risk and control matrix for the ecosystem? Did the team create a threat model and define the application and ecosystem security requirements? How was the system penetration tested? What was the secure code review process and what tools were used? What were all the types of attacks identified, and what security logging, monitoring and defense patterns were created? Can the defense systems successfully detect AI/ML system-specific attacks – for example, data poisoning attacks? What are the incident response and forensics processes and playbooks should the system be breached?
  8. Document the roles and responsibilities in the development of your algorithm. For example, strategist, product manager, designer, architect, AI development team, implementer, AI operations team, security and compliance, senior and executive management. Was everyone adequately trained?
  9. Define and review all subjects that documentation needs to reflect. For example, the decision to design the system, describing the explanation types and how the principles were applied, data collection and acquisition process, data preprocessing, model selection, building, testing and monitoring. What are the tools used for selecting an explanation and how explanations will be delivered to requestors? What is the compliance policy, the risk and control matrix and the entire security plan for the AI system?
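Step 7 asks whether defenses can detect AI/ML-specific attacks such as data poisoning. Real detection is far more involved, but the control point can be illustrated with a minimal, assumption-laden screen that flags training values far outside a feature’s typical range before they reach the model:

```python
from statistics import mean, stdev

def poisoning_screen(values, z_threshold=3.0):
    """Return indices of values more than z_threshold standard deviations
    from the mean -- a crude pre-training sanity check on one feature."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > z_threshold]

# 50 ordinary readings plus one injected extreme value at index 50.
clean = [10.0 + 0.1 * (i % 5) for i in range(50)]
suspect = poisoning_screen(clean + [999.0])
```

Subtler poisoning (many small, coordinated shifts) would sail past a z-score check; the audit question is whether the team has documented which attacks their screening can and cannot catch.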

As more laws pass that force organizations to be transparent, AI systems must be designed for explainability before the first line of code is written. Remember, if the team is unable to explain the entire system in easy-to-understand terms, the system is not well designed.

 

Key References

 

We leveraged co-badged guidance by the ICO and The Alan Turing Institute, which aims to give organizations practical advice to help explain the processes, services and decisions delivered or assisted by AI to the individuals affected by them. 

 

 

Examples of laws and regulations around AI

 

Lawsuits involving Artificial Intelligence

Uncategorized

The Rise of VR and the Transformation of the Cybersecurity Capability

Humans have been captivated by stories of being transported to sprawling virtual worlds since the beginning of the golden age of science fiction. At its dawn in 1935, writer Stanley Weinbaum first conceptualized virtual reality (VR) in the short story Pygmalion’s Spectacles. In this story, a professor invents a pair of goggles that enables the wearer to immerse themselves in “a movie that gives one sight and sound… taste, smell, and touch… You are in the story, you speak to the shadows (characters) and they reply, and instead of being on a screen, the story is all about you, and you are in it.”

 

Stanley’s imagination would have to wait 30 years before cinematographer Morton Heilig created the first VR system. In the 1960s, Heilig built the Sensorama, “a telescopic television apparatus for individual use,” in which “the spectator is given a complete sensation of reality, i.e., moving three-dimensional images which may be in color, with 100% peripheral vision, binaural sound, scents and air breezes.” While his invention did not enjoy any commercial success, it paved the way for modern VR systems.

Source: Morton Heilig [Public Domain] / Wikimedia Commons

Over the decades, VR continued to evolve, viewed mainly as an emergent technology, focused heavily on gaming systems, military applications and niche educational or employer training systems.

All of this changed recently as widespread consumer demand and an increase in enterprise adoption have moved science fiction into reality.

Studies performed by eMarketer indicate that almost 59 million people in the US (or 17.7% of the US population) will use VR at least once a month. Coupled with the explosive surge in consumer demand, PWC’s Global Entertainment & Media practice predicts VR as the fastest-growing content segment from 2020 to 2025, with revenues rising by 30%.

This growth extends well beyond the consumer market. VR is expected to transform significant aspects of enterprise markets as well, as 77% of companies believe that they will increase their spending in VR over the next five years, with an elevated focus on transforming workforce training and improving efficiencies in areas such as engineering and the supply chain.

VR provides organizations with the ability to provide rich, immersive, life-like interactions and experiences, enabling users to create entirely new approaches to interaction and human connection.

As we begin to actualize this new age of unprecedented disruption, VR brings forth possibilities that were never previously imagined. Through creativity and imagination, cybersecurity organizations can benefit from this transformation.

Leveraging VR to Transform the Cybersecurity Capability

The ecosystems in which VR systems operate are commonly referred to as metaverses. At their core, these environments are interconnected, hyper-instrumented worlds infused with artificially intelligent thinking systems that cross the digital, biological and physical worlds. This intersection, and the accompanying speed of technological development, is exerting profound changes for which cybersecurity and GRC workforces are ill-prepared. 

 

ISACA’s State of Cybersecurity 2021 study illustrates this best: “Roughly 61% of all respondents report understaffed organizations. Filling technical individual contributor positions is difficult as only 50% of applicants are well qualified for the positions. With 4 million cybersecurity jobs open globally, it’s critical that we completely transform how we train and upskill our workforce with a special focus on our human skills and mastery of security controls.”

 

VR offers a real opportunity for us to take a step back and redesign a cybersecurity and GRC user experience. Like all skills, cybersecurity protection and defense capabilities are predicated on a few essential requirements.  


With the very real adoption of metaverses, cybersecurity skills must now cross into virtual worlds. While the industry is starting discussions around how we define cybersecurity roles and frameworks – the reality is that VR offers real opportunities in the way we design, the way we train and the way we operate. 

“With the very real adoption of metaverses, cybersecurity skills must now cross into virtual worlds”
Skilling and Learning: Where We Learn Speed, Adaptability, Accuracy and Form

Creators of learning experiences in immersive environments have an almost unlimited ability to design and present interactive content that allows cybersecurity students to digest and apply knowledge quickly. Enterprise metaverse platforms such as EngageVR provide cybersecurity and GRC trainers with a highly configurable virtual environment to teach science and technology, human communication, teamwork and collaboration training, making learning more immersive and experiential and significantly reducing training times.  

3D modeling and mind-mapping applications such as Gravity Sketch and Noda give learning designers an immersive, experiential and even tactile platform for communicating complex 2D ideas into 3D. Activities such as threat modeling, application risk assessments and process modeling can be visualized by cybersecurity and GRC workers (and learners), allowing the user to learn and experiment in a safe environment.

Security Operations Centers 

Building security operations centers (SOCs) is expensive, requiring investments in hardware and physical infrastructure. Since many cybersecurity roles are remote, hybrid or partially outsourced, replicating the SOC experience at home does not easily scale. While we are a distance away from the holographic interfaces we see in Iron Man, current virtual environments give creators the ability to offer an intermediary step: an “infinite office” or workspace that allows users to straddle the virtual and physical worlds.  

Platforms such as vSpacial provide insights into how users can operate with multiple levels of variously sized screens, with the user sitting at the center of a 360-degree desktop. 

Risks and Considerations for Cybersecurity & GRC Workforce Transformation Efforts 

One cannot overstate how early-stage enterprise VR applications are, which means that the security of the entire ecosystem is not always fully designed. Security and GRC organizations need to do proper due diligence when selecting providers and make deliberate decisions about the level of access granted to sensitive information or environments.  

A few things to keep in mind: 

Amy Webb, the CEO of the Future Today Institute, discusses our entrance into the Synthetic Decade.  

She describes: “A deep push to develop synthetic versions of life is already underway. Synthetic media, such as AI-generated characters, have storylines. Humanlike virtual assistants will make our appointments and screen our calls. AI-powered digital assistants control homes and cars and next-gen network infrastructure speed adoption. Everyone alive today is being scored – we’re shedding data just by virtue of being alive. From the food we eat to the feelings we experience, everything over the next decade will be synthesized… blurring the line between what we consider real or virtual.”

 

The cybersecurity and GRC community is at a crossroads. The accelerated pace of technological disruption is pushing organizations to redefine how we approach protection and defense. Designing cybersecurity of the future requires a willingness to explore how technology trends manifest in this future world and define the iterative steps necessary to protect and defend in a world composed of intelligent ecosystems.

 

Attackers are better than us at adapting to, leveraging and exploiting disruption. We operate in a world bound by rules. Their limits are their own creativity. 

 

It will take our own creativity and imagination to mold and shape our world of tomorrow.  

Security & Community

Empathy: The Overlooked Ingredient in Cybersecurity

Technological innovation is moving at the speed of life. We live in a world infused with artificially intelligent sensors that cross biological, physical and digital boundaries. Not surprisingly, cybersecurity and GRC workforces are struggling to keep pace. The people, processes and technologies that make our new world go round require a very different approach toward protection and defense.

The problems we have are primitive, systemic and require transformative thinking and approaches. To design and build the cybersecurity workforces of the future, we must have a clear understanding of our current state, which includes an analysis of our emotional state – a deeper dive into our humanity. 

Several organizations have analyzed the current state of our cyber workforce over the past year. Diving into that data uncovers some uncomfortable truths. The most important takeaway is that iteratively improving the existing workforce is not sufficient.  

ISACA’s State of Cybersecurity 2022: Global Update on Workforce Efforts, Resources and Cyberoperations Report gives much insight into our collective consciousness. The study asked respondents to identify the top five most important soft skills security professionals need today. The top two skills were communication (57%) and critical thinking (56%). There were also some disconcerting revelations. According to the report, the bottom two soft skills valued in the cybersecurity industry were empathy (13%) and honesty (16%). Plainly stated, we value communication and critical thinking, but we do not think empathy and honesty are important.  

The fact that we as cybersecurity professionals think that it is not necessary to be empathetic is frankly the most significant aha moment that any recent survey has invoked. It explains many of the systemic problems we are seeing and experiencing in the industry today.  

So, what exactly is empathy? The dictionary defines it as the capacity to understand or feel what another person is experiencing – the ability to figuratively step into another’s shoes to view the situation at hand.

As to why empathy is so important in cybersecurity, we need to view it from a leadership and cultural perspective. To further dive into this, we looked at Businessolver’s 2021 State of Workplace Empathy study. That research unearthed several key findings, all of which pointed to this fact: leaders are struggling to reconcile empathy gaps with employees.

Significant findings of the Businessolver study include: 

Another study, The Ernst & Young 2021 Empathy in Business Survey, tells us there is a danger in underestimating the importance of empathy.

 

Here are some of their findings: 

We are seeing the consequences of this mindset gap. We all know about ‘The Great Resignation’ happening in the United States. This is now a global phenomenon. According to the ISACA State of Cybersecurity 2022 Study, The Great Resignation continues to significantly impact our global workforce. A full 60% of respondents reported difficulties retaining qualified cybersecurity professionals, up seven percentage points from 2021.
“The Great Resignation continues to significantly impact our global workforce”
Two of the top five reasons cybersecurity professionals leave their jobs are high work stress levels (45%) and a lack of management support (34%). In an industry where the battle for cybersecurity professionals is intense, the Ernst & Young survey is prescient. According to the study, there are many benefits to leading with empathy. Responses like this tell us why:
Clearly, ISACA’s report reveals the cybersecurity industry’s apathy towards empathy, while the other studies illuminate the positive outcomes for an organization where leaders are empathetic. So, where is the disconnect for us? Let’s look at another side of cyber activity to determine the answer. Cyber villains are diverse by design, and that diversity affords them a constant infusion of different ways of thinking. Attackers understand that compromising the user is the fastest way to access the information or resources they are targeting. And to compromise a user, you need to understand their emotional state. The ISACA report indicates that the predominant attack types leveraged as part of a compromise were:
Note that the top two mechanisms of attack leverage involve a significant understanding of the users’ emotional state. The attackers choose to hone in on our emotional weaknesses and exploit us. They leverage their understanding of how we will react to certain situations. The very emotion that we as an industry deemed unworthy as a critical skill is the single greatest mechanism by which we get exploited. And exploiting away they are! Verizon’s 2021 Data Breach Investigations Report concludes that:
So, how is it that threat actors across the board can manipulate us through our emotions, yet empathy is considered to be one of our industry’s least important skills? We know the importance of empathy in the business world. We can see the impact on workforces both when we lack and when we embrace empathy at the leadership level. At the same time, we see how threat actors wield empathy as a way to take advantage of us. We need to stop thinking that empathy is not important!

But how do we improve empathy? Some people are naturally empathetic – unfortunately, not most of us. It is difficult to put yourself in another person’s position without bias and look at the world unvarnished through their eyes. On the good side, others become empathetic through diverse lived experiences and meaningful exposure to different people. Without a doubt, diversity improves empathy.

The bad news is that we are not diverse as an industry, and less than 12% of industry professionals responding to the ISACA survey are under 34 years old. This is staggering. It means the generation most in tune with empathy is barely represented in our workforce. Combine this with well below half of our workforce being women and people of color, and we are at a distinct disadvantage in effectively nurturing empathy.

The solution? We need more diversity in the cyber industry, plain and simple. The more diverse we become, the more empathetic we will be as an industry. The writing is on the wall. We just need to put action to our words!
Supply Chain Security

Global Focus on Supply Chain Security Has Transformational Impacts for SMBs

Attacks on digital and physical supply chains are nothing new. If attackers cannot launch their assault against a series of well-fortified systems, focusing their attention on the more frictionless experiences offered by less secure trusted suppliers reduces their risk and yields dividends.

The problem with securing the highly interconnected systems between companies and their suppliers is that it is insanely difficult. One may look no further than the challenges the entertainment industry has in protecting content from leaking to understand the work necessary to protect an end-to-end supply chain. 

In this context, the SolarWinds breach was notable in a couple of key ways:
So how does this impact small and medium-sized businesses (SMBs)?
Innovation and creativity are at the heart of the SMB. Their size and agility give them the ability to bring innovative solutions to a larger organization and, in turn, accelerate their client’s ability to drive creativity into their products and services faster. These SMBs often have access to sensitive systems and highly confidential information, placing them squarely on a company’s critical supplier list, subjecting them to the same rigorous security controls to which they themselves must adhere.
As a result, we see more SMBs building information security programs that can be certified or authorized by an external entity. Internal legal, GRC and procurement organizations are increasingly requiring organizations not just to comply with but build security programs that can be certified. Over the last several months, there has been a marked increase in SMBs engaging security firms to help them build programs based on ISO 27001 or SOC 2 Type 2 security frameworks.
Yet, SMBs have challenges. In the wake of a global pandemic, the world was forced to transition its workforce virtually overnight. SMBs found themselves especially ill-prepared to handle this monumental shift. The National Small Business Association testified before the U.S. Senate Committee on Small Business in March 2019, saying that “only 14% of small businesses rated their ability to mitigate cyber risk and vulnerabilities as useful.” Consider:
Many smaller companies are now completely virtual and plan to stay that way. With more and more companies hiring employees spread across the globe, cross-border hiring can be especially tricky. The security architectures and control structures are very different for an organization with no physical presence. As a result, SMBs are analyzing technology solutions such as firewall as a service, cloud-based business VPNs, and cloud application security platforms such as cloud access security brokers (CASBs). While these technologies help provide a more comprehensive foundation for an SMB's cloud security architecture, they often require heavy initial investments, impose large minimum license counts, or don't support SIEM integration out of the box.
There are also a few hidden issues. Many of these cloud security providers have not yet undergone security certifications themselves, further limiting the number of vendors available to SMBs. This is especially the case in the cloud business VPN space, where vendors began providing business offerings yet lag behind in certifying their own information security programs.
Another challenge involves an organization’s log aggregation solutions and daily audit and log reviews. Most smaller organizations are neither trained nor staffed to design the patterns necessary to detect security incidents or data breaches. This is especially the case when collecting, aggregating and analyzing attacks across multiple cloud providers. While we are seeing an increase in managed security services providers that support the SMB market, they often drive organizations toward specific security architectures. Their solutions focus on organizations with a physical presence, lack support for Macs (popular with SMBs), and offer limited support for the analysis of the variety of cloud providers commonly used by smaller businesses.
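To make the detection-pattern problem concrete, here is a minimal sketch of one of the simplest patterns an SMB's log review might run over authentication logs: flagging sources that generate a burst of failed logins. The event shape (`time`, `source`, `outcome`) is hypothetical; a real pipeline would first map provider-specific records (e.g., AWS CloudTrail entries or Azure sign-in logs) into a common form like this before any analysis.

```python
from collections import defaultdict
from datetime import timedelta

def failed_login_bursts(events, threshold=5, window_minutes=10):
    """Return sources with `threshold` or more failed logins inside a time window.

    `events` is a list of dicts with hypothetical normalized fields:
      'time'    -- a datetime of the event
      'source'  -- the originating IP or account (str)
      'outcome' -- 'success' or 'failure'
    """
    window = timedelta(minutes=window_minutes)

    # Group failure timestamps by source.
    by_source = defaultdict(list)
    for e in events:
        if e["outcome"] == "failure":
            by_source[e["source"]].append(e["time"])

    # Slide a window over each source's sorted failures; one hit is enough.
    alerts = []
    for source, times in by_source.items():
        times.sort()
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                alerts.append(source)
                break
    return alerts
```

Even a toy rule like this illustrates the staffing problem: someone has to choose the threshold and window, tune them per provider, and review the alerts every day.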
However, solution providers are catching up. Vendors that offer comprehensive solutions to SMBs, and that certify their own internal security programs, are in greenfield territory: small companies become large companies, and trusted partnerships with innovative organizations are a competitive advantage.
So, what recommendations do we have for SMBs?
Security in Space

Time for Infosec Professionals’ Imaginations to Stretch to Outer Space

On Friday, April 16, NASA announced that it had selected SpaceX to move forward in building the first modern human landing system (HLS), returning humans to the surface of the Moon for the first time in nearly 50 years.
This marks a dramatic step toward sustainable lunar exploration and preparation for the ultimate journey of a human-crewed mission to Mars.
NASA stated: “The exploration of the Moon and Mars is intertwined. The Moon provides an opportunity to test new tools, instruments and equipment that could be used on Mars, including human habitats, life support systems, and technologies and practices that could help us build self-sustaining outposts away from Earth.”

Interplanetary exploration will rely on a complex supply-chain network from terrestrial/on-ground to low Earth orbit and on to the Moon, Mars and beyond. This new interplanetary supply chain will exploit the same emergent technologies that have given rise to the disruptive forces that mark our entrance to the 4th Industrial Revolution. Cloud, artificial intelligence, blockchain and additive manufacturing are already forming the core foundational components of the architectures that enable space technologies to be delivered and funded turnkey "as a service," allowing for democratization of space and space data access, significantly lowering the barrier to entry. Bank of America expects the space industry to more than triple to a US$1.4 trillion market within a decade, forecasting revenue growth of 230% – from about $424 billion in 2019 to about $1.4 trillion in 2030. 

For the space economy to exploit its full potential, a scalable, extensible, resilient and secure infrastructure of orbital communication and transportation services is being created, giving rise to the “space for space” economy where goods and services are built “in space for space.”

Yet, with all advancements, there is risk. The value of the digital and physical cargo to be transported is immense. Assets mined on planets and small bodies may be worth more than the total value of the Earth's current economy. The intellectual property digitally transported across these complex supply chains will provide nations and companies with an incalculable competitive advantage. And the same architectures that support terrestrial digital supply chains will be just as exploitable in space.

With disruption comes opportunity, and attackers are better and faster than us at adapting to, leveraging and exploiting disruption. In a future where speed and agility are defining factors, they have the edge.
Currently, there is a race to develop offensive space capabilities designed to intercept, deny service to or alter satellite communications. Organized underground groups will be ready, armed and able to execute cyber-attacks against space transportation systems: hijacking cargo, abducting people and holding them for ransom, or intercepting and stealing digital intelligence.
The cloud-based architectures that will underpin interplanetary commercial transportation and services will be exploitable by a range of different threat actors. And while countries and corporations alike are developing capabilities to detect, predict and defend against these attacks, they lack a consistent and comprehensive framework.

In 2020, the US government published the policy directive, Cybersecurity Principles for Space Systems, that outlined five main principles: 

While these principles and the resultant application of information security frameworks such as NIST, ISO 27001, or SOC 2 Type 2 across the entirety of space supply chains are a good first step, the design for how we approach security around these systems will need to transform. We will need to be better, faster and more adaptable. And, while the use of artificial intelligence and thinking systems will be prevalent, we will need to be prepared to see cybersecurity and defense personnel aboard spacecraft.

Information security and GRC professionals need to expand our knowledge and, quite frankly, imagination to include the applied sciences involved in space. We have to become more experienced in life safety systems. AI needs to be foundational to all cybersecurity and GRC professionals' training, as we will be working alongside thinking systems in harsh environments where there are microseconds between life and death.  

Which brings me to diversity. We have no real idea what type of person will be best suited for interplanetary travel or outpost settlements. Make no mistake: once we leave this planet for another destination, we will begin to evolve, and evolution requires diversity.
If we are to protect and defend the people, companies, and countries in our charge, we will need racial, gender, identity, physical and neuro-diversity.
There is a high likelihood that the attributes that make someone successful here on Earth will not serve them well on another planet. People who think outside of the box may be the ones to thrive.
Leaders and futurists have predicted we may see the first human on Mars in the next 5-10 years, with colonization to happen soon thereafter. We sit at the dawn of interplanetary travel. As we embark on this next phase in human history, it is critical that we consider the end-to-end risks involved in the development of these new economies and the diversity in our workforce necessary to help protect and defend the people, goods and services that comprise the new space ecosystems.
Workforce Security

ISACA State of Security Part 1

State of Cybersecurity 2021, Part 1: Global Update on Workforce Efforts, Resources and Budgets reports the results of the annual ISACA global State of Cybersecurity Survey, conducted in the fourth quarter of 2020.
We’re joined by:
The report (www.isaca.org/state-of-cybersecurity-2021) focuses on the current trends in cybersecurity workforce development, staffing and cybersecurity budgets.
The issue of cybersecurity workforce deficiencies remains unresolved, despite years of reporting on this problem from numerous resources.
Cloud Security | Workforce Security

The Growing Field of Cloud Security and What it Means to You

Now that everything is in the cloud — who's going to secure all of that data? Cloud security is a growing field inside a field with a great shortage of talent, so if you're curious about cybersecurity, the cloud is a great place to start. Join us for this panel that will include moderated questions, as well as a Q&A portion — bring your questions and your curiosity.
Jenai Marinkovic – Executive Director – GRCIE & vCTO/CISO TiroSecurity
Jenai is a multi-disciplinary technologist and strategist with 20 years of experience in architecting, building, and securing systems at scale. She has designed and operated cybersecurity capabilities in live sports, gaming and entertainment, biomedical manufacturing, laboratory diagnostics, healthcare, and robotics in agriculture. She is an expert speaker for ISACA with a special focus on emergent technology and cybersecurity futurism trends. She has run architecture, innovation, engineering, security, and operations teams. Her security expertise spans security architecture, engineering, defense, and forensics. Jenai is a founding member of the NextCISO Apprenticeship, an organization dedicated to preparing women and people of color for positions in the GRC industry while identifying and cultivating potential CISOs at the onset of their careers.
Sara Tumbarella, CISSP – Sr. Cloud Security Engineer & Information Security Manager at Foghorn Consulting
Sara is a Sr. Cloud Security Engineer & Information Security Manager at Foghorn Consulting, where she helps clients secure cloud systems, automate compliance, and navigate regulatory requirements. Prior to Foghorn, she was the Information Security Manager at SRS Acquiom, based in Denver. She has a passion for helping companies build information security programs from the ground up that are tailored to meet their unique needs and requirements. Sara holds an M.S. in Information Systems and Security and has earned the CISSP and AWS Certified Security – Specialty certifications.
Security & Community | Workforce Security

Cybersecurity, Community and Change: How to Meet the Coming Challenge

At the end of World War II, no industry was more vital than American steel. The global steel demand was voracious. European and Asian cities had been devastated by the bombings and needed to rebuild, while American cities were booming. Besides rebuilding, steel was needed for everything from new cars to the new interstate highways under construction.
US steel mills were there to heed the call, producing more than half the world’s steel in the late 1940s and roughly 40% of the world’s steel throughout the 1950s. Four out of every 10 Americans made their living directly or indirectly from the industry. Steel companies were at the apex of an industrial power with virtually no manufacturing rivals for decades.
However, by the 1970s, the steel industry had begun its epic collapse. The American steel industry, believing itself invulnerable, was headed by a complacent and oftentimes insular management that was slow to bring in modern technology and respond to changing market conditions. In a labor-intensive industry like steel, that meant closing mills and massive, regional layoffs.
Towns in the shadow of these shuttered mills lost over 50% of their populations, leading to a collapse in their economic base. The impacts were devastating. High unemployment and declining tax revenues crippled the education systems. Poverty increased, urban blight crept in and there was an amplification of the social injustice laid upon communities of color. To this day, the greater Midwest suffers as a result of being hard-coded into technical debt-laden architectures that make it nearly impossible to adapt or pivot out of a downward spiral.
But what happened in the 1970s cannot be traced to a singular reason. There was no sudden disruptive event that triggered the downward trajectory of steel. Any time you see an epic collapse in any system, you need to go bigger to understand the problem.
“Any time you see an epic collapse in any system, you need to go bigger to understand the problem”
When we look at things at the macro level, we start to see that what happened to the steel industry was triggered by the move from the second to the third wave of the industrial revolution. The First Industrial Revolution really was a revolution. It gave rise to water and steam power, leading to the industrial transformation of society with trains and the mechanization of manufacturing. The Second Industrial Revolution is typically seen as the period where electricity and the assembly line led to mass production and, to some extent, to automation. The Third Industrial Revolution had everything to do with the rise of computers, the rise of robotics in manufacturing, the birth of the internet and significantly more automation. The year 1969 marked the start of the Third Industrial Revolution. And it was at this time, almost to the year, that, looking back, we started to see the collapse of the steel industry.
The opening decade to any revolution in industry is always marked by what we call big bang disruptions. According to Forbes, “Big bang disruptions are large-scale fast-paced innovational waves that can disrupt stable businesses very rapidly. With big bang disruption, entire product lines — whole markets — can be rapidly obliterated as customers defect en masse and flock to a product that is better, cheaper, quicker, smaller, more personalized and convenient. Disrupters can come out of nowhere and go global very rapidly. Disruption can happen so quickly and on such a large scale that it is hard to predict or defend against.”
In 2020, we entered a new wave — the Fourth Industrial Revolution. You might have already guessed the focus. We are in a time marked by the convergence of the digital, physical and biological worlds, all with the additional accelerators such as advanced robotics and cognitive-thinking systems.
We opened this decade with the first of many big bang disruptions. The pandemic triggered a wave of automation, the extent of which we may not fully understand for years to come. Other disruptions are sure to follow.
In the era after World War II, American author and journalist John Gunther proudly proclaimed that “America is steel” because, at the time, the United States alone could produce more steel than Britain, West Germany, France, Japan and Russia combined. America was indeed “steel.”
That is what cloud is now; our new world’s steel. Cloud, like steel, undergirds the very fabric of society. It strengthens and interlinks the technology in our bodies, our buildings and all creatures great and small. It underpins our digital, biological and physical worlds. And as we enter this new age of thinking systems and move into this brave new world, it is critical that we understand our past.
Cloud, like steel, undergirds the very fabric of society
Just recently, the US government awarded SpaceX the contract to build the first modern human landing system (HLS), returning Americans to the surface of the moon for the first time in nearly 50 years. This marks a dramatic step toward sustainable lunar exploration and preparation for the ultimate journey of a human-crewed mission to Mars. And, even more recently, the first commercial rockets were launched into space with “space tourists.”

Leaders and futurists have predicted that we may see the first human on Mars in the next 5-10 years, with colonization to happen soon thereafter. This is not a Star Trek futuristic visioneering exercise. We sit at the dawn of interplanetary travel, and it is critical that we as an industry understand the implications of the biggest big bang disruption in the history of our planet.

And with all disruption comes opportunity. All Game of Thrones fans know that “chaos is a ladder.” Attackers are better and faster than us at adapting to, leveraging and exploiting disruption. In a future where speed and agility are defining factors, they have the edge.
One of the downfalls of the steel industry was our collective inability to come together and tackle world-changing problems with world-changing thinking. We lacked a diversity by design mindset. We failed to understand that diversity was our strength — diversity enables resilience, adaptability and scalability. Diversity forces us to think outside of the box and create the conditions where we control big bang disruptions instead of succumbing to them.
Ultimately, monocultures die. When a monoculture dies, it wipes out or cripples everything in its wake. So, why a diversity discussion when talking about space? Because cybersecurity today is a monoculture. It’s why we are failing. It’s why we are losing this war. We are the same people we were 20 years ago. We do not even have to look at gender, identity and race; it’s more than that. Our experiences are the same. We all came up through systems administration, network engineering, application development or desktop support. We have the same skills and the same ways of thinking.
If we are to protect and defend the people, companies and countries in our charge, we will need racial, gender, identity, physical and neurodiversity. We will need creative problem-solvers and divergent thinkers. The only way to think outside of the box is to apply the learnings and insights from a diverse set of collective experiences and to do what humans do best: to connect and to share these experiences, and improve upon them. It takes community to truly innovate.
So, having taken a trip through our past and gazed forward into our future, we must ask ourselves: how prepared are we to enter this new age — the dawn of the Fourth Industrial Revolution?

Like the days of steel, the infrastructure that girds our digital critical infrastructure is fragile, and it's breaking. One need look no further than the continuous reporting of supply chain breaches and ransomware demands to understand the state of security. When a security breach prevents a large swath of the United States from getting gasoline, we have a problem. And that's minor in comparison to what could happen.

ISACA recently released its State of Cybersecurity 2021 Part 2: Threat Landscape, Security Operations and Cybersecurity Maturity report. A big conclusion in the report is that “business as usual is not working.” The report states: “Change is ever present for cybersecurity professionals who partner daily with business leaders to meet organizational goals amid growing regulatory requirements and a threatened landscape. Much has already changed since ISACA collected this data at the end of 2020. High-profile cyber-attacks, including those affecting SolarWinds, Microsoft and Colonial Pipeline, thrust cybersecurity to the forefront for government and business leaders, prompting new regulatory changes. Undoubtedly, there will be more.”
“One of our issues is that proper security is inordinately resource-intensive”
One of our issues is that proper security is inordinately resource-intensive. Regardless of the amount of automation we have at our fingertips, it is not enough. As we have learned from previous industrial revolutions, the opening years are fraught with disruption that impacts whole communities. The difference is that today's disruptions too often have immediate, widespread impact.
This new wave of automation is now upon us, and this time we expect to see massive job losses inside the services sector, the exact sector that manufacturing pivoted into when their jobs disappeared.
The World Economic Forum’s Future of Jobs Report 2020 supports this. The report states that:
While it might be exciting to think that 97 million new roles could emerge from this latest phase of the industrial revolution, we cannot ignore the fact that millions of people might be left behind. We need a new way of thinking to solve this, especially when it comes to cybersecurity. We are looking at millions of jobs opening up in security worldwide and no real plan for how to fill them.
“We are looking at millions of jobs opening up in security worldwide and no real plan for how to fill them”
Currently, we commonly do not hire people with little to no experience ("junior people") in the cybersecurity field, no matter how many degrees or certificates they hold, or how much will or grit they show. It does not matter. We want people with 5–10 years of experience and a CISSP just for a junior role. Shame on us. We continue to let the ghosts of the past haunt us into the same groupthink decisions that nearly wiped out a region.
We cybersecurity professionals are working 100+ hour workweeks, killing ourselves to keep our executive leadership from having to testify in front of Congress because of problems that manifested under our jurisdiction. Yet, in this field, we resist hiring trainees.
Training a junior person takes 6–12 months before they can take work off our plates and, of course, makes us less productive in the short term. Yes, there is risk in bringing on a junior person. They do not always work out. But neither do some of the people who have had technical careers their whole lives, and when a senior hire fails, the investment lost is greater. We need to stop looking at people as junior, not technical enough or not experienced enough, and start looking at each person as a container of limitless potential with decades of collective experiences that will enable us to once and for all break outside the proverbial box.
Consider this: in 1966, an African-American nurse named Marie Van Brittan Brown, who spent many nights at home alone while her husband was away and felt unsafe amid high crime rates and unresponsive police in her neighborhood, devised an early security system for her own home. It used a camera and a monitor to see who was outside the front door. This type of security system is now widely used in homes across the world.
There is no book we can read, no well-worn path that we can take to solve our cybersecurity staffing needs — it’s the greatest challenge of our collective lives. We will have to start from the beginning.
The answer lies in community. And we need more.

One in Tech, an ISACA Foundation

One In Tech is an organization that seeks to build a healthy digital world that is safe, secure and accessible for all. To combat barriers commonly faced by underrepresented groups, it has built a suite of programs focused on children, women, people of color and those underserved due to socioeconomic status or bias. Its objective is to build equity and diversity in the digital space. One In Tech provides three key programs designed to address global needs and deliver measurable impact:

The We Lead Tech program aims to correct the racial and cultural imbalance within tech professions. The lack of diversity is incompatible with the values of the tech industry: innovation, creativity and diversity of thought. Hiring individuals who do not look, talk or think like their employers enables organizations to avoid the costly pitfalls of conformity and results in more innovative thinking. ISACA collaborated with City Colleges of Chicago in the creation of this program. 

SheLeadsTech is a program that works to increase the representation of women in technology leadership roles and the tech workforce. Powered through a vast global network of women IT professionals dedicated to supporting others, SheLeadsTech provides women with mentorship, leadership training and skills training to grow and excel within that industry. This very robust program offers a number of opportunities for engagement, including the ambassador initiative, education and events, a mentorship program and a resource center.
The Young Leaders in Tech program provides under-resourced, disenfranchised children with the knowledge and skills to help them avoid online risks, build e-learning skills and explore career pathways into the cybersecurity field. It works to ensure the common barriers blocking equity are addressed so that youth can serve as the building blocks of a safe, knowledgeable, innovative and inclusive digital future. The program offers a suite of online and in-person educational initiatives for grades K-12.

The Next CISO — Building a Next Generation Cybersecurity Workforce, Diverse by Design

On a farm in Northern California, the idea for the NextCISO Program was born. We partnered with Kris Rides, a cybersecurity recruitment specialist, to start an apprenticeship that trains people with no technical expertise as junior GRC analysts. We based it on the belief that a foundational understanding of GRC can allow someone to pivot into any other area of cybersecurity.
A foundational understanding of GRC might allow someone to pivot into any other area of cybersecurity
Working with a diverse group of people from across the country with widely varied backgrounds, we taught our students the fundamentals of GRC, ISO 27001, how to audit artificial intelligence and the fundamentals of design, with an emphasis on human skills (soft skills). We put them on client work and, with a team of entirely junior people and one senior executive, built an entire ISO 27001-compliant information security program. In addition to the technology aspect, it included service provider selection, security testing and assessment services, and auditing of cloud environments and defense. The duration was seven months; the pace was intense. "It would have made a drill sergeant proud," claimed one graduate of the program.
ISACA’s State of Cybersecurity 2021 report digs into why hiring managers have low confidence in cybersecurity applicants. Interestingly, the report cited that the largest skills gap among cybersecurity professionals is soft skills — communication, flexibility and leadership — yet these are rarely considered in the hiring process. The second-largest skills gap cited was security controls implementation.
At Next CISO, we believe that part of our problem is where we’re looking for talent. Are you looking at your internal teams beyond just the IT team? Are you looking at Marketing? HR? Legal? QA? We have students from all of these experiences who have done very well in this program and will make wonderful GRC analysts. And now we are training three local people, all in front-line service jobs, into this new world of ours.
We need to start thinking differently. Not everyone needs to or should start as a SOC analyst. We all need to look at every neighbor about to lose their job to automation and ask, can you transition to infosec or an adjacent industry?
We at the local community level need to build programs that reskill and upskill people into digital security careers. And every company that has a security workforce needs to start looking at its percentages. What if 40% of all your incoming roles were open to people with little to no experience but promising potential? How could you restructure to accommodate this change? If you are a mid-size company starting a security program, might you bring in one senior and one junior role? We think that not only could you, but you should. It can be done successfully. We proved it!
It’s time for new approaches. Remember, the Fourth Industrial Revolution is upon us.
Workforce Security

ISACA State of Cybersecurity – Part 2
