
Cybersecurity, GRC and Auditing Intelligent Systems

Every day we wake up, and before engaging with another human being, we either passively or actively engage with an algorithm connected to a sensing system. These ‘thinking systems’ acquire, interpret and measure our presence – our interaction with the physical world, our biological information and the way we feel. 


They regulate, in some fashion, almost every aspect of our daily lives and form a critical component of the ecosystems designed to deliver goods and services anytime, anywhere.


To meet a globalized demand for near-instant access to goods and services, organizations heavily instrument each component of their inordinately complex supply chains. They leverage machine learning (ML) models or trained algorithms to make decisions geared toward increasing efficiencies while maintaining quality and customer satisfaction. 


Humans are a part of these supply chain processes, and automated systems are increasingly being used to monitor and supervise workforces, often emulating the functions of human managers and instructing people on how to conduct themselves before, during and after their job. In the pursuit of productivity, employee and workforce monitoring systems are collecting ever more biometric information in the name of boosting employee engagement and improving safety.


US legal and regulatory frameworks have recently been established to help regulate the future of algorithmic work. For example, California just passed a law designed to govern and regulate transparency, fairness and safety around warehouse quota or monitoring systems. While this law deals specifically with warehouse distribution centers, as more algorithms are used to manage workforces, people’s day-to-day activities, and their pay, there will be increased scrutiny across the board.


The rise of algorithmic work regulations is forcing enterprise GRC capabilities to rethink their ML models’ risk and control structures, starting with making them explainable.  


Explainable AI (XAI) is the concept that an ML model and its output must, at every single stage, be explained in a way that is interpretable or understandable to an average person. Making ML models explainable isn’t just a reaction to a regulatory requirement; XAI improves the end-user experience of a product or service, increases trust and ultimately improves quality.   
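To make this concrete, one widely used XAI technique is feature attribution: scoring how much each input drives the model's decisions so the result can be narrated in plain language. Below is a minimal sketch using scikit-learn's permutation importance; the model, dataset and feature names are hypothetical stand-ins, not a reference implementation.

    # Minimal feature-attribution sketch. Assumes scikit-learn is installed;
    # the model, dataset and feature names below are hypothetical stand-ins.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    feature_names = ["tasks_per_hour", "idle_minutes", "error_rate", "shift_length"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance estimates how much each input drives the model's
    # decisions -- a score that can be narrated as a plain-language rationale.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda pair: -pair[1]):
        print(f"{name}: {score:.3f}")

Ranked attribution scores like these are only raw material; the explanation delivered to an affected person still has to be written in terms they understand.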


“Making ML models explainable isn’t just a reaction to a regulatory requirement”

Regulatory agencies increasingly require companies to disclose more about these systems to combat the perception that algorithms are black-box systems that cannot be explained.

The Information Commissioner’s Office (ICO), a UK-based independent authority for data privacy, and The Alan Turing Institute partnered to conduct extensive research into making AI explainable. They have devised a framework that describes six different types of explainability (see the sketch after this list): 


  1. Rational explanation: What were the reasons that led the system to arrive at its decision?  
  2. Responsibility explanation: Who are all the team members involved in designing, developing, managing and implementing an AI system? This includes defining the contact information for requests to have a human review an AI-driven or assisted decision.
  3. Data explanation: This explanation is critical. Here we document what data was used in a decision and how. This includes a description of the training datasets.
  4. Fairness explanation: This is where we discuss bias. The fairness explanation looks at the design and implementation of an AI system to ensure that its decisions are unbiased and fair and the individual has been treated equitably.  
  5. Safety and performance explanation: What are the steps taken across the system that maximize the accuracy, reliability, security and safety around its decisions and behaviors?
  6. Impact explanation: What are the controls in place to both consider and monitor the impacts that the use of an AI system and its decisions has or may have on an individual, the workforce or even broader society?
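One way teams operationalize these six types is to capture them as a structured record attached to every model release. The sketch below is a minimal illustration of that idea; the field names and example values are ours, not part of the ICO/Turing framework.

    # Illustrative record for the six ICO/Turing explanation types.
    # All field names and example values are hypothetical, not part of the
    # framework itself; adapt them to your own model governance tooling.
    from dataclasses import dataclass

    @dataclass
    class ExplainabilityRecord:
        rationale: str           # why the system reached its decision
        responsibility: str      # accountable roles and human-review contact
        data: str                # datasets used, including training data lineage
        fairness: str            # bias testing performed and equity outcomes
        safety_performance: str  # accuracy, reliability, security and safety
        impact: str              # monitored effects on individuals and society

    record = ExplainabilityRecord(
        rationale="Quota alerts are driven by rolling four-week task throughput.",
        responsibility="ML platform team; human review via hr-review@example.com",
        data="2022 warehouse telemetry; no protected characteristics used.",
        fairness="Demographic parity checked quarterly across shifts and sites.",
        safety_performance="94% accuracy on holdout; penetration tested quarterly.",
        impact="Quota decisions are reviewed by a human before any HR action.",
    )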


While building an explainable system, it is vital that we adhere to a set of guiding principles. The ICO therefore leveraged the principles outlined in GDPR as inspiration for its four principles for making an AI system explainable: 


  1. Be transparent: Fully and truthfully document the processes around your company’s use of AI-enabled decisions, including when and why they are used.
  2. Be accountable: Employees who oversee the “explainability” requirements of an AI decision system must ensure that those requirements are present in the design and deployment of the models.
  3. Consider context: When planning to use AI to help make decisions about your workforce, consider the setting in which you will do this, the sector and use-case context, and how you will deliver explainability to the affected users.
  4. Reflect on impacts: Build and deploy your ML system with its impacts in mind, including physical, emotional and sociological effects, impacts on free will and privacy, and implications for future generations.


While understanding the principles and types of explainability is a good starting point, cybersecurity and GRC workforces must increase their competence in auditing, assessing, protecting and defending artificially intelligent systems. We have quite a way to go.


This year, ISACA issued cornerstone studies on the state of the cybersecurity and GRC workforces. These studies highlighted challenges including a widening skills gap, difficulty accessing a pipeline of qualified applicants and budget reductions. Because security and GRC organizations must remain responsive, we have turned to AI to help mitigate our risks.


Key Highlights 

  • The use of ML or robotic process automation (RPA) in security operations is increasing. Roughly 34% of respondents stated that they use AI in SecOps – up four percentage points from a year ago.
  • Over a fifth (22%) of the respondents have increased their reliance on artificial intelligence or automation to help decrease their cybersecurity skills gap. This compounds the issue as we lack the skills to govern, protect and defend these new AI systems.  
  • While these technologies are not yet replacing our human resources, they may instead shift the types of resources required. For example, AI may broadly decrease the number of analysts needed; however, human resources will be reallocated to designing, monitoring and auditing algorithms. 
“We must begin building and training on AI risk, controls, and audit frameworks, and train our personnel in the field”

Given that GRC and cybersecurity organizations have the mandate to govern, assess, protect and defend both enterprise use of AI/ML systems and our own use in areas such as SecOps, we must begin building and training on AI risk, controls and audit frameworks, and train our personnel in the field.


Key steps to help GRC professionals establish a framework for auditing and assessing compliance around algorithms include: 


  1. Document all six explainability types for your algorithm. What processes ensure the complete transparency and accountability of your AI model? Describe the setting and industry in which the AI model will be used and how this affects each of the six types of explainability.
  2. Document all data collection and preprocessing activities. How representative is the data of those impacted by the AI system? Where was the data obtained, how was it collected and is it aligned with the purpose for which it was initially collected? Is the system using any synthetic data, and how was it created? Do any of the datasets involve protected characteristics? Did the team detect any bias or determine whether the datasets reflect past discrimination, and how was it mitigated? (A minimal bias-check sketch appears after this list.)
  3. Assess the diversity of the entire AI team. Was the team involved in the system design diverse? Did the team reflect the user population the algorithm serves? Was anyone on the team neurodiverse? Was there an evaluation to determine whether a more diverse team would design a system more resilient to bias?
  4. Assess all documentation, processes and technology in place to ensure the system is built to extract relevant information for explainability. Is the system explainable by design? In selecting the AI model, did the team consider the specific type of application and the impact of the model on decision recipients? What is the total cost of ownership for the AI system, and how does it compare with that of a previous, potentially more explainable, system? For systems leveraging social, identity or biometric information, did the team make interpretability a key requirement? If the organization has chosen to use a ‘black box’ system, did the team document the rationale? How are the models tested, and does the change management process include model updates and versioning records? Who is responsible for validating the explainability of the AI system?
  5. Document and validate the team’s rationale for the AI system’s results. How is the team visually representing the logic of the system’s output? What tools are being used to present the results in a way that is interpretable to the workforce? 
  6. Define how your organization prepared implementers to deploy the AI system. How has your organization trained your AI system implementers? Can they detect where bias may occur and how to mitigate it? 
  7. How did the organization incorporate security into the design? Did the organization perform a system-level risk assessment? Is there a risk and control matrix for the ecosystem? Did the team create a threat model and define the application and ecosystem security requirements? How was the system penetration tested? What was the secure code review process, and what tools were used? What types of attacks were identified, and what security logging, monitoring and defense patterns were created? Can the defense systems successfully detect AI/ML-specific attacks – for example, data poisoning? What are the incident response and forensics processes and playbooks should the system be breached? (A minimal poisoning-detection sketch appears after this list.)
  8. Document the roles and responsibilities in the development of your algorithm. For example, strategist, product manager, designer, architect, AI development team, implementer, AI operations team, security and compliance, senior and executive management. Was everyone adequately trained?
  9. Define and review everything the documentation needs to cover. For example: the decision to design the system; how the explanation types and principles were applied; the data collection and acquisition process; data preprocessing; and model selection, building, testing and monitoring. What tools are used for selecting an explanation, and how will explanations be delivered to requestors? What is the compliance policy, the risk and control matrix and the overall security plan for the AI system?
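Two of these steps lend themselves to small, concrete checks. For step 2’s bias detection, a team might start with a simple demographic parity measure over model outputs; the group labels, predictions and 0.1 tolerance below are illustrative assumptions only.

    # Minimal demographic parity check for step 2's bias detection.
    # The group labels, predictions and 0.1 tolerance are illustrative only.
    import numpy as np

    def demographic_parity_gap(y_pred: np.ndarray, groups: np.ndarray) -> float:
        """Largest difference in positive-outcome rate between any two groups."""
        rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
        return max(rates) - min(rates)

    y_pred = np.array([1, 0, 1, 1, 1, 1, 0, 0])                  # model decisions
    groups = np.array(["a", "a", "a", "b", "b", "b", "b", "a"])  # protected groups

    gap = demographic_parity_gap(y_pred, groups)
    print(f"demographic parity gap: {gap:.2f}")
    if gap > 0.1:  # the tolerance is a policy choice, not a universal standard
        print("flag for fairness review and document the mitigation")

For step 7’s data-poisoning concern, one common heuristic is to flag training points whose labels disagree with most of their nearest neighbors, which can surface label-flipping attacks. The k=5 neighborhood and 0.8 disagreement threshold are likewise assumptions for illustration.

    # Minimal label-consistency heuristic for step 7: flag training points
    # whose labels disagree with most of their nearest neighbors, which can
    # surface label-flipping poisoning. k=5 and 0.8 are assumed values.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def flag_label_outliers(X, y, k=5, disagreement=0.8):
        """Return indices whose label disagrees with most of their k neighbors."""
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        _, idx = nn.kneighbors(X)          # column 0 is each point itself
        mismatch = (y[idx[:, 1:]] != y[:, None]).mean(axis=1)
        return np.where(mismatch >= disagreement)[0]

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    y[7] = 1  # simulate a single flipped ("poisoned") label
    print(flag_label_outliers(X, y))  # expected to include index 7

Neither check is sufficient on its own; both are starting points that belong inside the broader risk and control matrix described above.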

As more laws pass that force organizations to be transparent, AI systems must be designed for explainability before the first line of code is written. Remember, if the team is unable to explain the entire system in easy-to-understand terms, the system is not well designed.


Key References


We leveraged co-badged guidance by the ICO and The Alan Turing Institute, which aims to give organizations practical advice to help explain the processes, services and decisions delivered or assisted by AI to the individuals affected by them. 



  • Examples of laws and regulations around AI
  • Lawsuits involving Artificial Intelligence

Another study, The Ernst & Young 2021 Empathy in Business Survey, tells us there is a danger in underestimating the importance of empathy.



We are seeing the consequences of this empathy gap. We all know about ‘The Great Resignation’ happening in the United States; it is now a global phenomenon. According to the ISACA State of Cybersecurity 2022 Study, The Great Resignation continues to significantly impact our global workforce: a full 60% of respondents reported difficulties retaining qualified cybersecurity professionals, up seven percentage points from 2021.
“The Great Resignation continues to significantly impact our global workforce”
Two of the top five reasons cybersecurity professionals leave their jobs are high work stress levels (45%) and a lack of management support (34%). In an industry where the battle for cybersecurity professionals is intense, the Ernst & Young survey is prescient: according to the study, there are many benefits to leading with empathy, and the survey responses tell us why.
Clearly, ISACA’s report reveals the cybersecurity industry’s apathy towards empathy, while the other studies illuminate the positive outcomes for organizations whose leaders are empathetic. So, where is the disconnect for us? Let’s look at another side of cyber activity to determine the answer.

Cyber villains are diverse by design, and that diversity affords them a constant infusion of different ways of thinking. Attackers understand that compromising the user is the fastest way to access the information or resources they are targeting. And to compromise a user, you need to understand their emotional state. The ISACA report also identifies the predominant attack types leveraged as part of a compromise.
Note that the top two attack mechanisms rely on a significant understanding of the users’ emotional state. Attackers choose to home in on our emotional weaknesses and exploit us, leveraging their understanding of how we will react to certain situations. The very emotion that we as an industry deemed unworthy as a critical skill is the single greatest mechanism by which we get exploited. And exploiting away they are! Verizon’s 2021 Data Breach Investigations Report concludes that 85% of breaches involved a human element.
So, how is it that threat actors across the board can manipulate us through our emotions, yet empathy is considered one of our industry’s least important skills? We know the importance of empathy in the business world. We can see the impact on workforces both when we lack and when we embrace empathy at the leadership level. At the same time, we see how threat actors wield empathy as a way to take advantage of us. We need to stop thinking that empathy is not important!

But how do we improve empathy? Some people are naturally empathetic – unfortunately, not most of us. It is difficult to put yourself in another person’s position without bias and look at the world unvarnished through their eyes. On the bright side, others become empathetic through diverse lived experiences and meaningful exposure to different people. Without a doubt, diversity improves empathy.

The bad news is that we are not diverse as an industry: less than 12% of industry professionals responding to the ISACA survey are under 34 years old. This is staggering. It means the generation most in tune with empathy is barely represented in our workforce. Combine this with well below half of our workforce being women and people of color, and we are at a distinct disadvantage in effectively nurturing empathy.

The solution? We need more diversity in the cyber industry, plain and simple. The more diverse we become, the more empathetic we will be as an industry. The writing is on the wall. We just need to put action to our words!
Security in Space

Time for Infosec Professionals’ Imaginations to Stretch to Outer Space

On Friday, April 16, 2021, NASA announced that it had selected SpaceX to move forward in building the first modern human landing system (HLS), returning humans to the surface of the Moon for the first time in nearly 50 years.
This marks a dramatic step toward sustainable lunar exploration and preparation for the ultimate journey of a human-crewed mission to Mars.
NASA stated: “The exploration of the Moon and Mars is intertwined. The Moon provides an opportunity to test new tools, instruments and equipment that could be used on Mars, including human habitats, life support systems, and technologies and practices that could help us build self-sustaining outposts away from Earth.”

Interplanetary exploration will rely on a complex supply-chain network stretching from terrestrial/on-ground systems to low Earth orbit and on to the Moon, Mars and beyond. This new interplanetary supply chain will exploit the same emergent technologies that have given rise to the disruptive forces that mark our entrance to the Fourth Industrial Revolution. Cloud, artificial intelligence, blockchain and additive manufacturing are already forming the core foundational components of the architectures that enable space technologies to be delivered and funded turnkey “as a service,” allowing for the democratization of space and space-data access and significantly lowering the barrier to entry. Bank of America expects the space industry to triple to a US$1.4 trillion market within a decade, forecasting revenue growth of roughly 230% – from about $424 billion in 2019 to about $1.4 trillion in 2030. 

For the space economy to exploit its full potential, a scalable, extensible, resilient and secure infrastructure of orbital communication and transportation services is being created, giving rise to the “space for space” economy where goods and services are built “in space for space.”

Yet, with all advancement comes risk. The value of the digital and physical cargo to be transported is immense. Assets mined on planets and small bodies may be worth more than the total value of Earth’s current economy. The intellectual property digitally transported across these complex supply chains will give nations and companies an incalculable competitive advantage. And the architectures that will support space-based digital supply chains will be just as exploitable as their terrestrial counterparts.

With disruption comes opportunity, and attackers are better and faster than us at adapting to, leveraging and exploiting disruption. In a future where speed and agility are defining factors, they have the edge.
Currently, there is a race to develop offensive space capabilities designed to intercept, deny service to or alter satellite communications. Organized underground groups will be ready, armed and able to execute cyber-attacks against space transportation systems – hijacking cargo, abducting people and holding them for ransom, or intercepting and stealing digital intelligence.
The cloud-based architectures that will underpin interplanetary commercial transportation and services will be exploitable by a range of different threat actors. And while countries and corporations alike are developing capabilities to detect, predict and defend against these attacks, they lack a consistent and comprehensive framework.
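One defensive pattern that transfers directly from terrestrial systems is authenticating command and telemetry links so that altered traffic is detected and rejected. The sketch below uses HMAC-SHA256 from the Python standard library; the key handling and message format are illustrative assumptions, not a flight-ready protocol.

    # Minimal sketch: detect tampering on a command uplink with HMAC-SHA256.
    # Key management and the message format are illustrative, not flight-grade.
    import hashlib
    import hmac
    import os

    SECRET_KEY = os.urandom(32)  # in practice, provisioned via secure key exchange

    def sign(command: bytes) -> bytes:
        return hmac.new(SECRET_KEY, command, hashlib.sha256).digest()

    def verify(command: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(sign(command), tag)

    uplink = b"ADJUST_ORBIT delta_v=0.42"
    tag = sign(uplink)

    tampered = b"ADJUST_ORBIT delta_v=9.99"
    print(verify(uplink, tag))    # True  -> authentic command
    print(verify(tampered, tag))  # False -> altered in transit, reject

Integrity checks like this address tampering but not interception or jamming; those call for encryption and resilient link design on top of authentication.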

In 2020, the US government published the policy directive Cybersecurity Principles for Space Systems, which outlined five main principles for securing space systems. 

While these principles and the resultant application of information security frameworks such as NIST, ISO 27001 or SOC 2 Type 2 across the entirety of space supply chains are a good first step, the way we design security for these systems will need to transform. We will need to be better, faster and more adaptable. And while the use of artificial intelligence and thinking systems will be prevalent, we will need to be prepared to see cybersecurity and defense personnel aboard spacecraft.

Information security and GRC professionals need to expand our knowledge and, quite frankly, our imagination to include the applied sciences involved in space. We have to become more experienced in life-safety systems. AI needs to be foundational to all cybersecurity and GRC professionals’ training, as we will be working alongside thinking systems in harsh environments where there are microseconds between life and death.  

Which brings me to diversity. We have no real idea what type of person will be best suited for interplanetary travel or outpost settlements. Make no mistake – once we leave this planet for another, we will begin to evolve, and evolution requires diversity.
If we are to protect and defend the people, companies, and countries in our charge, we will need racial, gender, identity, physical and neuro-diversity.
There is a high likelihood that the attributes that make someone successful here on Earth will not serve them as well on another planet. People who think outside the box may be the ones to thrive.
Leaders and futurists have predicted we may see the first human on Mars in the next 5-10 years, with colonization to happen soon thereafter. We sit at the dawn of interplanetary travel. As we embark on this next phase in human history, it is critical that we consider the end-to-end risks involved in the development of these new economies and the diversity in our workforce necessary to help protect and defend the people, goods and services that comprise the new space ecosystems.