Our Approach
MITRE’s Guiding Principles for Insider Threat
“Cyber-only” or “cyber-first” is not enough
Insider threat is inherently a human challenge. Effective Insider Threat/Risk Programs deter, detect, and mitigate insider risk by combining human, organizational, cyber, and physical sensors and approaches. However, some organizations have developed programs that focus almost entirely on the cyber components of managing insider risk, by virtue of placing the program in the security operations center (SOC), information technology (IT), or cybersecurity department. This often leads to disappointment and wasted investment: because the program prioritizes cyber components over all other sensors and approaches, it is less effective.
Collecting more data creates the illusion that we are getting closer to a solution
More data will not inherently improve detection of insider risk and threats. The Insider Threat/Risk Program should request and collect only the specific data required to make decisions. The reasons are twofold. First, limiting data collection helps demonstrate to important stakeholders (e.g., Human Resources, Legal) that employee data is being treated appropriately, and counters the perception of a ‘surveillance’ organization. Second, it avoids the ‘data lake’ mistake, in which ever more data is thrown at the problem in the hope that a skilled data scientist can eventually solve it, an approach that has proven very costly and, to date, unsuccessful.
Insider threats ≠ Advanced Persistent Threats (APTs) or Compromised Accounts
The activities of external adversaries who use compromised accounts to masquerade as or impersonate employees do not look the same, in user activity monitoring (UAM) data, as the activities of actual employees within the organization. However, external adversaries can create insider threats by placing an individual as an employee in your organization, or by influencing the employees already in your workforce.
Case studies about “the last bad guy” are ineffective at finding “the next bad guy”
Individual case studies do not meet the bar for developing effective insider risk deterrence, detection, and mitigation indicators and capabilities. They are a distraction and lead to a “whack-a-mole” approach in which organizations develop indicators and capabilities based on the nuances of a single case, rather than genuine patterns of risk. Instead, effective insider risk management relies on systematically combining and analyzing dozens to hundreds of cases to characterize known bad behaviors and identify risk patterns.
Critically approach the “Critical Pathway to Insider Risk”
The use of the “Critical Pathway to Insider Risk” to proactively identify insider risk is limited: the critical path for insider threat cannot be used to make predictions, to proactively detect malicious insiders, or to generate risk scores for insider threat programs. This approach was not developed scientifically, is not data-driven, and has not been empirically validated for predictive use or risk identification.
Counterintelligence perspectives are valued, but often overemphasized
Insider Threat/Risk Programs must incorporate multiple perspectives in their teams, recommendations, and outputs. These perspectives, which should include data scientists, behavioral scientists, cyber engineers, law enforcement, and counterintelligence, should be treated equally rather than allowing any one to dominate. In some programs, counterintelligence has been overly dominant. Counterintelligence offers useful, unique perspectives on insider threat deterrence, detection, and mitigation in the context of espionage and foreign threats; however, insider threats often have no foreign nexus.
Social Media has yet to be proven as a source of good, usable data for insider threat
The use of social media for proactively identifying insider risk has not yet been demonstrated with meaningful data patterns. Theoretically, social media contains data relevant to identifying insider risk, but no current technique surfaces that data without also surfacing a much larger set of irrelevant data that is impractical to triage. Current techniques flag too much irrelevant social media content, which in turn creates reputational and legal risk for the organization while adding to the already high workload of insider threat analysts. Given the risk to organizations and their employees, we currently recommend against using social media to proactively identify insider risk until rigorous evidence is presented that clearly generalizes beyond training datasets and substantially reduces the high false positive rate. That said, social media can be forensically useful for providing evidence of intent after a risk has already been identified through other indicators or tips.
Language and Sentiment Analysis remain unproven for efficient risk detection
Language analysis techniques, including sentiment analysis, have not yet been shown to strongly and proactively identify insider threats. These techniques flag too much irrelevant content, which in turn creates reputational and legal risk for the organization while adding to the already high workload of insider threat analysts. More development and evidence are required to justify investment in language or sentiment analysis for proactive detection of insider risk. That said, language and sentiment analyses can be forensically useful to provide evidence of intent after a risk has already been identified through other indicators or tips.
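The triage burden behind the last two principles is largely a base-rate problem. The short Python sketch below works through the arithmetic; every number in it (screening volume, prevalence of genuine risk, detection rate, false-alarm rate) is a hypothetical assumption chosen for illustration, not a measurement from any program, tool, or dataset. The point it illustrates is that when genuinely risky content is rare, even a seemingly accurate classifier produces flags that are overwhelmingly irrelevant.

# Illustrative base-rate arithmetic. All numbers are hypothetical assumptions,
# not measurements from any real program, tool, or dataset.
total_items = 1_000_000        # assumed posts/messages screened per year
true_risk_items = 100          # assumed genuinely risky items (0.01% prevalence)
detection_rate = 0.90          # assumed share of risky items the classifier flags
false_alarm_rate = 0.01        # assumed share of benign items flagged in error

true_positives = true_risk_items * detection_rate
false_positives = (total_items - true_risk_items) * false_alarm_rate

precision = true_positives / (true_positives + false_positives)
print(f"Total flags to triage: {true_positives + false_positives:,.0f}")  # ~10,089
print(f"Share of flags that are genuine risk: {precision:.1%}")           # ~0.9%

Under these assumed numbers, analysts would need to triage roughly ten thousand flags to find about ninety genuine cases, which is the workload and false-positive problem described in the two principles above.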