Deception, Risk, & Deterrence

Santos: New "Lying" Detection Model

[Photo: Professor Eugene Santos]
  • Professor Eugene Santos and a co-author present a new model for detecting deception that examines a speaker's previous patterns of reasoning to capture intentional deception, in a recently published paper in the Journal of Experimental & Theoretical Artificial Intelligence.

Deception Intent and Detection (Santos)

Deception as a Cognitive Process

  • We propose that to deceive is to reason by supposing the truth of the deceiver's targeted arguments, even though the deceiver does not actually believe those arguments to be true.

Fundamental Discrepancies in Deception

  • Discrepancies can be expected in arguments that deceivers are reluctant to believe but truth tellers embrace.
  • Discrepancies can be expected in arguments that deceivers manipulate.

Approach

Computational Model

  • The Correlation Network connects acquaintances who can anticipate each other's arguments; it predicts an agent's beliefs from those of correlated neighbors.
  • The Consensus Network connects people who agree with each other; it compares the deceiver against the truth tellers (a minimal sketch follows this list).
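The bullets above describe the model only at a high level. As a rough illustration (not the authors' implementation), the Python sketch below treats beliefs as values in [0, 1], predicts an agent's belief from its correlation-network neighbors, compares it against its consensus-network neighbors, and scores a large gap as potential deception; all names and numbers are invented.

```python
# Minimal, illustrative sketch of the correlation/consensus idea.
# Beliefs are floats in [0, 1]; all names and values are hypothetical.

def predicted_belief(agent, correlation_neighbors, beliefs):
    """Correlation network: predict an agent's belief from agents
    who historically reason like it (simple average here)."""
    vals = [beliefs[n] for n in correlation_neighbors[agent]]
    return sum(vals) / len(vals)

def consensus_belief(agent, consensus_neighbors, beliefs):
    """Consensus network: the typical belief among agents that
    agree with each other on this topic."""
    vals = [beliefs[n] for n in consensus_neighbors[agent]]
    return sum(vals) / len(vals)

def deception_score(agent, stated, correlation_neighbors,
                    consensus_neighbors, beliefs):
    """A stated belief far from both the correlation-based prediction
    and the consensus is suspicious (larger score = more suspicious)."""
    pred = predicted_belief(agent, correlation_neighbors, beliefs)
    cons = consensus_belief(agent, consensus_neighbors, beliefs)
    return min(abs(stated - pred), abs(stated - cons))

beliefs = {"alice": 0.9, "bob": 0.85, "carol": 0.8}
correlation = {"dave": ["alice", "bob"]}
consensus = {"dave": ["alice", "bob", "carol"]}
print(deception_score("dave", 0.1, correlation, consensus, beliefs))  # ≈ 0.75
```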

Dataset

  • Simulation data: a lawsuit case from a TV episode.
  • Survey data: one hundred test subjects were asked to imagine taking part in a debate on controversial topics.

Achievements

  • We proposed two fundamental discrepancies that characterize deceptive communications.
  • We designed a generic model of deception detection in which agents are correlated with others to anticipate each other's consistency in beliefs, and consenting agents are compared with each other to evaluate the truthfulness of beliefs.
  • The main contribution of this work is to suggest a new direction in which deeper information about deceivers' intent is carefully mined and analyzed based on their cognitive process.

Adaptive Cyber Defense (ACD)

G. Cybenko (Dartmouth) with S. Jajodia (George Mason), P. Liu (Penn State) and M. Wellman (University of Michigan)

Objectives

  • Make computer systems and their defenses “moving targets” in principled ways using control and game theory.
  • Motivation: existing computer and network systems change slowly over time (e.g., patches, updates, and new versions), allowing adversaries to reverse engineer systems, identify their vulnerabilities, and craft exploits against them at a much faster rate than defenders operate.

Key Science Methods & Advances

  • Development of new methods for randomizing and diversifying systems, networks, and applications (see the port-rotation sketch below)
  • Control-theoretic and computational game-theoretic techniques for assessing tradeoffs among security, availability, and manageability
  • Techniques for high-speed, large-volume system situational awareness as part of control and game state estimation
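As a toy illustration of the moving-target idea (not one of the project's actual methods), the sketch below periodically re-randomizes a service's externally visible port so that an attacker's reconnaissance goes stale; the port range and rotation interval are invented knobs that expose the security-versus-availability-versus-manageability tradeoff the bullets mention.

```python
import random
import time

# Illustrative moving-target sketch: periodically re-randomize a
# service's externally visible port so attacker reconnaissance goes
# stale. Port range and interval are hypothetical knobs: faster
# rotation favors security, slower rotation favors availability
# and manageability.

PORT_RANGE = range(20000, 60000)
ROTATION_INTERVAL_S = 5  # demo value; real deployments use minutes/hours

def rotate_port(current=None):
    """Pick a fresh port, never repeating the current one."""
    return random.choice([p for p in PORT_RANGE if p != current])

def run_rotation(cycles=3):
    port = rotate_port()
    for _ in range(cycles):
        print(f"service now listening on port {port}")
        time.sleep(ROTATION_INTERVAL_S)  # in practice: rebind the service here
        port = rotate_port(port)

if __name__ == "__main__":
    run_rotation()
```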

Results & Impacts

  • New algorithms and analysis for sketching solutions to large graph problems useful for defensive sensors
  • Novel botnet design and adversarial planning methods
  • Management of human resources for vulnerability management
  • K. Farris, A. Shah, G. Cybenko, R. Ganesan, and S. Jajodia. VULCON: A System for Vulnerability Prioritization, Mitigation, and Management. To appear in ACM Transactions on Privacy and Security (ACM TOPS), 2018.
  • P. Sweeney and G. Cybenko. An Analytic Approach to Cyber Adversarial Dynamics. In Technologies for Homeland Security and Homeland Defense XI, vol. 8359, p. 835906. International Society for Optics and Photonics, 2012.

Probabilistic Logic of Deception

V.S. Subrahmanian (Dartmouth) with F. Pierazzi (University of London), N. Park (University of North Carolina), E. Serra (Boise State), S. Jajodia (George Mason)

Objectives

  • Attackers frequently scan a network and map out its vulnerabilities before targeting vulnerable nodes for cyber-attack. Can we therefore:
    • Understand the set of possible worlds within an attacker’s mind, and
    • Provide fake results to scan requests so as to lead attackers “away” from important nodes/assets?

Key Science Methods & Advances

  • The attacker’s probabilistic state is a pdf over the possible states of the network, capturing the attacker’s beliefs.
  • The attacker seeks to maximize the expected damage caused by his actions (scan for vulnerabilities, exploit a vulnerability).
  • When the defender provides a response to a scan request, that response has an associated expected damage.
  • The defender’s goal is to provide a response that minimizes the expected damage (sketched below).
  • Formalized as logic plus game theory, with a Fast-PLD algorithm to achieve the defender’s goal.
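A minimal sketch of the defender's decision rule described above, with invented states, responses, and damage values (the actual Fast-PLD algorithm is far more involved): the attacker's belief is a distribution over possible network states, and the defender returns the scan response with the smallest expected damage.

```python
# Illustrative sketch of choosing a scan response that minimizes the
# attacker's expected damage. States, responses, and damage values are
# hypothetical; the real Fast-PLD algorithm is far more involved.

# Attacker's belief: probability distribution over possible network states.
belief = {"db_on_host1": 0.5, "db_on_host2": 0.3, "db_on_host3": 0.2}

# Damage the attacker inflicts if it acts on a (possibly fake) response
# while the network is actually in a given state.
damage = {
    ("report_host1", "db_on_host1"): 10.0,
    ("report_host1", "db_on_host2"): 1.0,
    ("report_host1", "db_on_host3"): 1.0,
    ("report_host3", "db_on_host1"): 2.0,
    ("report_host3", "db_on_host2"): 2.0,
    ("report_host3", "db_on_host3"): 8.0,
}

def expected_damage(response, belief):
    return sum(p * damage[(response, state)] for state, p in belief.items())

def best_response(responses, belief):
    """Defender picks the response with minimal expected damage
    given the attacker's current beliefs."""
    return min(responses, key=lambda r: expected_damage(r, belief))

responses = ["report_host1", "report_host3"]
print(best_response(responses, belief))  # -> "report_host3"
```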

Results & Impacts

  • The attacker’s goal of maximizing expected damage is NP-hard.
  • For the defender, finding a response to a scan query that minimizes expected damage is also NP-hard.
  • We propose both an exact and a heuristic algorithm that manipulate the attacker’s belief state to the best extent possible.

Hybrid Adversarial Defense: Merging Honeypots and Traditional Security Methods

V.S. Subrahmanian (Dartmouth), T. Chakraborty (IITD), S. Jajodia (George Mason), N. Park (University of North Carolina), A. Pugliese (University of Calabria), E. Serra (Boise State)

Objectives

  • Past work on honeypot placement assumes that honeypots are the only security measures in use. But this is wrong.
  • Given m honeypots and m traditional security products (e.g., firewalls, IDSs):
    • Where should the two types of security measures be deployed to maximize security?
    • How do we simultaneously patch and deactivate buggy software?

Key Science Methods & Advances

  • Developed a Stackelberg-game model for the defender to model the attacker’s behavior.
  • Developed Attacker Belief Evolution Trees that enable the defender to model the attacker’s beliefs.
  • Showed that the problem of simultaneously placing both honeypots and traditional defenses is in EXPTIME and is NP-hard.
  • Developed the H_Exact and H_Greedy algorithms to find the optimal solution and a fast but possibly suboptimal solution, respectively (a generic greedy sketch follows this list).
  • Implemented both algorithms.
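The following is a generic greedy-placement sketch, not the authors' H_Greedy: given per-node protection values for each defense type (all names and numbers invented), it repeatedly places whichever remaining honeypot or traditional defense yields the largest gain.

```python
# Generic greedy placement sketch (not the authors' H_Greedy): with a
# budget of honeypots and traditional defenses (firewalls, IDSs), keep
# placing whichever (node, defense) pair adds the most protection.
# Node names and protection values are hypothetical.

protection = {
    ("web_server", "honeypot"): 3.0,
    ("web_server", "firewall"): 5.0,
    ("db_server", "honeypot"): 6.0,
    ("db_server", "firewall"): 4.0,
    ("mail_server", "honeypot"): 2.0,
    ("mail_server", "firewall"): 1.0,
}

def greedy_place(budget):
    """budget: remaining count per defense type, e.g. {'honeypot': 1, ...}"""
    placed, used_nodes = [], set()
    while any(budget.values()):
        candidates = [
            (val, node, kind)
            for (node, kind), val in protection.items()
            if budget.get(kind, 0) > 0 and node not in used_nodes
        ]
        if not candidates:
            break
        val, node, kind = max(candidates)  # largest marginal protection
        placed.append((node, kind))
        used_nodes.add(node)
        budget[kind] -= 1
    return placed

print(greedy_place({"honeypot": 1, "firewall": 1}))
# -> [('db_server', 'honeypot'), ('web_server', 'firewall')]
```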

Results & Impacts

  • H_Greedy is guaranteed to produce an approximate solution in polynomial time.
  • Experiments show that H_Greedy works very well, producing solutions that protect the network 93-98% as well as the optimal solution would, while usually taking less than half the time to run.

FORGE: Fake Online Repository Generation Engine

V.S. Subrahmanian (Dartmouth) with T. Chakraborty (IIIT-Delhi), S. Jajodia (George Mason), J. Katz (UMD), A. Picariello and G. Sperli (U. Napoli)

Objectives

  • Cyberattacks intended to steal intellectual property may often go undetected for 8-12 months.
  • Goal of FORGE is to develop methods to impose costs on attackers who steal IP even when we do not know the identity of the attacker and/or do not know if/when an enterprise has been compromised.
  • Key idea: automatically generate fake versions of documents so as to maximize believability while keeping each fake sufficiently different from the original. Use message authentication codes (MACs) to distinguish the correct document from the fakes (see the HMAC sketch below).
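The MAC step in the last bullet is standard cryptography; a minimal Python sketch using HMAC-SHA256 follows. The key handling and example documents are purely illustrative: holders of the key can verify which document is genuine, while an attacker who exfiltrates the repository cannot.

```python
import hmac
import hashlib

# Sketch of the MAC idea: tag the genuine document with HMAC-SHA256 so
# key holders can tell it apart from the fakes. Key management and
# storage of the tag are out of scope here and purely illustrative.

SECRET_KEY = b"enterprise-secret-key"  # in practice: from a key store

def tag(document: bytes) -> str:
    return hmac.new(SECRET_KEY, document, hashlib.sha256).hexdigest()

def is_genuine(document: bytes, stored_tag: str) -> bool:
    # compare_digest avoids timing side channels
    return hmac.compare_digest(tag(document), stored_tag)

real = b"patent draft: actual invention details"
fake = b"patent draft: plausible but altered details"

stored = tag(real)
print(is_genuine(real, stored))   # True
print(is_genuine(fake, stored))   # False
```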

Key Science Methods & Advances

  • Developed a multi-layer graph (MLG) representation of the concepts within a document, with syntactic and semantic layers.
  • Defined the novel concept of meta-centrality, in which a centrality measure on an ordinary graph induces a meta-centrality measure on MLGs.
  • Showed that the problem of deciding which concepts in the original document should be replaced, subject to various constraints, is NP-hard.
  • Used a heuristic knapsack solver to solve this optimization problem (see the knapsack sketch below).
  • Developed the prototype FORGE system and experimentally showed that it generates convincing fakes.
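As a sketch of the knapsack-shaped selection step only (FORGE's real objective uses meta-centrality over MLGs), suppose each concept has an invented replacement cost (believability penalty) and payoff (how far the fake moves from the original); a standard 0/1-knapsack dynamic program then picks which concepts to replace within a budget.

```python
# Sketch of the knapsack-shaped selection step, with invented numbers:
# replacing a concept has a cost (believability penalty) and a value
# (distance from the original). FORGE's real objective uses
# meta-centrality over multi-layer graphs; this shows only the
# 0/1-knapsack core.

def knapsack(concepts, budget):
    """concepts: list of (name, cost, value); returns chosen names."""
    # dp[b] = (best_value, chosen_names) achievable within cost budget b
    dp = [(0, [])] * (budget + 1)
    for name, cost, value in concepts:
        for b in range(budget, cost - 1, -1):  # descending: each item used once
            cand_val = dp[b - cost][0] + value
            if cand_val > dp[b][0]:
                dp[b] = (cand_val, dp[b - cost][1] + [name])
    return dp[budget][1]

concepts = [
    ("alloy_composition", 3, 9),   # (concept, replace cost, payoff)
    ("operating_temp", 2, 5),
    ("vendor_name", 1, 2),
    ("process_step_order", 4, 8),
]
print(knapsack(concepts, budget=5))  # -> ['alloy_composition', 'operating_temp']
```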

Results & Impacts

  • We show that FORGE generates fakes with a deception factor (the percentage of times a human identifies a fake document as real) of over 95%.
  • FORGE also achieves high believability (the percentage of fake documents believed to be real by humans).
  • The FORGE system was tested on an extensive set of patent documents.

ELF-based Access Control: Mitigating 0-days

S. Bratus, J. Reeves, S.W. Smith, P. Anantharaman, J.P. Brady, I.R. Jenkins (Students) with NARF

Objectives

  • Once a process is compromised, the OS lets any of its code touch any of its data

Key Idea

  • Use the load format to recapture programmer intent as an access policy (illustrated below)
  • Instrument the kernel MMU for enforcement
  • Allows new policies as well as default policies for existing code
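As a small illustration of recovering programmer intent from the load format, the sketch below uses the real pyelftools library to derive a default W^X-style policy from ELF section flags; the project's "unforgetful" loader and kernel MMU enforcement go well beyond this.

```python
import sys
from elftools.elf.elffile import ELFFile  # pip install pyelftools

# Illustration only: derive a default per-section access policy from
# ELF section flags, i.e. what the programmer/toolchain already
# declared about each section's intended use.

SHF_WRITE, SHF_ALLOC, SHF_EXECINSTR = 0x1, 0x2, 0x4

def default_policy(path):
    policy = {}
    with open(path, "rb") as f:
        for sec in ELFFile(f).iter_sections():
            flags = sec["sh_flags"]
            if not flags & SHF_ALLOC:
                continue  # section is not mapped at runtime
            policy[sec.name] = {
                "read": True,
                "write": bool(flags & SHF_WRITE),
                "exec": bool(flags & SHF_EXECINSTR),
            }
    return policy

if __name__ == "__main__":
    for name, perms in default_policy(sys.argv[1]).items():
        print(f"{name:20s} {perms}")
```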

Key Science Methods & Advances

  • The “unforgetful” loader
  • Policy authoring tools
  • Implementations for Intel and ARM
  • Integration with kernel-hardening methods like grsecurity/PaX
  • Demonstrate the use of ELF-based access control on:
    • DNP3 implementations
    • OpenSSH
    • Spectre

Results & Impacts

  • Falcon Darkstar Momot, Sergey Bratus, Sven M. Hallberg, and Meredith L. Patterson. The Seven Turrets of Babel: A Taxonomy of LangSec Errors and How to Expunge Them. In IEEE Cybersecurity Development, 2016, 45–52.
  • Ira Ray Jenkins, Sergey Bratus, Sean Smith, and Maxwell Koo. Reinventing the Privilege Drop: How Principled Preservation of Programmer Intent Would Prevent Security Bugs. In HoTSoS ’18: Hot Topics in the Science of Security: Symposium and Bootcamp, April 10–11, 2018.