CSCE 465 Lecture 4


Thursday, January 24, 2013


Lecture Slides


Security

Policy

What do you want to achieve?

States what is and is not allowed: defines security for site/system/etc.

Mechanisms enforce policies

Composition of policies

  • expressing multiple policies in a common representation so they can be combined
  • Conflicts may create security vulnerabilities


Mechanisms

How are you going to implement security?

Goals:

  • Prevention: prevent attackers from violating security policy
  • Detection: detect attackers' violation of security policy
  • Recovery: stop attack and assess/repair damage; continue to work correctly even if attack succeeds


Defense in depth: only a few attacks get past the first (prevention) layer, fewer still slip past detection, and recovery handles what remains.

Prevention is fundamental, but sometimes detection is the only option, for example:

  • accountability in proper use of authorized privileges
  • modification of messages in a network
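
The second case, detecting modification of messages, is typically handled with a message authentication code (MAC). A minimal sketch using Python's standard hmac module; the key and messages are illustrative, not from the lecture:

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret"  # hypothetical key shared by sender and receiver

def tag(message: bytes) -> bytes:
    """Sender attaches an HMAC tag so tampering can be detected later."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Receiver recomputes the tag; a mismatch means the message was modified."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"transfer $10 to alice"
t = tag(msg)
print(verify(msg, t))                           # unmodified message: accepted
print(verify(b"transfer $10 to mallory", t))    # modified in transit: detected
```

Note that this detects modification after the fact; it does not prevent an attacker on the network from altering the bytes.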

Defense and protection are often made available through services accessible via an API or integrated interfaces. For example:

  • Confidentiality: encryption
  • Authentication: password login
  • Integrity: signing
  • Non-repudiation [1]
  • Access control: who is allowed to access which resources
  • Monitoring and response: watchdogs
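
The access-control service above can be sketched as a default-deny lookup over an access-control matrix; the subjects, resources, and rights below are purely illustrative:

```python
# Hypothetical access-control matrix: subject -> resource -> set of rights
ACL = {
    "alice": {"/etc/passwd": {"read"}, "/home/alice": {"read", "write"}},
    "bob":   {"/home/bob": {"read", "write"}},
}

def allowed(subject: str, resource: str, right: str) -> bool:
    """Default-deny: any right not explicitly granted is refused."""
    return right in ACL.get(subject, {}).get(resource, set())

print(allowed("alice", "/etc/passwd", "read"))   # granted
print(allowed("bob", "/etc/passwd", "read"))     # not in bob's row: denied
```

The default-deny structure matters: an unknown subject or resource falls through to an empty set of rights rather than raising an error or accidentally granting access.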


Trust and Assumptions

Underlie all aspects of security:

  • Policies
      • unambiguously partition system states (no conflicts)
      • correctly capture security requirements (no loopholes)
  • Mechanisms
      • enforce policies correctly
      • supporting mechanisms work correctly


Assurance

How well will security work?

Everyone wants high assurance, but high assurance implies high cost

A tradeoff is needed between:

  • security
  • functionality
  • ease of use (e.g. strict password policies yield passwords that are harder for humans to remember but not necessarily harder for computers to guess)
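
The password tradeoff can be made concrete with a quick entropy calculation: a uniformly random password of length n over an alphabet of size k has n·log2(k) bits of entropy. The two policies compared below are illustrative:

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy in a uniformly random password."""
    return length * math.log2(alphabet_size)

# Illustrative comparison: 8 lowercase letters vs. 12 printable-ASCII characters
short_simple = entropy_bits(26, 8)    # ~37.6 bits
long_complex = entropy_bits(94, 12)   # ~78.7 bits
print(round(short_simple, 1), round(long_complex, 1))
```

The catch, of course, is that humans rarely pick uniformly at random, which is exactly where the ease-of-use tradeoff bites.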

Assuring assurance:

  • Specification: statement of what the system is supposed to do
  • Design: how the system will meet the specification
  • Implementation: programs/systems that carry out the design

Security by Obscurity

Premise: if we hide the inner workings of a system, it will be secure.

  • Less applicable in emerging world of vendor-independent open standards
  • Less applicable in world of widespread computer knowledge and expertise

Obscurity doesn't hurt, but it shouldn't be the only line of defense.

Security by Legislation

Premise: if we instruct our users on how to behave, we can secure our systems.

For example: Users should not

  • share passwords
  • write down passwords
  • type password when someone is looking over their shoulder

User awareness is important, but cannot be trusted. Users are stupid (and that's why we have Windows 8[2])


Operational Issues

  • Cost-benefit analysis: Is it cheaper to prevent or to recover?
  • Risk analysis: Should we protect this asset? How much should we spend protecting it?
  • Laws and customs: Are the desired security measures legal? Will people abide by them?
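
Cost-benefit analysis is often framed as annualized loss expectancy (ALE = loss per incident × expected incidents per year) compared against the yearly cost of a control. The dollar figures below are made up purely for illustration:

```python
def annual_loss_expectancy(loss_per_incident: float, incidents_per_year: float) -> float:
    """ALE = single-loss expectancy x annualized rate of occurrence."""
    return loss_per_incident * incidents_per_year

# Hypothetical numbers: a breach costs $50k and occurs ~0.2 times per year
ale = annual_loss_expectancy(50_000, 0.2)   # expected yearly loss
control_cost = 4_000                         # yearly cost of a preventive control
print(ale, ale > control_cost)               # control costs less than expected loss
```

If the control costs more than the expected loss it averts, recovery (accepting the risk) may be the cheaper option, which is the point of the first bullet above.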

Human Issues

  • Organizational problems
  • People problems
  • Human users are the weakest link


Security Lifecycle

  1. Threats
  2. Policy
  3. Specification
  4. Design
  5. Implementation
  6. Operation


Optional Honor Project

  • form a team (1–3 members)
  • design, implement, measure, analyze, and evaluate an interesting idea on some security topic
  • outcome: a new attack, defense, system, tool, service, or understanding of something
  • no need to do the 5th homework or take the final exam
  • deadline to apply is Feb. 14th
  • monthly updates to the professor and TA
  • grade based on novelty, depth, correctness, clarity of presentation, and effort


Footnotes

  1. non-repudiation: irrefutable evidence that a party is indeed the sender or receiver of certain information
  2. Now who is more stupid: the users that don't learn how to use a computer or the developers who build software tailored for the users at that level?