Commit 6d87a59 ("added more files", 1 parent 86ac1dc)
295 files changed: +34967 −0 lines
Lines changed: 178 additions & 0 deletions
[SOUND] Computer security, more recently known as cyber security, is an attribute of a computer system. The primary attribute that system builders focus on is correctness. They want their systems to behave as specified under expected circumstances. If I'm developing a banking website, I'm concerned that when a client specifies a funds transfer of, say, $100 from one of her accounts, then $100 is indeed transferred if funds are available. If I'm developing a word processor, I'm concerned that when a file is saved and reloaded, I get back my data from where I left off. And so on.

A secure computer system is one that prevents specific undesirable behaviors under wide-ranging circumstances. While correctness is largely about what a system should do, security is about what it should not do, even when there is an adversary who is actively and maliciously trying to circumvent any protective measures that you might put in place.

There are three classic security properties that systems usually attempt to satisfy; violations of these properties constitute undesirable behavior. These are broad properties: different systems will have specific instances of some of them, depending on what the system does.

The first property is confidentiality. If an attacker is able to manipulate the system so as to steal resources or information, such as personal attributes or corporate secrets, then he has violated confidentiality.

The second property is integrity. If an attacker is able to modify or corrupt information kept by a system, or is able to misuse the system's functionality, then he has violated the system's integrity. Example violations include the destruction of records, the modification of system logs, the installation of unwanted software like spyware, and more.

The final property is availability. If an attacker compromises a system so as to deny service to legitimate users, for example, to purchase products or to access bank funds, then the attacker has violated the system's availability.

Few systems today are completely secure, as evidenced by the constant stream of reported security breaches that you may have seen in the news. In 2011, for example, the RSA corporation was breached; I'll say more about how in a moment. The adversary was able to steal sensitive tokens related to RSA's SecurID devices. These tokens were then used to break into companies that use SecurID. In late 2013, the Adobe corporation was breached, and both source code and customer records were stolen. At around the same time, attackers compromised Target's point-of-sale terminals and were able to steal around 40 million credit and debit card numbers. And these are just a few high-profile examples.

How did the attackers breach these systems? Many breaches begin with the exploitation of a vulnerability in the system in question. A vulnerability is a defect that an adversary can exploit through carefully crafted interactions to get the system to behave insecurely. In general, a defect is a problem in the design or implementation of the system such that it fails to meet its requirements; in other words, it fails to behave correctly. A flaw is a defect in the design, while a bug is a defect in the implementation. A vulnerability is a defect that affects security-relevant behavior, rather than simply correctness.
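To make the distinction concrete, here is a minimal, hypothetical sketch (mine, not from the lecture) of a defect that is also a vulnerability: a file-serving routine that behaves correctly for ordinary filenames, but that a carefully crafted name steers outside its directory. The directory and filenames are invented.

```python
import os

PUBLIC_DIR = "/srv/public"  # hypothetical document root

def resolve(filename):
    """Naive resolution: correct for ordinary names like 'report.txt'."""
    return os.path.join(PUBLIC_DIR, filename)

def resolve_safely(filename):
    """The fix: normalize the path and refuse anything outside PUBLIC_DIR."""
    path = os.path.normpath(os.path.join(PUBLIC_DIR, filename))
    if not path.startswith(PUBLIC_DIR + os.sep):
        raise ValueError("path escapes the public directory")
    return path

# Expected use: both behave the same.
assert resolve("report.txt") == "/srv/public/report.txt"

# Crafted input: the naive version is steered to a file it should never serve.
assert os.path.normpath(resolve("../../etc/passwd")) == "/etc/passwd"
```

For typical inputs the two functions are indistinguishable, which is why the defect survives correctness testing; only a misuse case exposes it.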
As an example, consider the RSA 2011 breach. This breach hinged on a defect in the implementation of the Adobe Flash Player. Where the Flash Player should benignly reject malformed input files, the defect instead allowed the attacker to provide a carefully crafted input file that could manipulate the program to run code of the attacker's choice. This input file could be embedded in a Microsoft Excel spreadsheet so that the Flash Player was automatically invoked when the spreadsheet was opened. In the actual attack, the adversary sent such a spreadsheet to an executive at the company. The email masqueraded as being from a colleague, so the executive was beguiled into opening that file. This sort of faked email is called a spear phishing attack, and it's quite common. Once the spreadsheet was opened, the attacker was able to silently install malware on the executive's machine and, from there, carry out the attack.
This example highlights an important distinction between viewing software through the lens of correctness and through the lens of security. From the point of view of correctness, the Flash vulnerability is just a bug, and all non-trivial software has bugs. Companies admit to shipping their software with known bugs because it would be too expensive to fix them all. Instead, developers focus on bugs that would arise in typical situations. The bugs that are left, like the Flash vulnerability, come up rarely, and users are used to dealing with them when they do. If doing something causes their software to crash, users quickly learn that that something is not something to do, and they work around it. Eventually, a bug becomes so burdensome to so many users that a company will fix it.

On the other hand, from the point of view of security, it is not sufficient to judge the importance of a bug only with respect to typical use cases. Developers must consider atypical misuse cases, because this is exactly what the adversary will do. Whereas a normal user might trip across a bug and cause the software to crash, an adversary will attempt to reproduce that crash, understand why it is happening, and then manipulate the interaction to turn that crash into an exploitation.

In short, to ensure that a system meets its security goals, we must strive to eliminate bugs and design flaws. We must think carefully about those properties that must always hold, no matter what, and ensure that our design and implementation do not contain defects that would compromise security. We must also design the system so that any defects that do inevitably remain are harder to exploit.
Lines changed: 219 additions & 0 deletions
[SOUND] So far we have talked about computer security generally, but what is software security, the subject of this class in particular? Software security is a branch of computer security that focuses on the secure design and implementation of software. In other words, it focuses on avoiding software vulnerabilities, flaws, and bugs. While software security overlaps with and complements other areas of computer security, it is distinguished by its focus on a secure system's code. This focus makes it a white-box approach, where other approaches are more black-box: they tend to ignore the software's internals.

Why is software security's focus on the code important? The short answer is that software defects are often the root cause of security problems, and software security aims to address these defects directly. Other forms of security tend to ignore the software and build up defenses around it. Just like the walls of a castle, these defenses are important and work up to a point. But when software defects remain, clever attackers often find a way to bypass those walls.

We'll now consider a few standard methods for security enforcement and see how their black-box nature presents limitations that software security techniques can address.

Our first example is security enforcement by the operating system, or OS. When computer security was growing up as a field in the early 1970s, the operating system was the focus. To the operating system, the code of a running program is not what is important. Instead, the OS cares about what the program does, that is, its actions as it executes. These actions, called system calls, include reading or writing files, sending network packets, and running new programs. The operating system enforces security policies that limit the scope of system calls. For example, the OS can ensure that Alice's programs cannot access Bob's files, or that untrusted user programs cannot set up trusted services on standard network ports.
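The OS's style of enforcement can be pictured as a reference monitor that checks each system-call-like action against an access-control table. This is a toy sketch, not how a real OS is implemented; the users and files below are invented.

```python
# Toy reference monitor: (user, file) -> set of permitted operations.
ACL = {
    ("alice", "/home/alice/notes.txt"): {"read", "write"},
    ("bob",   "/home/bob/ledger.txt"):  {"read", "write"},
    # No entry for Alice on Bob's file means she gets nothing.
}

def syscall_allowed(user, path, op):
    """Allow the operation only if the policy explicitly grants it."""
    return op in ACL.get((user, path), set())

assert syscall_allowed("alice", "/home/alice/notes.txt", "write")
assert not syscall_allowed("alice", "/home/bob/ledger.txt", "read")
```

Note that the monitor never looks at the program's code, only at the actions it attempts; that is exactly the black-box character described above.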
The operating system's security is critically important, but it is not always sufficient. In particular, some of the security-relevant actions of a program are too fine-grained to be mediated as system calls, and so the software itself needs to be involved. For example, a database management system, or DBMS, is a server that manages data whose security policy is specific to an application that is using that data. For an online store, for example, a database may contain security-sensitive account information for customers and vendors alongside other records, such as product descriptions, which are not security sensitive at all. It is up to the DBMS to implement security policies that control access to this data, not the OS.
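The point can be sketched in application code (a hypothetical store's records and roles, not a real DBMS): to the OS, this is just one process reading one file, so the per-record policy has to live in the software.

```python
# Hypothetical records for an online store; 'sensitive' marks account data.
RECORDS = [
    {"id": 1, "kind": "product", "body": "Blue mug, $8",    "sensitive": False},
    {"id": 2, "kind": "account", "body": "card=4111-xxxx",  "sensitive": True},
]

def query(requester_role, records=RECORDS):
    """App-level policy: only the 'admin' role may see sensitive rows."""
    return [r for r in records
            if not r["sensitive"] or requester_role == "admin"]

assert [r["id"] for r in query("customer")] == [1]
assert [r["id"] for r in query("admin")] == [1, 2]
```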
Operating systems are also unable to enforce certain kinds of security policies. Operating systems typically act as an execution monitor, which determines whether to allow or disallow a program action based on the current execution context and the program's prior actions. However, there are some kinds of policies, such as information flow policies, that simply cannot be enforced precisely without consideration of potential future actions, or even non-actions. Software-level mechanisms can be brought to bear in these cases, perhaps in cooperation with the OS. We will consider information flow policies in more depth later in this class.
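A tiny example (mine, not the lecture's) of why an execution monitor struggles with information flow: in the function below the secret leaks into public output through an implicit flow, yet no individual action ever copies or transmits the secret, so a monitor watching actions alone has nothing to block.

```python
def leak_bit(secret_bit):
    """No action touches the secret directly; the branch does the leaking."""
    public = 0
    if secret_bit == 1:   # control flow depends on the secret...
        public = 1        # ...so 'public' now encodes it (an implicit flow)
    return public

assert leak_bit(1) == 1
assert leak_bit(0) == 0   # even the *absence* of the assignment leaks the bit
```

Detecting this requires reasoning about what the program could have done on other paths, which is information a purely action-based monitor does not have.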
Another popular sort of security enforcement mechanism is a network monitor, like a firewall or an intrusion detection system, or IDS. A firewall generally works by blocking connections and packets from entering the network. For example, a firewall may block all attempts to connect to network servers except those listening on designated ports, such as TCP port 80, the standard port for web servers. Firewalls are particularly useful when there is software running on the local network that is only intended to be used by local users.
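A firewall's port-based rule amounts to a simple predicate over packet headers. This is a toy model; real firewalls match on much more state, and the allowed-port set here is invented.

```python
ALLOWED_TCP_PORTS = {80}  # only the standard web port is exposed

def admit(packet):
    """Admit a packet only if it targets an allowed TCP port."""
    return packet["proto"] == "tcp" and packet["dst_port"] in ALLOWED_TCP_PORTS

assert admit({"proto": "tcp", "dst_port": 80})          # web traffic passes
assert not admit({"proto": "tcp", "dst_port": 22})      # e.g. SSH is blocked
```

Notice that the predicate never inspects the payload: anything riding on port 80 is admitted, which is the weakness discussed next.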
An intrusion detection system provides more fine-grained control by examining the contents of network packets, looking for suspicious patterns. For example, to exploit a vulnerable server, an attacker may send a carefully crafted input to that server as a network packet. An IDS can look for such packets and filter them out to prevent the attack from taking place.

Firewalls and IDSs are good at reducing the avenues for attack and preventing known vectors of attack, but both devices can be worked around. For example, most firewalls will allow traffic on port 80, because they assume it is benign web traffic. But there is no guarantee that port 80 only runs web servers, even if that's usually the case. In fact, developers have invented SOAP, which stands for Simple Object Access Protocol, to work around firewalls' blocking of ports other than port 80. SOAP permits more general-purpose message exchanges, but encodes them using the web protocol.

Now, IDS patterns are more fine-grained, and more able to look at the details of what's going on, than are firewalls. But IDSs can be fooled as well, by inconsequential differences in attack patterns. Attempts to fill those gaps by using more sophisticated filters can slow down traffic, and attackers can exploit such slowdowns by sending lots of problematic traffic, creating a denial of service, that is, a loss of availability.
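Both the strength and the brittleness of signature matching show up in a few lines. This is a toy scanner with an invented signature; real IDSs such as Snort use far richer rules, but the evasion idea is the same.

```python
SIGNATURE = b"/bin/sh"  # invented signature for some known exploit payload

def flag_packet(payload):
    """Flag a packet whose payload contains the known attack bytes."""
    return SIGNATURE in payload

exploit = b"GET /x HTTP/1.0\r\n" + b"AAAA/bin/shAAAA"
assert flag_packet(exploit)   # the basic exploit is caught

# An inconsequential difference -- splitting the same bytes across two
# packets -- slips past a scanner that inspects packets one at a time.
part1, part2 = exploit[:24], exploit[24:]
assert not flag_packet(part1) and not flag_packet(part2)
```

Reassembling streams before matching closes this particular gap, but at the cost of the buffering and slowdown the paragraph above describes.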
Finally, consider anti-virus scanners. These are tools that examine the contents of files, emails, and other traffic on a host machine, looking for signs of attack. They are quite similar to IDSs, but they operate on files, and have less stringent performance requirements as a result. But they too can often be bypassed by making small changes to attack vectors.

Now we conclude our comparison of software security to black-box security with an example: the Heartbleed bug. Heartbleed is the name given to a bug in version 1.0.1 of the OpenSSL implementation of the Transport Layer Security protocol, or TLS. This bug can be exploited by getting the buggy server running OpenSSL to return portions of its memory. The bug is an example of a buffer overflow, which we will consider in detail later in this course.
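The heart of the bug can be simulated in a few lines. This is a simplified model of the heartbeat exchange, not OpenSSL's actual code: the server echoes back as many bytes as the request claims to contain, without checking the claim against the payload it actually received.

```python
def heartbeat_response(memory, payload_offset, claimed_len):
    """Buggy echo: trusts the attacker-supplied length field."""
    return memory[payload_offset:payload_offset + claimed_len]

# Simulated server memory: the heartbeat payload sits next to other data.
memory = b"bird" + b"SECRET-KEY-MATERIAL"

# Honest request: a 4-byte payload, claimed length 4.
assert heartbeat_response(memory, 0, 4) == b"bird"

# Heartbleed-style request: claim far more than was sent, and the
# contents of adjacent memory leak into the reply.
leak = heartbeat_response(memory, 0, 23)
assert b"SECRET-KEY-MATERIAL" in leak
```

The one-line fix is equally simple in spirit: compare the claimed length against the number of payload bytes actually received, and discard the request on a mismatch.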
Let's look at black-box security mechanisms and how they fare against Heartbleed. Operating system enforcement and anti-virus scanners can do little to help. For the former, an exploit that steals data does so using the privileges normally granted to a TLS-enabled server, so the OS can see nothing wrong. For the latter, the exploit occurs while the TLS server is executing, therefore leaving no obvious traces in the file system.

Basic packet filters used by IDSs can look for signs of exploit packets, and the FBI issued signatures for the Snort IDS soon after Heartbleed was announced. These signatures should work against basic exploits, but attackers may be able to apply variations in packet format, such as chunking, to bypass the signatures. In any case, the ramifications of a successful attack are not easily determined, because any exfiltrated data will go back over the encrypted channel.

Now, compared to these, software security methods aim to go straight to the source of the problem, by preventing or more completely mitigating the defect in the software.
