Commit cfdd9a4

Replace html syntax by markdown equivalent (TheAlgorithms#2500)
1 parent fb3f3ff commit cfdd9a4

File tree: 1 file changed, +50 -88 lines


DataStructures/HashMap/Readme.md

Lines changed: 50 additions & 88 deletions
@@ -1,103 +1,65 @@
-<b><h1 align=center> HASHMAP DATA STRUCTURE</h1></b>
-<p>A hash map organizes data so you can quickly look up values for a given key.</p>
-
-## <h2>Strengths:</h2>
-<ul>
-<li><strong>Fast lookups : </strong> Lookups take O(1) time on average.</li>
-<li><strong>Flexible keys : </strong> Most data types can be used for keys, as long as they're hashable.</li>
-</ul>
-
-## <h2>Weaknesses:</h2>
-
-<ul>
-<li><strong>Slow worst-case : </strong> Lookups take O(n) time in the worst case.</li>
-<li><strong>Unordered : </strong> Keys aren't stored in a special order. If you're looking for the smallest key, the largest key, or all the keys in a range, you'll need to look through every key to find it.</li>
-<li><strong>Single-directional lookups : </strong> While you can look up the value for a given key in O(1) time, looking up the keys for a given value requires looping through the whole dataset—O(n) time.</li>
-<li><strong>Not cache-friendly :</strong> Many hash table implementations use linked lists, which don't put data next to each other in memory.</li>
-</ul>
-
-## <h2>Time Complexity</h2>
-
-<table border=1>
-<tr>
-<th></th>
-<th>AVERAGE</th>
-<th>WORST</th>
-</tr>
-<tr>
-<td>Space</td>
-<td>O(n)</td>
-<td>O(n)</td>
-</tr>
-<tr>
-<td>Insert</td>
-<td>O(1)</td>
-<td>O(n)</td>
-</tr>
-<tr>
-<td>Lookup</td>
-<td>O(1)</td>
-<td>O(n)</td>
-</tr>
-<tr>
-<td>Delete</td>
-<td>O(1)</td>
-<td>O(n)</td>
-</tr>
-</table>
-
-## <h2> Internal Structure of HashMap</h2>
-
-<p>Internally HashMap contains an array of Node and a node is represented as a class that contains 4 fields:</p>
-
-<ul>
-<li>int hash</li>
-<li>K key</li>
-<li>V value</li>
-<li>Node next</li>
-</ul>
-<p>It can be seen that the node is containing a reference to its own object. So it’s a linked list. </p>
-
-## <h2>Performance of HashMap</h2>
+# HASHMAP DATA STRUCTURE
+
+A hash map organizes data so you can quickly look up values for a given key.
+
+## Strengths:
+- **Fast lookups**: Lookups take O(1) time on average.
+- **Flexible keys**: Most data types can be used for keys, as long as they're hashable.
+
+## Weaknesses:
+- **Slow worst-case**: Lookups take O(n) time in the worst case.
+- **Unordered**: Keys aren't stored in a special order. If you're looking for the smallest key, the largest key, or all the keys in a range, you'll need to look through every key to find it.
+- **Single-directional lookups**: While you can look up the value for a given key in O(1) time, looking up the keys for a given value requires looping through the whole dataset—O(n) time.
+- **Not cache-friendly**: Many hash table implementations use linked lists, which don't put data next to each other in memory.
+
+## Time Complexity
+|        | AVERAGE | WORST |
+|--------|---------|-------|
+| Space  | O(n)    | O(n)  |
+| Insert | O(1)    | O(n)  |
+| Lookup | O(1)    | O(n)  |
+| Delete | O(1)    | O(n)  |
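The operations in the table above can be exercised directly with `java.util.HashMap`. A minimal sketch (the class name `HashMapBasics` is illustrative; the map API is standard JDK):

```java
import java.util.HashMap;
import java.util.Map;

class HashMapBasics {
    // Demonstrates the average-case O(1) insert, lookup, and delete
    // operations from the complexity table.
    static Integer demo() {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);    // insert: O(1) on average
        ages.put("bob", 25);
        ages.put("alice", 31);    // inserting an existing key overwrites its value
        ages.remove("bob");       // delete: O(1) on average
        return ages.get("alice"); // lookup: O(1) on average
    }
}
```

Each of these calls hashes the key once to find its bucket, which is what keeps the average cost constant.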
+
+## Internal Structure of HashMap
+Internally, HashMap contains an array of Nodes, where each node is represented by a class with 4 fields:
+- int hash
+- K key
+- V value
+- Node next
+
+Note that a node holds a reference to another object of its own type, so the nodes in each bucket form a linked list.
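The four fields listed above can be sketched as a small node class (a simplified stand-in for the entry type inside Java's HashMap; the field names follow the description above):

```java
// Simplified bucket node: stores the key's cached hash, the key, the value,
// and a link to the next node in the same bucket. The `next` reference is
// what makes each bucket a singly linked list.
class Node<K, V> {
    final int hash;
    final K key;
    V value;
    Node<K, V> next; // reference to another Node of the same type

    Node(int hash, K key, V value, Node<K, V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
}
```

A lookup first indexes the node array by hash, then walks this `next` chain comparing keys.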
+
+## Performance of HashMap
 Performance of HashMap depends on 2 parameters which are named as follows:
-<ul>
-<li>Initial Capacity</li>
-<li>Load Factor</li>
-</ul>
-<p>
-<strong>Initial Capacity : </strong> It is the capacity of HashMap at the time of its creation (It is the number of buckets a HashMap can hold when the HashMap is instantiated). In java, it is 2^4=16 initially, meaning it can hold 16 key-value pairs.
-</p>
-<p>
-<strong>Load Factor : </strong> It is the percent value of the capacity after which the capacity of Hashmap is to be increased (It is the percentage fill of buckets after which Rehashing takes place). In java, it is 0.75f by default, meaning the rehashing takes place after filling 75% of the capacity.
-</p>
-<p>
-<strong>Threshold : </strong> It is the product of Load Factor and Initial Capacity. In java, by default, it is (16 * 0.75 = 12). That is, Rehashing takes place after inserting 12 key-value pairs into the HashMap.
-</p>
-<p>
-<strong>Rehashing : </strong> It is the process of doubling the capacity of the HashMap after it reaches its Threshold. In java, HashMap continues to rehash(by default) in the following sequence – 2^4, 2^5, 2^6, 2^7, …. so on.
-</p>
-<p>
+- Initial Capacity
+- Load Factor
+
+**Initial Capacity**: The capacity of the HashMap at the time of its creation (the number of buckets the HashMap can hold when it is instantiated). In Java, it is 2^4 = 16 initially, meaning it has room for 16 key-value pairs before any resizing.
+
+**Load Factor**: The fraction of the capacity after which the capacity of the HashMap is to be increased (the percentage fill of buckets after which rehashing takes place). In Java, it is 0.75f by default, meaning rehashing takes place after 75% of the capacity is filled.
+
+**Threshold**: The product of the Load Factor and the Initial Capacity. In Java, by default, it is 16 * 0.75 = 12. That is, rehashing takes place after inserting 12 key-value pairs into the HashMap.
+
+**Rehashing**: The process of doubling the capacity of the HashMap after it reaches its Threshold. In Java, HashMap (by default) continues to rehash through the sequence 2^4, 2^5, 2^6, 2^7, and so on.
+
 If the initial capacity is kept higher then rehashing will never be done. But by keeping it higher increases the time complexity of iteration. So it should be chosen very cleverly to increase performance. The expected number of values should be taken into account to set the initial capacity. The most generally preferred load factor value is 0.75 which provides a good deal between time and space costs. The load factor’s value varies between 0 and 1.
-</p>
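The threshold arithmetic above can be checked directly, and both knobs can be passed to HashMap's two-argument constructor (a minimal sketch; the `(initialCapacity, loadFactor)` constructor is standard JDK API, while `CapacityDemo` and `expectedEntries` are illustrative names):

```java
import java.util.HashMap;
import java.util.Map;

class CapacityDemo {
    // Threshold = capacity * load factor. With the Java defaults
    // (capacity 16, load factor 0.75f) rehashing happens after the
    // 12th insertion.
    static int threshold(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    // To avoid rehashing for a known workload, size the map up front so
    // the threshold exceeds the expected number of entries.
    static Map<String, Integer> presized(int expectedEntries) {
        int capacity = (int) Math.ceil(expectedEntries / 0.75);
        return new HashMap<>(capacity, 0.75f);
    }
}
```

Pre-sizing trades a larger (mostly empty) bucket array for the guarantee that no O(n) resize happens mid-insertion.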
 
 ```
 Note: From Java 8 onward, Java has started using Self Balancing BST instead of a linked list for chaining.
 The advantage of self-balancing bst is, we get the worst case (when every key maps to the same slot) search time is O(Log n).
 ```
+
 Java has two hash table classes: HashTable and HashMap. In general, you should use a HashMap.
 
 While both classes use keys to look up values, there are some important differences, including:
 
-<ul>
-<li>A HashTable doesn't allow null keys or values; a HashMap does.</li>
-<li>A HashTable is synchronized to prevent multiple threads from accessing it at once; a HashMap isn't.</li>
-</ul>
+- A HashTable doesn't allow null keys or values; a HashMap does.
+- A HashTable is synchronized to prevent multiple threads from accessing it at once; a HashMap isn't.
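The null-key difference is easy to observe (a minimal sketch; note that the JDK spells the legacy class `Hashtable`, and `NullKeyDemo` is an illustrative name):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

class NullKeyDemo {
    // HashMap accepts a null key (it is kept in bucket 0).
    static boolean hashMapAllowsNullKey() {
        Map<String, String> map = new HashMap<>();
        map.put(null, "ok");
        return "ok".equals(map.get(null));
    }

    // Hashtable throws NullPointerException for a null key.
    static boolean hashtableAllowsNullKey() {
        try {
            new Hashtable<String, String>().put(null, "ok");
            return true;
        } catch (NullPointerException e) {
            return false;
        }
    }
}
```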
 
-## <h2>When Hash Map operations cost O(n) time ? </h2>
+## When Hash Map operations cost O(n) time?
 
-<p>
-<strong>Hash collisions : </strong> If all our keys caused hash collisions, we'd be at risk of having to walk through all of our values for a single lookup (in the example above, we'd have one big linked list). This is unlikely, but it could happen. That's the worst case.
+**Hash collisions**: If all our keys caused hash collisions, we'd be at risk of having to walk through all of our values for a single lookup (in that case, we'd have one big linked list). This is unlikely, but it could happen. That's the worst case.
 
-<strong>Dynamic array resizing : </strong> Suppose we keep adding more items to our hash map. As the number of keys and values in our hash map exceeds the number of indices in the underlying array, hash collisions become inevitable. To mitigate this, we could expand our underlying array whenever things start to get crowded. That requires allocating a larger array and rehashing all of our existing keys to figure out their new position—O(n) time.
+**Dynamic array resizing**: Suppose we keep adding more items to our hash map. As the number of keys and values in our hash map exceeds the number of indices in the underlying array, hash collisions become inevitable. To mitigate this, we could expand our underlying array whenever things start to get crowded. That requires allocating a larger array and rehashing all of our existing keys to figure out their new position—O(n) time.
 
-</p>
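The collision worst case described in the new text can be forced with a key type whose `hashCode` sends every key to the same bucket (an illustrative sketch; `CollisionDemo` and `ConstantHashKey` are made-up names). With every key colliding, all entries pile up in one chain and each lookup degrades to a scan of that chain:

```java
import java.util.HashMap;
import java.util.Map;

class CollisionDemo {
    // A deliberately bad key: every instance hashes to the same bucket.
    static final class ConstantHashKey {
        final int id;
        ConstantHashKey(int id) { this.id = id; }

        @Override public int hashCode() { return 42; } // all keys collide

        @Override public boolean equals(Object o) {
            return o instanceof ConstantHashKey && ((ConstantHashKey) o).id == id;
        }
    }

    static Integer lookupAfterCollisions(int n, int wanted) {
        Map<ConstantHashKey, Integer> map = new HashMap<>();
        for (int i = 0; i < n; i++) {
            map.put(new ConstantHashKey(i), i); // every put lands in one bucket
        }
        // Still correct, but each get must search the single shared bucket.
        return map.get(new ConstantHashKey(wanted));
    }
}
```

Results stay correct; only the cost changes, which is why a well-distributed `hashCode` matters for performance.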
