What mechanism does Java's ConcurrentHashMap employ to allow for concurrent reads and updates while maintaining thread safety?
Read-write locks separating readers and writers
Lock-free data structures using atomic operations
A single global lock for all operations
Fine-grained locking at the bucket level
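For reference, a minimal Java sketch of the behaviour this question targets: since Java 8, ConcurrentHashMap updates an individual bin with CAS when it is empty, or a brief synchronized block on that bin's first node otherwise, while reads take no locks at all. The class and counter names below are purely illustrative.

    import java.util.concurrent.ConcurrentHashMap;

    public class ConcurrentCountDemo {
        public static void main(String[] args) throws InterruptedException {
            ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
            Runnable worker = () -> {
                for (int i = 0; i < 10_000; i++) {
                    // merge() updates this bin atomically: CAS for an empty bin,
                    // otherwise a short synchronized block on the bin's head node.
                    counts.merge("hits", 1, Integer::sum);
                }
            };
            Thread t1 = new Thread(worker);
            Thread t2 = new Thread(worker);
            t1.start(); t2.start();
            t1.join(); t2.join();
            // Reads never block, so get() can proceed alongside concurrent updates.
            System.out.println(counts.get("hits")); // 20000
        }
    }
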
In cuckoo hashing, how many hash functions are typically used?
3
1
2
It depends on the size of the hash table.
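For reference, a minimal sketch of the classic two-hash-function cuckoo scheme: each key has exactly one candidate slot in each of two tables, so a lookup probes at most two positions. The table sizes, the second hash function, and the fixed eviction bound are simplifying assumptions; a real implementation would rehash when it detects a cycle.

    import java.util.Objects;

    public class CuckooSketch {
        private final String[] table1 = new String[16];
        private final String[] table2 = new String[16];

        private int h1(String key) { return Math.floorMod(key.hashCode(), table1.length); }
        private int h2(String key) { return Math.floorMod(Objects.hash(key, "seed2"), table2.length); }

        boolean contains(String key) {
            // A key can only live at h1(key) in table1 or h2(key) in table2,
            // so a lookup inspects at most two slots.
            return key.equals(table1[h1(key)]) || key.equals(table2[h2(key)]);
        }

        void insert(String key) {
            String k = key;
            for (int i = 0; i < 32; i++) {               // bounded eviction chain
                int p1 = h1(k);
                String evicted = table1[p1];
                table1[p1] = k;                          // claim the slot in table1
                if (evicted == null) return;
                int p2 = h2(evicted);
                String bumped = table2[p2];
                table2[p2] = evicted;                    // displaced key moves to table2
                if (bumped == null) return;
                k = bumped;                              // keep relocating the loser
            }
            throw new IllegalStateException("eviction cycle; a real table would rehash");
        }

        public static void main(String[] args) {
            CuckooSketch set = new CuckooSketch();
            set.insert("alice");
            set.insert("bob");
            System.out.println(set.contains("alice") + " " + set.contains("carol")); // true false
        }
    }
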
In a hash table using double hashing, the second hash function is used to:
Generate a new key if a collision occurs.
Determine the step size for probing in case of a collision.
Calculate the size of the hash table.
Determine the initial index to store the key.
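For reference, a sketch of the double-hashing probe sequence, in which the second hash supplies the step size: slot_i = (h1(k) + i * h2(k)) mod m. The table size and hash formulas are illustrative; h2 must never return 0, and a prime m ensures every probe sequence visits all slots.

    public class DoubleHashingProbe {
        static final int M = 11;                         // prime size: any step visits every slot

        static int h1(int key) { return Math.floorMod(key, M); }
        static int h2(int key) { return 1 + Math.floorMod(key, M - 1); }  // step size, never 0

        public static void main(String[] args) {
            int key = 37;
            // On a collision, double hashing probes h1(k), h1(k)+h2(k), h1(k)+2*h2(k), ... (mod M).
            for (int i = 0; i < M; i++) {
                int slot = Math.floorMod(h1(key) + i * h2(key), M);
                System.out.println("probe " + i + " -> slot " + slot);
            }
        }
    }
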
Why is it generally recommended to avoid using mutable objects as keys in hash tables?
Hash tables cannot store mutable objects as keys; only immutable objects are allowed.
Mutable keys make the implementation of the hash table significantly more complex.
Using mutable keys increases the memory overhead of the hash table.
Mutable keys can lead to inconsistent state if their values are modified after being inserted into the hash table.
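For reference, an illustrative sketch of the problem: a mutable List used as a HashMap key is modified after insertion, so it now hashes to a different bucket than the one it occupies and the entry becomes effectively unreachable.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class MutableKeyPitfall {
        public static void main(String[] args) {
            Map<List<Integer>, String> map = new HashMap<>();
            List<Integer> key = new ArrayList<>(List.of(1, 2, 3));
            map.put(key, "value");

            key.add(4);  // mutating the key changes its hashCode()

            System.out.println(map.get(key));                       // null: the wrong bucket is searched
            System.out.println(map.containsKey(List.of(1, 2, 3)));  // false: the stored key no longer matches
            System.out.println(map.size());                         // 1: the stranded entry still uses memory
        }
    }
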
In a hashmap implementation using open addressing with linear probing, what is the worst-case time complexity for searching for a key if the hash table is nearly full?
O(log n)
O(1)
O(n log n)
O(n)
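For reference, a sketch of search under linear probing in a nearly full, clustered table: in the worst case the loop inspects every slot, giving O(n). The table size and key values are contrived to force one long cluster.

    public class LinearProbingSearch {
        static Integer[] keys = new Integer[8];

        static int indexOf(int key) {
            int m = keys.length;
            int start = Math.floorMod(key, m);
            for (int i = 0; i < m; i++) {              // worst case: probe all m slots
                int slot = (start + i) % m;
                if (keys[slot] == null) return -1;     // empty slot means the key is absent
                if (keys[slot] == key) return slot;
            }
            return -1;                                 // table full and key not present
        }

        public static void main(String[] args) {
            // Every inserted key hashes to slot 0, producing one long cluster.
            for (int k : new int[] {0, 8, 16, 24, 32, 40, 48}) {
                int slot = Math.floorMod(k, keys.length);
                while (keys[slot] != null) slot = (slot + 1) % keys.length;
                keys[slot] = k;
            }
            System.out.println(indexOf(48));           // slot 6, found only on the 7th probe
        }
    }
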
What security risk arises from storing sensitive data such as passwords directly (in plaintext) in a hashmap, given that the hashmap's internal hashing of keys provides no protection for the values?
Hashmaps are inherently less secure than other data structures for storing passwords.
Storing any data in a hashmap increases the risk of SQL injection attacks.
An attacker gaining access to the hashmap could retrieve the plaintext passwords.
Hash collisions could allow attackers to bypass authentication.
How does using a cryptographic hash function with a random salt improve the security of a hashmap storing user credentials?
It encrypts the data stored in the hashmap, making it unreadable without the decryption key.
It makes it significantly harder for attackers to perform rainbow table attacks.
It eliminates the possibility of hash collisions.
It prevents unauthorized users from accessing the hashmap's keys.
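For reference, a minimal sketch of per-user salting before hashing, which defeats precomputed rainbow tables because identical passwords hash differently for every user. SHA-256 is used here only for brevity; a real system would use a deliberately slow KDF such as PBKDF2, bcrypt, or Argon2. All class and method names are illustrative.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.HashMap;
    import java.util.Map;

    public class SaltedCredentialStore {
        record Credential(byte[] salt, byte[] hash) {}

        private final Map<String, Credential> users = new HashMap<>();
        private final SecureRandom random = new SecureRandom();

        void register(String user, String password) throws Exception {
            byte[] salt = new byte[16];
            random.nextBytes(salt);                    // fresh random salt per user
            users.put(user, new Credential(salt, hash(salt, password)));
        }

        boolean verify(String user, String password) throws Exception {
            Credential c = users.get(user);
            return c != null && MessageDigest.isEqual(c.hash(), hash(c.salt(), password));
        }

        private static byte[] hash(byte[] salt, String password) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);                           // salt is mixed in before the password
            return md.digest(password.getBytes(StandardCharsets.UTF_8));
        }

        public static void main(String[] args) throws Exception {
            SaltedCredentialStore store = new SaltedCredentialStore();
            store.register("alice", "hunter2");
            System.out.println(store.verify("alice", "hunter2"));  // true
            System.out.println(store.verify("alice", "wrong"));    // false
        }
    }
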
How can a hash flooding attack impact the performance of a web server using a hashmap to store session data?
It can improve the efficiency of the hashmap by distributing data more evenly.
It has no impact on performance, as hash flooding attacks only target data integrity.
It can lead to increased memory usage and faster response times.
It can cause a denial-of-service by forcing the server to handle a large number of collisions.
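For reference, a sketch of the mechanism behind hash flooding: an attacker submits many keys that all collide (here exploiting the fact that "Aa" and "BB" share the same String.hashCode()), so work on one bucket degenerates from O(1) toward O(n), or O(log n) once a modern Java HashMap treeifies the bucket. The key-generation loop and sizes are illustrative.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class HashFloodingSketch {
        public static void main(String[] args) {
            // "Aa" and "BB" have identical hash codes, so every same-length
            // combination of them collides too: 2^12 = 4096 keys, one bucket.
            List<String> colliding = List.of("");
            for (int round = 0; round < 12; round++) {
                List<String> next = new ArrayList<>();
                for (String s : colliding) {
                    next.add(s + "Aa");
                    next.add(s + "BB");
                }
                colliding = next;
            }

            Map<String, Integer> sessions = new HashMap<>();
            long start = System.nanoTime();
            for (String key : colliding) {
                sessions.put(key, 0);                  // every insert fights over the same bucket
            }
            System.out.printf("inserted %d colliding keys in %d ms%n",
                    colliding.size(), (System.nanoTime() - start) / 1_000_000);
        }
    }
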
Which of these statements best describes the advantage of using a perfect hash function over a regular hash function?
It eliminates the need for collision handling.
It reduces the memory used by the hash table.
It guarantees constant-time search, insertion, and deletion in the worst case.
It allows for faster key insertions.
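For reference, a sketch of a (non-minimal) perfect hash for a small, fixed key set: every key maps to a distinct slot, so lookups need no collision handling at all. The key set and the hand-picked formula below are illustrative assumptions.

    public class PerfectHashSketch {
        private static final String[] KEYS = new String[8];
        private static final String[] VALUES = new String[8];

        // (first char + second char) mod 8 is collision-free for exactly these keys:
        // GET -> 4, PUT -> 5, POST -> 7, DELETE -> 1.
        private static int slot(String key) {
            return (key.charAt(0) + key.charAt(1)) % 8;
        }

        public static void main(String[] args) {
            String[][] entries = {{"GET", "read"}, {"PUT", "replace"},
                                  {"POST", "create"}, {"DELETE", "remove"}};
            for (String[] e : entries) {
                KEYS[slot(e[0])] = e[0];
                VALUES[slot(e[0])] = e[1];
            }
            // Lookup is one array access plus one equality check; no collision handling.
            int s = slot("POST");
            System.out.println("POST".equals(KEYS[s]) ? VALUES[s] : "not found"); // create
        }
    }
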
In a web server that uses a hashmap to store cached web pages, which collision resolution strategy is generally preferred for its performance when handling a high volume of concurrent requests?
Separate Chaining with balanced binary search trees
Separate Chaining with linked lists
Open Addressing with linear probing
Double Hashing
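For reference, a sketch of separate chaining in which each bucket is a balanced tree (a TreeMap) rather than a linked list, so even a heavily collided bucket answers lookups in O(log k); Java's own HashMap converts a long bucket chain into a red-black tree in a similar spirit. Class and method names are illustrative.

    import java.util.TreeMap;

    public class TreeChainedCache {
        @SuppressWarnings("unchecked")
        private final TreeMap<String, String>[] buckets = new TreeMap[64];

        private TreeMap<String, String> bucket(String key) {
            int i = Math.floorMod(key.hashCode(), buckets.length);
            if (buckets[i] == null) buckets[i] = new TreeMap<>();   // lazily created tree per bucket
            return buckets[i];
        }

        void put(String url, String page) { bucket(url).put(url, page); }
        String get(String url)            { return bucket(url).get(url); }

        public static void main(String[] args) {
            TreeChainedCache cache = new TreeChainedCache();
            cache.put("/index.html", "<html>home</html>");
            System.out.println(cache.get("/index.html"));
        }
    }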