Readme for assignment 1 (inlupp 1)! This folder contains a hash table data structure (including some utility functions), a linked list data structure used by the hash table, an iterator for the linked list, a test file (unit_tests), and a program, freq_count, that uses the hash table to count words in text files.
Compile both the linked list and hash table data structures with the following command:
    make both

If you wish to only compile the linked list data structure, use the following command:

    make linked

Running the tests is done via the Makefile with the following command:

    make tests

For checking memory leaks with valgrind:
    make memtests

Line coverage in hash_table.c was 90.40% of 177 lines, with a branch coverage of 93.94% of 66 branches. Line coverage in linked_list.c was 90.07% of 151 lines, with a branch coverage of 89.66% of 58 branches.
Some functions have an added bool argument called successful that indicates whether the operation failed; this added argument can be seen in the function signatures. Other functions use assert to handle failure.
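As a rough illustration of the success-flag pattern, here is a minimal sketch; the function below is a simplified, hypothetical stand-in, not one of the real signatures from the header files.

    #include <stdbool.h>
    #include <stdio.h>

    // Hypothetical, simplified lookup illustrating the pattern: the bool
    // out-parameter 'successful' tells the caller whether the operation
    // failed, so a valid value can be told apart from "key not found".
    static int example_lookup(int key, bool *successful)
    {
        if (key == 42)            // pretend this is the only stored key
        {
            *successful = true;
            return 1337;          // the stored value
        }
        *successful = false;
        return 0;                 // dummy value, only meaningful on success
    }

    int main(void)
    {
        bool successful = false;
        int value = example_lookup(42, &successful);
        if (successful)
        {
            printf("found value %d\n", value);
        }
        else
        {
            printf("key not found\n");
        }
        return 0;
    }

In this sketch the flag is passed by pointer so the function can both return a value and report failure without reserving a sentinel value.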
The top three functions across all inputs were strcmp (36.21%, a library function), ioopm_hash_table_has_key (26.71%) and string_eq (22.77%). The result was similar for each input, except that strcmp was used less frequently for small.txt. One trend is that the files with more unique words did not use strcmp as much.
We expected find_previous_entry_for_key to take more of the time because it is used often (in lookup, remove and insert). However, strcmp is also called frequently and is less time efficient. Both are O(n), but we think find_previous_entry_for_key is in fact called less frequently than strcmp, and contributes to a lesser extent.
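To make the profiling argument concrete, here is a hedged sketch of how a bucket scan that compares keys through a strcmp wrapper ends up charging most of its time to strcmp; the struct and helper below are assumptions about the general shape, not the actual code.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    // Assumed shape of string_eq: a thin wrapper around strcmp, so every
    // key comparison costs O(m) in the length of the strings.
    static bool string_eq(const char *a, const char *b)
    {
        return strcmp(a, b) == 0;
    }

    // Hypothetical bucket entry, mirroring a linked-list bucket.
    typedef struct entry
    {
        const char   *key;
        int           value;
        struct entry *next;
    } entry_t;

    // Scanning a bucket is O(n) in the number of entries, and every step
    // calls string_eq/strcmp -- which is why strcmp dominates the profile.
    static entry_t *find_in_bucket(entry_t *bucket, const char *key)
    {
        for (entry_t *cursor = bucket; cursor != NULL; cursor = cursor->next)
        {
            if (string_eq(cursor->key, key))
            {
                return cursor;
            }
        }
        return NULL;
    }

    int main(void)
    {
        entry_t last  = { "cat", 3, NULL };
        entry_t first = { "dog", 5, &last };
        entry_t *hit  = find_in_bucket(&first, "cat");
        printf("%s -> %d\n", hit->key, hit->value);
        return 0;
    }

Here n is the number of entries in the bucket, and each comparison is itself linear in the word length.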
It’s difficult to make our program go “faster” because comparing strings has a time complexity of O(n) in the length of the strings, and it is done often in the program (strings are compared while counting their frequencies). ioopm_hash_table_has_key was also used a lot. Maybe searching through each bucket could be made quicker if we used, for example, binary search trees instead of linked lists.
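To make the binary search tree idea concrete, here is a minimal, entirely hypothetical sketch (not part of the assignment code) of what a BST bucket could look like; a reasonably balanced tree needs roughly log2(n) string comparisons per bucket lookup instead of n.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    // Hypothetical BST node that could replace a linked-list bucket entry.
    typedef struct node
    {
        const char  *key;
        int          value;
        struct node *left;
        struct node *right;
    } node_t;

    // Insert or update a key; the tree is ordered by strcmp on the keys.
    static node_t *bst_insert(node_t *root, const char *key, int value)
    {
        if (root == NULL)
        {
            node_t *node = calloc(1, sizeof(node_t));
            node->key   = key;
            node->value = value;
            return node;
        }
        int cmp = strcmp(key, root->key);
        if      (cmp < 0) root->left  = bst_insert(root->left, key, value);
        else if (cmp > 0) root->right = bst_insert(root->right, key, value);
        else              root->value = value;   // existing key: update
        return root;
    }

    // Lookup walks left or right depending on strcmp, so a balanced tree
    // does about log2(n) comparisons per bucket instead of n.
    static node_t *bst_lookup(node_t *root, const char *key)
    {
        while (root != NULL)
        {
            int cmp = strcmp(key, root->key);
            if (cmp == 0) return root;
            root = (cmp < 0) ? root->left : root->right;
        }
        return NULL;
    }

    int main(void)
    {
        node_t *bucket = NULL;                   // nodes are leaked in this tiny sketch
        bucket = bst_insert(bucket, "hello", 2);
        bucket = bst_insert(bucket, "world", 1);
        node_t *hit = bst_lookup(bucket, "hello");
        printf("%s -> %d\n", hit->key, hit->value);
        return 0;
    }

Each comparison is of course still linear in the word length, so this only reduces the number of comparisons per lookup, and it assumes the trees stay reasonably balanced.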