I like this phrasing for remembering what logs do:

log₁₀ 100 is like asking “how many 10s do we multiply together to get 100?” (the answer is 2, since 10 × 10 = 100)

arrays are contiguous blocks of memory, and one major downside is the need to resize them: you have to ask for a finite amount of space up front, and if all that space gets taken up you need to find a new contiguous block of memory and copy everything over.

linked lists are not contiguous; instead, each element in the list holds a reference to the next node, which allows the elements to be stored anywhere in memory. This makes inserts and deletes constant time (once you’re at the right node): no matter how large the list gets, the operation takes the same amount of time.

The downside with linked lists is that searching through them takes linear time, because in the worst case you have to go through every node to find the element you are looking for. Arrays, on the other hand, give constant-time access to any element: since the data is stored contiguously in memory, you can effectively “jump” straight to that address no matter how large the array is.
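
To make the contrast concrete, here’s a minimal sketch (mine, not the book’s) of a singly linked list with an O(1) insert at the head and an O(n) search, next to Python’s array-backed list for O(1) reads:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next  # reference to the next node (None at the tail)

class LinkedList:
    def __init__(self):
        self.head = None

    def insert_at_head(self, value) -> None:
        # O(1): just rewire the head pointer, no shifting or copying
        self.head = Node(value, self.head)

    def search(self, value) -> bool:
        # O(n): worst case we walk every node
        node = self.head
        while node is not None:
            if node.value == value:
                return True
            node = node.next
        return False

ll = LinkedList()
for v in [3, 1, 2]:
    ll.insert_at_head(v)
print(ll.search(1))  # True

arr = [3, 1, 2]
print(arr[2])  # O(1) random access: jump straight to the address
```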

Table 1: Summary: Arrays vs Linked Lists Time Complexity

| Data Structure | Inserts | Reads | Deletes | Notes |
| --- | --- | --- | --- | --- |
| Arrays | O(N) | O(1) | O(N) | All elements need to be shifted so there are no gaps after deletes |
| Linked Lists | O(1) (at head) | O(N) | O(1) | It’s common to keep pointers to the head and tail; operations are only O(1) at those spots |

random access: jump directly to an element

sequential access: read elements one by one

arrays are more popular than linked lists because many workloads require random access

I really liked the example of the farmer trying to divide his rectangular plot of land into the largest square plots possible.

```python
def farmer_land(h: int, w: int) -> tuple[int, int]:
    # base case: the remaining strip divides evenly, so the larger
    # dimension is the side of the biggest square
    if h == 0 or w == 0:
        side = max(h, w)
        return (side, side)
    # recursive case: carve off squares and recurse on the remainder
    if h >= w:
        return farmer_land(h % w, w)
    else:
        return farmer_land(h, w % h)

if __name__ == "__main__":
    h = 640
    w = 1680
    print(farmer_land(h, w))
```

(80, 80)
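
Worth noting (my observation, not the book’s framing): farmer_land is Euclid’s GCD algorithm in disguise; the side of the largest square is gcd(h, w), and math.gcd(640, 1680) is indeed 80.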

I like the mental model for merge sort and quicksort: think of the height of the call stack as levels (the number of times it gets recursively called, roughly log n of them), where each level takes O(N) time.
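
Here’s a minimal merge sort sketch (mine, not the book’s) to pair with that picture: each split adds one level, there are about log₂ n levels, and the merging at each level touches all n elements:

```python
def merge_sort(arr: list[int]) -> list[int]:
    # base case: 0 or 1 elements are already sorted
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # each split adds one "level" to the call stack
    right = merge_sort(arr[mid:])
    # merging two sorted halves is O(n) work at every level
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```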

❓

How can resizing a hash table be O(1) time complexity?
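
My best answer: a single resize is O(n), but if the capacity doubles each time the table fills up, that copying cost averages out to O(1) per insert (amortized analysis). A toy sketch of the doubling behavior, deliberately ignoring collision handling:

```python
# toy table that only demonstrates resizing; real hash tables
# also handle collisions, which is omitted here
class ToyHashTable:
    def __init__(self) -> None:
        self.capacity = 8
        self.slots = [None] * self.capacity
        self.size = 0

    def insert(self, key, value) -> None:
        if self.size / self.capacity > 0.7:  # common load-factor threshold
            self._grow()
        self.slots[hash(key) % self.capacity] = (key, value)
        self.size += 1

    def _grow(self) -> None:
        # this one call is O(n), but doubling means each element is
        # re-copied at most about once per doubling, so n inserts cost
        # O(n) total work -> O(1) amortized per insert
        old_slots = self.slots
        self.capacity *= 2
        self.slots = [None] * self.capacity
        for entry in old_slots:
            if entry is not None:
                self.slots[hash(entry[0]) % self.capacity] = entry

table = ToyHashTable()
for i in range(100):
    table.insert(f"key{i}", i)  # triggers only a handful of doublings
print(table.capacity)  # 256
```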

The shortest-path problem on an unweighted graph (fewest hops) is best solved with breadth-first search
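
A minimal BFS sketch (the graph and names below are made up) that returns the path with the fewest edges:

```python
from collections import deque

def bfs_shortest_path(graph: dict[str, list[str]], start: str, goal: str) -> list[str] | None:
    # graph is an adjacency list; BFS explores nodes in order of distance,
    # so the first time we reach `goal` we've found a shortest path
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no path exists

graph = {"you": ["alice", "bob"], "alice": ["peggy"], "bob": ["peggy"], "peggy": []}
print(bfs_shortest_path(graph, "you", "peggy"))  # ['you', 'alice', 'peggy']
```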

Weighted graph, e.g. fastest path, where each edge has a number attached to it called a “weight” (e.g. time, money): then use Dijkstra’s algorithm
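
And a minimal Dijkstra sketch using a priority queue (the graph below is a small made-up example; edge weights must be non-negative):

```python
import heapq

def dijkstra(graph: dict[str, dict[str, int]], start: str) -> dict[str, int]:
    # graph maps node -> {neighbor: edge weight}
    dist = {start: 0}
    heap = [(0, start)]  # priority queue of (distance so far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale entry, already found a shorter route
        for neighbor, weight in graph[node].items():
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

graph = {"start": {"a": 6, "b": 2}, "a": {"end": 1}, "b": {"a": 3, "end": 5}, "end": {}}
print(dijkstra(graph, "start"))  # {'start': 0, 'a': 5, 'b': 2, 'end': 6}
```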

Good examples of where greedy algorithms work and where they don’t:

Class scheduling problem: you are trying to figure out the maximum number of classes you can schedule in one classroom. The solution is surprisingly simple: first, take the class that ends the soonest. Next, take the class that starts after the first class ends and ends the soonest. Repeat.
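
A quick sketch of that greedy rule (the class names and times here are made up):

```python
def schedule_classes(classes: list[tuple[str, float, float]]) -> list[str]:
    # each class is (name, start, end); greedily take the class that
    # ends soonest, then the next one that starts after it ends
    chosen = []
    last_end = 0.0
    for name, start, end in sorted(classes, key=lambda c: c[2]):
        if start >= last_end:
            chosen.append(name)
            last_end = end
    return chosen

classes = [("art", 9, 10), ("eng", 9.5, 10.5), ("math", 10, 11),
           ("cs", 10.5, 11.5), ("music", 11, 12)]
print(schedule_classes(classes))  # ['art', 'math', 'music']
```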

Knapsack problem: you are a robber trying to maximize the value of the items you steal, but your knapsack only holds 35 lbs. Let’s say there are 3 items:

item 1: $3,000, 35 lbs

item 2: $2,500, 25 lbs

item 3: $1,000, 10 lbs

If the robber takes the highest-value item first, they fill up the knapsack right away for a total of $3,000. If they had taken the other two items instead, they could have had $3,500 worth of items rather than $3,000.
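
A quick brute-force check of those numbers (a sketch, using the three items above):

```python
from itertools import combinations

# (name, value in dollars, weight in lbs)
items = [("item 1", 3000, 35), ("item 2", 2500, 25), ("item 3", 1000, 10)]
capacity = 35

# check every subset that fits and keep the most valuable one
best_value, best_combo = 0, ()
for r in range(len(items) + 1):
    for combo in combinations(items, r):
        weight = sum(item[2] for item in combo)
        value = sum(item[1] for item in combo)
        if weight <= capacity and value > best_value:
            best_value, best_combo = value, combo

print(best_value, [item[0] for item in best_combo])
# 3500 ['item 2', 'item 3'] -- the greedy "grab the priciest item" choice loses
```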

The traveling salesman and set-covering problems are NP-complete: solving them exactly requires checking an exponential number of possibilities (e.g., every permutation of cities), which blows up as the input grows.

Review

This book was a fun read. It provides an easy introduction to many common data structures and algorithms. I am a visual learner, so some of the examples provided made the algorithms really click for me. It reminds me of how I visualize SQL operations when I write them, such as joins and window functions.

I already had some basic DSA knowledge coming into this book but still felt I got something out of it. I think this book would be great for someone first learning DSA.

I recommend trying to implement the algorithms yourself before the author reveals their solution. Even if you don’t know what the syntax might be, at least write out what each step would need to do to implement the algorithm.