
uninformed search algorithms

ai, notes, omscs

When we have no estimate of how close we are getting to our goal state, we just do our best with what the problem definition gives us: the available actions and their costs. When we do have a way to estimate that distance, as in informed strategies built on best-first search, we can do better.

Nevertheless, the lack of knowledge shall not deter us from searching, so let’s see what we can do.

Breadth-First Search

When all our actions cost the same, an appropriate strategy is to perform a breadth-first search, where we expand the root, then all of its successors, then their successors, and so on. This is a systematic strategy and therefore complete: even in an infinite state space, every state at a finite depth is eventually reached, so a solution will be found if one exists.
A FIFO queue is the right choice for the frontier and gives us the correct order of nodes to explore: new nodes go to the back, while older nodes, which are always shallower, are expanded first. The reached collection can be a simple set of states, because once we’ve reached a state, we can never find a shorter path back to it. This also means we can do an early goal test, when a node is generated, compared to the late one of best-first search, when a node is popped.

Cost-optimality and Completeness
By the time it generates nodes at depth d, it has already generated all the nodes at depth d-1, so if a shallower solution existed it would already have been found. BFS therefore returns the shallowest solution, which makes it cost-optimal for problems where all actions have the same cost, but not for variable action costs.

Space and time complexity
Let us imagine a uniform search tree where the root generates b nodes, each of which generates b more nodes, and so on. The total number of nodes generated down to depth d is then:

$$ 1 + b + b^2 + b^3 + ... + b^d = O(b^d) $$

Since all the nodes remain in memory, both the space and time complexity are $O(b^d)$.

Pseudocode:

function breadth-first-search(problem) return solution | failure
  node <- Node(problem.Initial)
  if goaltest(node.State) return node

  frontier <- queue().push(node)   // FIFO queue initialized with the root node
  reached <- {problem.Initial}     // set of states already reached

  while not frontier.empty():
    node <- frontier.pop()

    for each child in expand(problem, node)
      s <- child.State
      if goaltest(s) return child  // early goal test, done on generation
      if s is not in reached
        reached.add(s)
        frontier.push(child)

  return failure
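
If it helps to see the same idea as runnable code, here is a small Python sketch. The adjacency-dict graph format (each state maps to a list of successor states) and the function name are assumptions made for this illustration, not the notes' own code.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """BFS over an adjacency dict mapping a state to its successor states
    (an illustrative representation, not from the notes). Returns the
    shallowest path from start to goal, or None if the goal is unreachable."""
    if start == goal:
        return [start]
    frontier = deque([[start]])          # FIFO queue of paths, shallowest first
    reached = {start}                    # states we have already reached
    while frontier:
        path = frontier.popleft()
        for child in graph.get(path[-1], []):
            if child == goal:            # early goal test, on generation
                return path + [child]
            if child not in reached:
                reached.add(child)
                frontier.append(path + [child])
    return None
```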

Uniform-Cost Search (Dijkstra’s Algorithm)

When the action costs are different, spreading out in waves of uniform depth like BFS is no longer the optimal strategy: some paths are cheaper than others, and it is more desirable to focus our search effort on them. This is the idea behind uniform-cost search, which can be implemented as a best-first search with the path cost as the evaluation function.
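
As an illustration, here is a minimal runnable Python sketch of this idea. The adjacency-dict representation, where each state maps to a list of (successor, cost) pairs, and the function name are assumptions for the example, not code from the notes.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Best-first search ordered by path cost (uniform-cost search).
    `graph` maps a state to a list of (successor, action_cost) pairs --
    an illustrative representation, not part of the original notes.
    Returns (cost, path) of a cheapest path, or None if no path exists."""
    frontier = [(0, start, [start])]          # priority queue ordered by path cost
    best_cost = {start: 0}                    # cheapest cost found so far per state

    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:                     # late goal test: only when popped
            return cost, path
        if cost > best_cost.get(state, float("inf")):
            continue                          # stale entry; a cheaper path was already found
        for child, step_cost in graph.get(state, []):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(child, float("inf")):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None


if __name__ == "__main__":
    # Tiny worked example: the cheap A -> B -> C route beats the direct A -> C edge.
    graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
    print(uniform_cost_search(graph, "A", "C"))   # (2, ['A', 'B', 'C'])
```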

Cost-optimality and Completeness
UCS is complete and cost-optimal: the first solution it finds has a cost at least as low as that of any other node on the frontier. Even in an infinite state space, all paths are considered systematically in order of increasing cost, so the search never disappears down a single infinite path (provided every action cost is bounded below by some $\epsilon > 0$).

Space and time complexity
The complexity can be characterized in terms of $C^*$, the cost of the optimal solution, and $\epsilon$, a lower bound on the cost of each action, with $\epsilon > 0$. In these terms, the space and time complexity become:

$$ O\left(b^{1 + \lfloor C^*/\epsilon \rfloor}\right) $$

The worst case can be much greater than $b^d$. This is the downside of always exploring the immediate low-cost paths first: the search can spend a lot of effort on many small, cheap steps even when the optimal solution runs through a few high-cost actions. When all action costs are equal, the complexity is essentially the same as that of breadth-first search.
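
As a quick sanity check: if every action costs the same amount $\epsilon$ and the shallowest solution lies at depth $d$, then $C^* = d\epsilon$ and the bound reduces to

$$ O\left(b^{1 + \lfloor C^*/\epsilon \rfloor}\right) = O\left(b^{\,d+1}\right) $$

which matches breadth-first search up to a single extra factor of $b$.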

Depth-First Search

This algorithm always expands the deepest node in the frontier first. It may be implemented as a version of best-first search with the evaluation function being the negative of the depth, but it is usually implemented as a tree-like search that does not store the reached states and returns the first solution found.
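
A minimal recursive Python sketch of this tree-like variant, using the same illustrative adjacency-dict format as the BFS example above (again an assumption, not the notes' code); the check against the current path is added only so the example terminates on cyclic graphs:

```python
def depth_first_search(graph, state, goal, path=None):
    """Tree-like DFS: no reached set, returns the first path found, which is
    not necessarily the shortest or cheapest. Without the path check below it
    could loop forever on cyclic graphs, as noted in the text."""
    if path is None:
        path = [state]
    if state == goal:
        return path
    for child in graph.get(state, []):
        if child in path:                 # avoid cycles along the current path only
            continue
        result = depth_first_search(graph, child, goal, path + [child])
        if result is not None:
            return result
    return None
```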

Cost-optimality and Completeness
DFS is not cost-optimal: it returns the first solution it finds as it descends the search tree, which may not be the cheapest one.

In an infinite state space, the algorithm may run forever, descending a branch of the search tree that does not contain a solution. Thus it is incomplete in infinite state spaces.

It is complete for finite state spaces. In acyclic state spaces, it may end up expanding the same state many times via different paths, but it will eventually terminate. In cyclic state spaces, however, it can get stuck in an infinite loop.

Space and time complexity
The space complexity can be characterized in terms of the branching factor $b$ and the maximum depth of the tree $m$: only a single path from the root, together with the unexpanded siblings of each node on it, needs to be kept in memory, giving $O(bm)$. The time complexity is $O(b^m)$.

This is a massive improvement in memory consumption over breadth-first search, which makes it a suitable algorithm for many problems despite its downsides.

Depth-limited and iterative deepening search

A strategy to keep DFS from going down an infinite path is depth-limited search, where a depth limit $l$ is supplied and all nodes at that depth are treated as if they have no successors. The catch is that if the chosen limit is smaller than the depth of the shallowest solution, the search will fail to find it.

Iterative deepening search solves this problem by running depth-limited search with an increasing limit until a solution is found, or until failure is returned indicating that no solution exists at any depth. It combines the best of both BFS and DFS.
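
A minimal Python sketch of depth-limited search and the iterative deepening driver around it, again using the illustrative adjacency-dict graph format from the earlier examples; the max_depth cap is an assumption added so the example loop always terminates:

```python
def depth_limited_search(graph, state, goal, limit, path=None):
    """Depth-limited DFS: nodes at the depth limit are treated as having no
    successors. Returns a path, None (no solution within reach), or the
    string "cutoff" when the limit was hit somewhere."""
    if path is None:
        path = [state]
    if state == goal:
        return path
    if limit == 0:
        return "cutoff"
    cutoff_occurred = False
    for child in graph.get(state, []):
        if child in path:                 # cycle check all the way up the path
            continue
        result = depth_limited_search(graph, child, goal, limit - 1, path + [child])
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None


def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until a solution is
    found or no cutoff occurs (meaning no solution exists). The max_depth cap
    is an illustrative safeguard, not part of the algorithm."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result != "cutoff":
            return result                 # a path, or None when no solution exists
    return None
```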

Cost-optimality and Completeness
The algorithm is cost-optimal for problems where all actions have the same cost, and it is complete on any finite state space when we check for cycles all the way up the path.

Space and time complexity
Minimal memory footprint: $O(bd)$ when there is a solution, or $O(bm)$ when none exists on finite state spaces.

Time complexity is $O(b^d)$ when there is a solution, or $O(b^m)$ when none exists.

A key distinction between vanilla BFS and the BFS-like behavior of iterative deepening is that the latter does not store the reached states in memory, but regenerates them each time they are revisited. This saves memory at the cost of repeated work. It may appear wasteful, but the time complexity stays asymptotically the same as BFS, because most of the nodes live at the deepest levels and are regenerated only a few times, while the frequently regenerated shallow nodes are few in number.
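
Concretely, the nodes just below the root are regenerated $d$ times, the nodes at depth 2 are regenerated $d-1$ times, and so on down to the nodes at depth $d$, which are generated only once, giving a total of

$$ (d)\,b + (d-1)\,b^2 + \dots + (2)\,b^{d-1} + (1)\,b^d = O(b^d) $$

nodes generated, the same order as breadth-first search.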

Performance Comparison

| Measure | Breadth-First | Uniform-Cost | Depth-First | Iterative Deepening |
| --- | --- | --- | --- | --- |
| Completeness | Yes | Yes | No | Yes |
| Cost-Optimal | Yes | Yes | No | Yes |
| Time | $O(b^d)$ | $O(b^{1+\lfloor C^*/\epsilon \rfloor})$ | $O(b^m)$ | $O(b^d)$ |
| Space | $O(b^d)$ | $O(b^{1+\lfloor C^*/\epsilon \rfloor})$ | $O(bm)$ | $O(bd)$ |