diff --git a/.jekyll-metadata b/.jekyll-metadata
index 537c08bbd4..9dff658dc0 100644
Binary files a/.jekyll-metadata and b/.jekyll-metadata differ
diff --git a/Gemfile.lock b/Gemfile.lock
index b13b1fd3b7..599f4f1714 100644
--- a/Gemfile.lock
+++ b/Gemfile.lock
@@ -1,8 +1,8 @@
 GEM
   remote: https://rubygems.org/
   specs:
-    addressable (2.6.0)
-      public_suffix (>= 2.0.2, < 4.0)
+    addressable (2.7.0)
+      public_suffix (>= 2.0.2, < 5.0)
     colorator (1.1.0)
     concurrent-ruby (1.1.5)
     em-websocket (0.5.1)
@@ -16,7 +16,7 @@ GEM
     http_parser.rb (0.6.0)
     i18n (0.9.5)
       concurrent-ruby (~> 1.0)
-    jekyll (3.8.5)
+    jekyll (3.8.6)
      addressable (~> 2.4)
       colorator (~> 1.0)
       em-websocket (~> 0.5)
@@ -36,19 +36,17 @@ GEM
     listen (~> 3.0)
     kramdown (1.17.0)
     liquid (4.0.3)
-    listen (3.1.5)
-      rb-fsevent (~> 0.9, >= 0.9.4)
-      rb-inotify (~> 0.9, >= 0.9.7)
-      ruby_dep (~> 1.2)
+    listen (3.2.0)
+      rb-fsevent (~> 0.10, >= 0.10.3)
+      rb-inotify (~> 0.9, >= 0.9.10)
     mercenary (0.3.6)
     pathutil (0.16.2)
       forwardable-extended (~> 2.6)
-    public_suffix (3.1.0)
+    public_suffix (4.0.1)
     rb-fsevent (0.10.3)
     rb-inotify (0.10.0)
       ffi (~> 1.0)
-    rouge (3.3.0)
-    ruby_dep (1.5.0)
+    rouge (3.11.1)
     safe_yaml (1.0.5)
     sass (3.7.4)
       sass-listen (~> 4.0.0)
diff --git a/_site/advanced-search-exercises/ex_12/index.html b/_site/advanced-search-exercises/ex_12/index.html
index b79e91911a..7c9d740563 100644
--- a/_site/advanced-search-exercises/ex_12/index.html
+++ b/_site/advanced-search-exercises/ex_12/index.html
@@ -170,9 +170,9 @@

Exercise path-planning-exercise into an environment as follows:
-- The percept will be a list of the positions, *relative to the - agent*, of the visible vertices. The percept does - *not* include the position of the robot! The robot must +- The percept will be a list of the positions, relative to the + agent, of the visible vertices. The percept does + not include the position of the robot! The robot must learn its own position from the map; for now, you can assume that each location has a different “view.”
@@ -181,7 +181,7 @@

otherwise, the robot stops at the point where its path first intersects an obstacle. If the agent returns a zero motion vector and is at the goal (which is fixed and known), then the environment - teleports the agent to a *random location* (not inside + teleports the agent to a random location (not inside an obstacle).
- The performance measure charges the agent 1 point for each unit of @@ -240,9 +240,9 @@

Exercise path-planning-exercise into an environment as follows:
-- The percept will be a list of the positions, *relative to the - agent*, of the visible vertices. The percept does - *not* include the position of the robot! The robot must +- The percept will be a list of the positions, relative to the + agent, of the visible vertices. The percept does + not include the position of the robot! The robot must learn its own position from the map; for now, you can assume that each location has a different “view.”
@@ -251,7 +251,7 @@

otherwise, the robot stops at the point where its path first intersects an obstacle. If the agent returns a zero motion vector and is at the goal (which is fixed and known), then the environment - teleports the agent to a *random location* (not inside + teleports the agent to a random location (not inside an obstacle).
- The performance measure charges the agent 1 point for each unit of diff --git a/_site/advanced-search-exercises/ex_13/index.html b/_site/advanced-search-exercises/ex_13/index.html index 4c75380105..13c368c4a3 100644 --- a/_site/advanced-search-exercises/ex_13/index.html +++ b/_site/advanced-search-exercises/ex_13/index.html @@ -170,8 +170,8 @@

maze environment like the one shown in Figure maze-3x3-figure. The agent knows that its initial location is (1,1), that the goal is at (3,3), and that the -actions *Up*, *Down*, *Left*, *Right* have their usual -effects unless blocked by a wall. The agent does *not* know +actions Up, Down, Left, Right have their usual +effects unless blocked by a wall. The agent does not know where the internal walls are. In any given state, the agent perceives the set of legal actions; it can also tell whether the state is one it has visited before.
@@ -186,8 +186,8 @@

3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?
-Notice that this contingency plan is a solution for *every -possible environment* fitting the given description. Therefore, +Notice that this contingency plan is a solution for every +possible environment fitting the given description. Therefore, interleaving of search and execution is not strictly necessary even in unknown environments. @@ -214,8 +214,8 @@

maze environment like the one shown in Figure maze-3x3-figure. The agent knows that its initial location is (1,1), that the goal is at (3,3), and that the -actions *Up*, *Down*, *Left*, *Right* have their usual -effects unless blocked by a wall. The agent does *not* know +actions Up, Down, Left, Right have their usual +effects unless blocked by a wall. The agent does not know where the internal walls are. In any given state, the agent perceives the set of legal actions; it can also tell whether the state is one it has visited before.
@@ -230,8 +230,8 @@

3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?
-Notice that this contingency plan is a solution for *every -possible environment* fitting the given description. Therefore, +Notice that this contingency plan is a solution for every +possible environment fitting the given description. Therefore, interleaving of search and execution is not strictly necessary even in unknown environments.

diff --git a/_site/advanced-search-exercises/ex_5/index.html b/_site/advanced-search-exercises/ex_5/index.html index c530db7e90..4888ebcaeb 100644 --- a/_site/advanced-search-exercises/ex_5/index.html +++ b/_site/advanced-search-exercises/ex_5/index.html @@ -166,11 +166,11 @@

-The **And-Or-Graph-Search** algorithm in +The And-Or-Graph-Search algorithm in Figure and-or-graph-search-algorithm checks for repeated states only on the path from the root to the current state. Suppose that, in addition, the algorithm were to store -*every* visited state and check against that list. (See in +every visited state and check against that list. (See in Figure breadth-first-search-algorithm for an example.) Determine the information that should be stored and how the algorithm should use that information when a repeated state is found. @@ -199,11 +199,11 @@

-The **And-Or-Graph-Search** algorithm in +The And-Or-Graph-Search algorithm in Figure and-or-graph-search-algorithm checks for repeated states only on the path from the root to the current state. Suppose that, in addition, the algorithm were to store -*every* visited state and check against that list. (See in +every visited state and check against that list. (See in Figure breadth-first-search-algorithm for an example.) Determine the information that should be stored and how the algorithm should use that information when a repeated state is found. diff --git a/_site/advanced-search-exercises/ex_6/index.html b/_site/advanced-search-exercises/ex_6/index.html index 0ecd0cf3d3..056490697b 100644 --- a/_site/advanced-search-exercises/ex_6/index.html +++ b/_site/advanced-search-exercises/ex_6/index.html @@ -166,10 +166,10 @@

-Explain precisely how to modify the **And-Or-Graph-Search** algorithm to +Explain precisely how to modify the And-Or-Graph-Search algorithm to generate a cyclic plan if no acyclic plan exists. You will need to deal with three issues: labeling the plan steps so that a cyclic plan can -point back to an earlier part of the plan, modifying **Or-Search** so that it +point back to an earlier part of the plan, modifying Or-Search so that it continues to look for acyclic plans after finding a cyclic plan, and augmenting the plan representation to indicate whether a plan is cyclic. Show how your algorithm works on (a) the slippery vacuum world, and (b) @@ -195,10 +195,10 @@

-Explain precisely how to modify the **And-Or-Graph-Search** algorithm to +Explain precisely how to modify the And-Or-Graph-Search algorithm to generate a cyclic plan if no acyclic plan exists. You will need to deal with three issues: labeling the plan steps so that a cyclic plan can -point back to an earlier part of the plan, modifying **Or-Search** so that it +point back to an earlier part of the plan, modifying Or-Search so that it continues to look for acyclic plans after finding a cyclic plan, and augmenting the plan representation to indicate whether a plan is cyclic. Show how your algorithm works on (a) the slippery vacuum world, and (b) diff --git a/_site/advanced-search-exercises/ex_9/index.html b/_site/advanced-search-exercises/ex_9/index.html index 841d6c8ba4..45a1f60734 100644 --- a/_site/advanced-search-exercises/ex_9/index.html +++ b/_site/advanced-search-exercises/ex_9/index.html @@ -173,8 +173,8 @@
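The AND-OR search scheme that both of these exercises build on can be sketched in a few lines. The toy nondeterministic problem below (states 1–3 and a single action `go`) is invented purely for illustration; the in-path cycle check shown here is the acyclic version that the second exercise asks you to relax into labeled loop-backs:

```python
# Toy nondeterministic problem: 'go' from state 1 may land in 2 or 3; from 2 it reaches 3.
SUCCESSORS = {1: {"go": (2, 3)}, 2: {"go": (3,)}}
GOAL = 3

def or_search(state, path):
    if state == GOAL:
        return []                      # empty plan: already at the goal
    if state in path:
        return None                    # cycle on the current path: fail (acyclic version)
    for action, outcomes in SUCCESSORS.get(state, {}).items():
        subplans = and_search(outcomes, path + [state])
        if subplans is not None:
            return [action, subplans]  # do action, then branch on the observed outcome
    return None

def and_search(states, path):
    plans = {}
    for s in states:                   # a plan is needed for *every* possible outcome
        plan = or_search(s, path)
        if plan is None:
            return None
        plans[s] = plan
    return plans

print(or_search(1, []))  # ['go', {2: ['go', {3: []}], 3: []}]
```

Storing *every* visited state, as the first exercise proposes, would replace the `state in path` test with a global table mapping solved states to their sub-plans.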

optimality still make sense in this context, or does it require modification? Consider also various possible definitions of the “cost” of executing an action in a belief state; for example, we could use the -*minimum* of the physical costs; or the -*maximum*; or a cost *interval* with the lower +minimum of the physical costs; or the +maximum; or a cost interval with the lower bound being the minimum cost and the upper bound being the maximum; or just keep the set of all possible costs for that action. For each of these, explore whether A* (with modifications if necessary) can return @@ -206,8 +206,8 @@

optimality still make sense in this context, or does it require modification? Consider also various possible definitions of the “cost” of executing an action in a belief state; for example, we could use the -*minimum* of the physical costs; or the -*maximum*; or a cost *interval* with the lower +minimum of the physical costs; or the +maximum; or a cost interval with the lower bound being the minimum cost and the upper bound being the maximum; or just keep the set of all possible costs for that action. For each of these, explore whether A* (with modifications if necessary) can return diff --git a/_site/advanced-search-exercises/index.html b/_site/advanced-search-exercises/index.html index 7eb291f2db..73a1d78888 100644 --- a/_site/advanced-search-exercises/index.html +++ b/_site/advanced-search-exercises/index.html @@ -258,11 +258,11 @@

4. Beyond Classical Search

-The **And-Or-Graph-Search** algorithm in +The And-Or-Graph-Search algorithm in Figure and-or-graph-search-algorithm checks for repeated states only on the path from the root to the current state. Suppose that, in addition, the algorithm were to store -*every* visited state and check against that list. (See in +every visited state and check against that list. (See in Figure breadth-first-search-algorithm for an example.) Determine the information that should be stored and how the algorithm should use that information when a repeated state is found. @@ -286,10 +286,10 @@

4. Beyond Classical Search

-Explain precisely how to modify the **And-Or-Graph-Search** algorithm to +Explain precisely how to modify the And-Or-Graph-Search algorithm to generate a cyclic plan if no acyclic plan exists. You will need to deal with three issues: labeling the plan steps so that a cyclic plan can -point back to an earlier part of the plan, modifying **Or-Search** so that it +point back to an earlier part of the plan, modifying Or-Search so that it continues to look for acyclic plans after finding a cyclic plan, and augmenting the plan representation to indicate whether a plan is cyclic. Show how your algorithm works on (a) the slippery vacuum world, and (b) @@ -367,8 +367,8 @@

4. Beyond Classical Search

optimality still make sense in this context, or does it require modification? Consider also various possible definitions of the “cost” of executing an action in a belief state; for example, we could use the -*minimum* of the physical costs; or the -*maximum*; or a cost *interval* with the lower +minimum of the physical costs; or the +maximum; or a cost interval with the lower bound being the minimum cost and the upper bound being the maximum; or just keep the set of all possible costs for that action. For each of these, explore whether A* (with modifications if necessary) can return @@ -430,9 +430,9 @@

4. Beyond Classical Search

Exercise path-planning-exercise into an environment as follows:
-- The percept will be a list of the positions, *relative to the - agent*, of the visible vertices. The percept does - *not* include the position of the robot! The robot must +- The percept will be a list of the positions, relative to the + agent, of the visible vertices. The percept does + not include the position of the robot! The robot must learn its own position from the map; for now, you can assume that each location has a different “view.”
@@ -441,7 +441,7 @@

4. Beyond Classical Search

otherwise, the robot stops at the point where its path first intersects an obstacle. If the agent returns a zero motion vector and is at the goal (which is fixed and known), then the environment - teleports the agent to a *random location* (not inside + teleports the agent to a random location (not inside an obstacle).
- The performance measure charges the agent 1 point for each unit of @@ -495,8 +495,8 @@

4. Beyond Classical Search

maze environment like the one shown in Figure maze-3x3-figure. The agent knows that its initial location is (1,1), that the goal is at (3,3), and that the -actions *Up*, *Down*, *Left*, *Right* have their usual -effects unless blocked by a wall. The agent does *not* know +actions Up, Down, Left, Right have their usual +effects unless blocked by a wall. The agent does not know where the internal walls are. In any given state, the agent perceives the set of legal actions; it can also tell whether the state is one it has visited before.
@@ -511,8 +511,8 @@

4. Beyond Classical Search

3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?
-Notice that this contingency plan is a solution for *every -possible environment* fitting the given description. Therefore, +Notice that this contingency plan is a solution for every +possible environment fitting the given description. Therefore, interleaving of search and execution is not strictly necessary even in unknown environments.

diff --git a/_site/bayes-nets-exercises/ex_16/index.html b/_site/bayes-nets-exercises/ex_16/index.html index 382a82cf3b..505bd65dce 100644 --- a/_site/bayes-nets-exercises/ex_16/index.html +++ b/_site/bayes-nets-exercises/ex_16/index.html @@ -173,9 +173,9 @@

1. ${\textbf{P}}(B,I,M) = {\textbf{P}}(B){\textbf{P}}(I){\textbf{P}}(M)$.
- 2. ${\textbf{P}}(JG) = {\textbf{P}}(JG,I)$.
+ 2. ${\textbf{P}}(J|G) = {\textbf{P}}(J|G,I)$.
- 3. ${\textbf{P}}(MG,B,I) = {\textbf{P}}(MG,B,I,J)$.
+ 3. ${\textbf{P}}(M|G,B,I) = {\textbf{P}}(M|G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
@@ -191,7 +191,7 @@

Figure politics-figure?
5. Suppose we want to add the variable - $P{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.
politics-figure @@ -224,9 +224,9 @@

1. ${\textbf{P}}(B,I,M) = {\textbf{P}}(B){\textbf{P}}(I){\textbf{P}}(M)$.
- 2. ${\textbf{P}}(JG) = {\textbf{P}}(JG,I)$.
+ 2. ${\textbf{P}}(J|G) = {\textbf{P}}(J|G,I)$.
- 3. ${\textbf{P}}(MG,B,I) = {\textbf{P}}(MG,B,I,J)$.
+ 3. ${\textbf{P}}(M|G,B,I) = {\textbf{P}}(M|G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
@@ -242,7 +242,7 @@

Figure politics-figure?
5. Suppose we want to add the variable - $P{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.
politics-figure diff --git a/_site/bayes-nets-exercises/ex_17/index.html b/_site/bayes-nets-exercises/ex_17/index.html index f18bee3521..33e87a0efd 100644 --- a/_site/bayes-nets-exercises/ex_17/index.html +++ b/_site/bayes-nets-exercises/ex_17/index.html @@ -173,9 +173,9 @@

1. ${\textbf{P}}(B,I,M) = {\textbf{P}}(B){\textbf{P}}(I){\textbf{P}}(M)$.
- 2. ${\textbf{P}}(JG) = {\textbf{P}}(JG,I)$.
+ 2. ${\textbf{P}}(J|G) = {\textbf{P}}(J|G,I)$.
- 3. ${\textbf{P}}(MG,B,I) = {\textbf{P}}(MG,B,I,J)$.
+ 3. ${\textbf{P}}(M|G,B,I) = {\textbf{P}}(M|G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
@@ -191,7 +191,7 @@

Figure politics-figure?
5. Suppose we want to add the variable - $P{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.

@@ -220,9 +220,9 @@

1. ${\textbf{P}}(B,I,M) = {\textbf{P}}(B){\textbf{P}}(I){\textbf{P}}(M)$.
- 2. ${\textbf{P}}(JG) = {\textbf{P}}(JG,I)$.
+ 2. ${\textbf{P}}(J|G) = {\textbf{P}}(J|G,I)$.
- 3. ${\textbf{P}}(MG,B,I) = {\textbf{P}}(MG,B,I,J)$.
+ 3. ${\textbf{P}}(M|G,B,I) = {\textbf{P}}(M|G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
@@ -238,7 +238,7 @@

Figure politics-figure?
5. Suppose we want to add the variable - $P{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.

diff --git a/_site/bayes-nets-exercises/ex_4/index.html b/_site/bayes-nets-exercises/ex_4/index.html index 7edb080b1c..bb8e929a2a 100644 --- a/_site/bayes-nets-exercises/ex_4/index.html +++ b/_site/bayes-nets-exercises/ex_4/index.html @@ -184,8 +184,8 @@

3. Let the parents of $X$ be $\textbf{U} \cup \textbf{V}$ and the parents of $Y$ be $\textbf{V} \cup \textbf{W}$, where $\textbf{U}$ and $\textbf{W}$ are disjoint. The formulas for the new CPTs after arc reversal are as follows: $$\begin{aligned} - {\textbf{P}}(Y\textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y\textbf{V},\textbf{W}, x) {\textbf{P}}(x\textbf{U}, \textbf{V}) \\ - {\textbf{P}}(X\textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(YX, \textbf{V}, \textbf{W}) {\textbf{P}}(X\textbf{U}, \textbf{V}) / {\textbf{P}}(Y\textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ + {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y | \textbf{V},\textbf{W}, x) {\textbf{P}}(x | \textbf{U}, \textbf{V}) \\ + {\textbf{P}}(X | \textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(Y | X, \textbf{V}, \textbf{W}) {\textbf{P}}(X | \textbf{U}, \textbf{V}) / {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ Prove that the new network expresses the same joint distribution over all variables as the original network.
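The arc-reversal formulas above are easy to sanity-check numerically. The sketch below instantiates the simplest case, a chain $U \rightarrow X \rightarrow Y$ with $\textbf{V}$ and $\textbf{W}$ empty and made-up CPT numbers, and verifies that the reversed network defines the same joint distribution:

```python
import itertools

# Hypothetical CPT numbers for a tiny chain U -> X -> Y (V and W empty).
P_U = {0: 0.6, 1: 0.4}
P_X_given_U = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # keyed (x, u)
P_Y_given_X = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.5, (1, 1): 0.5}  # keyed (y, x)

# New CPTs from the arc-reversal formulas in the exercise.
P_Y_given_U = {(y, u): sum(P_Y_given_X[(y, x)] * P_X_given_U[(x, u)] for x in (0, 1))
               for y in (0, 1) for u in (0, 1)}
P_X_given_UY = {(x, u, y): P_Y_given_X[(y, x)] * P_X_given_U[(x, u)] / P_Y_given_U[(y, u)]
                for x in (0, 1) for u in (0, 1) for y in (0, 1)}

# The joint must be unchanged: P(u)P(x|u)P(y|x) == P(u)P(y|u)P(x|u,y).
for u, x, y in itertools.product((0, 1), repeat=3):
    original = P_U[u] * P_X_given_U[(x, u)] * P_Y_given_X[(y, x)]
    reversed_ = P_U[u] * P_Y_given_U[(y, u)] * P_X_given_UY[(x, u, y)]
    assert abs(original - reversed_) < 1e-12
```

The check passes for any CPT numbers, which is exactly what the requested proof establishes symbolically.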

@@ -226,8 +226,8 @@

3. Let the parents of $X$ be $\textbf{U} \cup \textbf{V}$ and the parents of $Y$ be $\textbf{V} \cup \textbf{W}$, where $\textbf{U}$ and $\textbf{W}$ are disjoint. The formulas for the new CPTs after arc reversal are as follows: $$\begin{aligned} - {\textbf{P}}(Y\textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y\textbf{V},\textbf{W}, x) {\textbf{P}}(x\textbf{U}, \textbf{V}) \\ - {\textbf{P}}(X\textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(YX, \textbf{V}, \textbf{W}) {\textbf{P}}(X\textbf{U}, \textbf{V}) / {\textbf{P}}(Y\textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ + {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y | \textbf{V},\textbf{W}, x) {\textbf{P}}(x | \textbf{U}, \textbf{V}) \\ + {\textbf{P}}(X | \textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(Y | X, \textbf{V}, \textbf{W}) {\textbf{P}}(X | \textbf{U}, \textbf{V}) / {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ Prove that the new network expresses the same joint distribution over all variables as the original network.

diff --git a/_site/bayes-nets-exercises/index.html b/_site/bayes-nets-exercises/index.html index 1af6f81700..bad7624be1 100644 --- a/_site/bayes-nets-exercises/index.html +++ b/_site/bayes-nets-exercises/index.html @@ -270,8 +270,8 @@

14. Probabilistic Reasoning

3. Let the parents of $X$ be $\textbf{U} \cup \textbf{V}$ and the parents of $Y$ be $\textbf{V} \cup \textbf{W}$, where $\textbf{U}$ and $\textbf{W}$ are disjoint. The formulas for the new CPTs after arc reversal are as follows: $$\begin{aligned} - {\textbf{P}}(Y\textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y\textbf{V},\textbf{W}, x) {\textbf{P}}(x\textbf{U}, \textbf{V}) \\ - {\textbf{P}}(X\textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(YX, \textbf{V}, \textbf{W}) {\textbf{P}}(X\textbf{U}, \textbf{V}) / {\textbf{P}}(Y\textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ + {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y | \textbf{V},\textbf{W}, x) {\textbf{P}}(x | \textbf{U}, \textbf{V}) \\ + {\textbf{P}}(X | \textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(Y | X, \textbf{V}, \textbf{W}) {\textbf{P}}(X | \textbf{U}, \textbf{V}) / {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ Prove that the new network expresses the same joint distribution over all variables as the original network.

@@ -625,9 +625,9 @@

14. Probabilistic Reasoning

1. ${\textbf{P}}(B,I,M) = {\textbf{P}}(B){\textbf{P}}(I){\textbf{P}}(M)$.
- 2. ${\textbf{P}}(JG) = {\textbf{P}}(JG,I)$.
+ 2. ${\textbf{P}}(J|G) = {\textbf{P}}(J|G,I)$.
- 3. ${\textbf{P}}(MG,B,I) = {\textbf{P}}(MG,B,I,J)$.
+ 3. ${\textbf{P}}(M|G,B,I) = {\textbf{P}}(M|G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
@@ -643,7 +643,7 @@

14. Probabilistic Reasoning

Figure politics-figure?
5. Suppose we want to add the variable - $P{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.
politics-figure @@ -669,9 +669,9 @@

14. Probabilistic Reasoning

1. ${\textbf{P}}(B,I,M) = {\textbf{P}}(B){\textbf{P}}(I){\textbf{P}}(M)$.
- 2. ${\textbf{P}}(JG) = {\textbf{P}}(JG,I)$.
+ 2. ${\textbf{P}}(J|G) = {\textbf{P}}(J|G,I)$.
- 3. ${\textbf{P}}(MG,B,I) = {\textbf{P}}(MG,B,I,J)$.
+ 3. ${\textbf{P}}(M|G,B,I) = {\textbf{P}}(M|G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
@@ -687,7 +687,7 @@

14. Probabilistic Reasoning

Figure politics-figure?
5. Suppose we want to add the variable - $P{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.

diff --git a/_site/complex-decisions-exercises/ex_22/index.html b/_site/complex-decisions-exercises/ex_22/index.html index ba1ec60b1c..b5697fa134 100644 --- a/_site/complex-decisions-exercises/ex_22/index.html +++ b/_site/complex-decisions-exercises/ex_22/index.html @@ -169,11 +169,17 @@

The following payoff matrix, from @Blinder:1983 by way of Bernstein:1996, shows a game between politicians and the Federal Reserve.
-| | Fed: contract | Fed: do nothing | Fed: expand |
-| --- | --- | --- | --- |
-| **Pol: contract** | $F=7, P=1$ | $F=9,P=4$ | $F=6,P=6$ |
-| **Pol: do nothing** | $F=8, P=2$ | $F=5,P=5$ | $F=4,P=9$ |
-| **Pol: expand** | $F=3, P=3$ | $F=2,P=7$ | $F=1,P=8$ |
+$$
+\begin{array}
+  {|r|r|}\hline & Fed: contract & Fed: do nothing & Fed: expand \\
+  \hline
+  Pol: contract & F=7, P=1 & F=9, P=4 & F=6, P=6 \\
+  Pol: do nothing & F=8, P=2 & F=5, P=5 & F=4, P=9 \\
+  Pol: expand & F=3, P=3 & F=2, P=7 & F=1, P=8\\
+  \hline
+\end{array}
+$$
+
Politicians can expand or contract fiscal policy, while the Fed can expand or contract monetary policy. (And of course either side can @@ -206,11 +212,17 @@

The following payoff matrix, from @Blinder:1983 by way of Bernstein:1996, shows a game between politicians and the Federal Reserve.
-| | Fed: contract | Fed: do nothing | Fed: expand |
-| --- | --- | --- | --- |
-| **Pol: contract** | $F=7, P=1$ | $F=9,P=4$ | $F=6,P=6$ |
-| **Pol: do nothing** | $F=8, P=2$ | $F=5,P=5$ | $F=4,P=9$ |
-| **Pol: expand** | $F=3, P=3$ | $F=2,P=7$ | $F=1,P=8$ |
+$$
+\begin{array}
+  {|r|r|}\hline & Fed: contract & Fed: do nothing & Fed: expand \\
+  \hline
+  Pol: contract & F=7, P=1 & F=9, P=4 & F=6, P=6 \\
+  Pol: do nothing & F=8, P=2 & F=5, P=5 & F=4, P=9 \\
+  Pol: expand & F=3, P=3 & F=2, P=7 & F=1, P=8\\
+  \hline
+\end{array}
+$$
+
Politicians can expand or contract fiscal policy, while the Fed can expand or contract monetary policy. (And of course either side can diff --git a/_site/complex-decisions-exercises/index.html b/_site/complex-decisions-exercises/index.html index 4d5231b33a..4afc3e5fad 100644 --- a/_site/complex-decisions-exercises/index.html +++ b/_site/complex-decisions-exercises/index.html @@ -640,11 +640,17 @@

17. Making Complex Decisions

The following payoff matrix, from @Blinder:1983 by way of Bernstein:1996, shows a game between politicians and the Federal Reserve.
-| | Fed: contract | Fed: do nothing | Fed: expand |
-| --- | --- | --- | --- |
-| **Pol: contract** | $F=7, P=1$ | $F=9,P=4$ | $F=6,P=6$ |
-| **Pol: do nothing** | $F=8, P=2$ | $F=5,P=5$ | $F=4,P=9$ |
-| **Pol: expand** | $F=3, P=3$ | $F=2,P=7$ | $F=1,P=8$ |
+$$
+\begin{array}
+  {|r|r|}\hline & Fed: contract & Fed: do nothing & Fed: expand \\
+  \hline
+  Pol: contract & F=7, P=1 & F=9, P=4 & F=6, P=6 \\
+  Pol: do nothing & F=8, P=2 & F=5, P=5 & F=4, P=9 \\
+  Pol: expand & F=3, P=3 & F=2, P=7 & F=1, P=8\\
+  \hline
+\end{array}
+$$
+
Politicians can expand or contract fiscal policy, while the Fed can expand or contract monetary policy. (And of course either side can diff --git a/_site/concept-learning-exercises/ex_16/index.html b/_site/concept-learning-exercises/ex_16/index.html index 75c03d9300..79ea893cc4 100644 --- a/_site/concept-learning-exercises/ex_16/index.html +++ b/_site/concept-learning-exercises/ex_16/index.html @@ -174,17 +174,20 @@
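A brute-force check of the payoff matrix above is a handy way to verify answers to this exercise. Rows are the politicians' choice, columns the Fed's; a cell is a pure-strategy equilibrium when neither player gains by deviating unilaterally:

```python
# Rows: politicians' action; columns: Fed's action; entries (F, P) from the matrix above.
F = [[7, 9, 6], [8, 5, 4], [3, 2, 1]]
P = [[1, 4, 6], [2, 5, 9], [3, 7, 8]]
acts = ["contract", "do nothing", "expand"]

equilibria = []
for i in range(3):
    for j in range(3):
        pol_best = P[i][j] == max(P[k][j] for k in range(3))  # Pol can't gain by switching rows
        fed_best = F[i][j] == max(F[i][k] for k in range(3))  # Fed can't gain by switching cols
        if pol_best and fed_best:
            equilibria.append((acts[i], acts[j]))

print(equilibria)  # [('expand', 'contract')]
```

The unique pure-strategy equilibrium has the politicians expanding and the Fed contracting, with payoffs $F=3, P=3$.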

classify the same number of examples, then break the tie using attributes with lower index numbers (e.g., select $A_1$ over $A_2$).
- -| | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad A_y\quad$ | $\quad y\quad$ | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | 1 | -| $\textbf{x}_2$ | 1 | 0 | 1 | 1 | 1 | -| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 1 | 0 | 0 | -| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | 1 | -| $\textbf{x}_6$ | 0 | 1 | 0 | 1 | 0 | -| $\textbf{x}_7$ | 0 | 0 | 1 | 1 | 1 | -| $\textbf{x}_8$ | 0 | 0 | 1 | 0 | 0 | +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & y \\ + \hline \textbf{x}_1 & 1 & 0 & 0 & 0 & 1 \\ + \textbf{x}_2 & 1 & 0 & 1 & 1 & 1 \\ + \textbf{x}_3 & 0 & 1 & 0 & 0 & 1 \\ + \textbf{x}_4 & 0 & 1 & 1 & 0 & 0 \\ + \textbf{x}_5 & 1 & 1 & 0 & 1 & 1 \\ + \textbf{x}_6 & 0 & 1 & 0 & 1 & 0 \\ + \textbf{x}_7 & 0 & 0 & 1 & 1 & 1 \\ + \textbf{x}_8 & 0 & 0 & 1 & 0 & 0 \\ + \hline +\end{array} +$$

@@ -213,17 +216,20 @@

classify the same number of examples, then break the tie using attributes with lower index numbers (e.g., select $A_1$ over $A_2$).
- -| | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad A_y\quad$ | $\quad y\quad$ | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | 1 | -| $\textbf{x}_2$ | 1 | 0 | 1 | 1 | 1 | -| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 1 | 0 | 0 | -| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | 1 | -| $\textbf{x}_6$ | 0 | 1 | 0 | 1 | 0 | -| $\textbf{x}_7$ | 0 | 0 | 1 | 1 | 1 | -| $\textbf{x}_8$ | 0 | 0 | 1 | 0 | 0 | +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & y \\ + \hline \textbf{x}_1 & 1 & 0 & 0 & 0 & 1 \\ + \textbf{x}_2 & 1 & 0 & 1 & 1 & 1 \\ + \textbf{x}_3 & 0 & 1 & 0 & 0 & 1 \\ + \textbf{x}_4 & 0 & 1 & 1 & 0 & 0 \\ + \textbf{x}_5 & 1 & 1 & 0 & 1 & 1 \\ + \textbf{x}_6 & 0 & 1 & 0 & 1 & 0 \\ + \textbf{x}_7 & 0 & 0 & 1 & 1 & 1 \\ + \textbf{x}_8 & 0 & 0 & 1 & 0 & 0 \\ + \hline +\end{array} +$$

diff --git a/_site/concept-learning-exercises/ex_21/index.html b/_site/concept-learning-exercises/ex_21/index.html index ded0afa4fa..896dc1de90 100644 --- a/_site/concept-learning-exercises/ex_21/index.html +++ b/_site/concept-learning-exercises/ex_21/index.html @@ -166,7 +166,7 @@

-Figure <ahref=""#">kernel-machine-figure</a> +Figure kernel-machine-figure showed how a circle at the origin can be linearly separated by mapping from the features $(x_1, x_2)$ to the two dimensions $(x_1^2, x_2^2)$. But what if the circle is not located at the origin? What if it is an @@ -202,7 +202,7 @@

-Figure kernel-machine-figure +Figure kernel-machine-figure showed how a circle at the origin can be linearly separated by mapping from the features $(x_1, x_2)$ to the two dimensions $(x_1^2, x_2^2)$. But what if the circle is not located at the origin? What if it is an diff --git a/_site/concept-learning-exercises/ex_27/index.html b/_site/concept-learning-exercises/ex_27/index.html index 0d7964f868..0b9ddb5e06 100644 --- a/_site/concept-learning-exercises/ex_27/index.html +++ b/_site/concept-learning-exercises/ex_27/index.html @@ -169,15 +169,23 @@
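For the shifted-circle case raised above, note that $(x_1-a)^2 + (x_2-b)^2 - r^2$ is *linear* in the four features $(x_1^2, x_2^2, x_1, x_2)$, so any circle is linearly separable in that feature space. A quick numerical check, with center and radius chosen arbitrarily:

```python
import random

# A circle centered at (a, b): (x1-a)^2 + (x2-b)^2 = r^2 is linear in
# the features (x1^2, x2^2, x1, x2), so a shifted circle is separable there.
a, b, r = 1.5, -0.5, 1.0
w = [1, 1, -2 * a, -2 * b]   # weights on (x1^2, x2^2, x1, x2)
w0 = a * a + b * b - r * r   # bias term

random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-3, 3), random.uniform(-3, 3)
    inside = (x1 - a) ** 2 + (x2 - b) ** 2 < r * r
    score = w[0] * x1 * x1 + w[1] * x2 * x2 + w[2] * x1 + w[3] * x2 + w0
    assert (score < 0) == inside  # linear decision in feature space matches the circle
```

The same expansion handles an ellipse aligned with the axes; a rotated ellipse additionally needs the cross-term feature $x_1 x_2$.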

Consider the following set of examples, each with six inputs and one target output:
-| | | | | | | | | | | | | | | | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -| $\textbf{x}_2$ | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | -| $\textbf{x}_3$ | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | -| $\textbf{x}_5$ | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | -| $\textbf{x}_6$ | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | -| $\textbf{T}$ | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | + + +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & A_5 & A_6 & A_7 & A_8 & A_9 & A_{10} & A_{11} & A_{12} & A_{13} & A_{14} \\ + \hline + \textbf{x}_1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \textbf{x}_2 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 \\ + \textbf{x}_3 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\ + \textbf{x}_4 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\ + \textbf{x}_5 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ + \textbf{x}_6 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 \\ + \textbf{T} & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \hline +\end{array} +$$ + 1. Run the perceptron learning rule on these data and show the @@ -210,15 +218,23 @@

Consider the following set of examples, each with six inputs and one target output:
-| | | | | | | | | | | | | | | | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -| $\textbf{x}_2$ | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | -| $\textbf{x}_3$ | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | -| $\textbf{x}_5$ | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | -| $\textbf{x}_6$ | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | -| $\textbf{T}$ | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | + + +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & A_5 & A_6 & A_7 & A_8 & A_9 & A_{10} & A_{11} & A_{12} & A_{13} & A_{14} \\ + \hline + \textbf{x}_1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \textbf{x}_2 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 \\ + \textbf{x}_3 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\ + \textbf{x}_4 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\ + \textbf{x}_5 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ + \textbf{x}_6 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 \\ + \textbf{T} & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \hline +\end{array} +$$ + 1. Run the perceptron learning rule on these data and show the diff --git a/_site/concept-learning-exercises/ex_7/index.html b/_site/concept-learning-exercises/ex_7/index.html index 7cc82fff80..bb8ad7905e 100644 --- a/_site/concept-learning-exercises/ex_7/index.html +++ b/_site/concept-learning-exercises/ex_7/index.html @@ -166,7 +166,7 @@
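Part 1 of the exercise can be run mechanically. The sketch below uses a 0/1 threshold perceptron with a bias input and learning rate 0.1 (both arbitrary choices, not specified by the exercise), stopping early if an epoch makes no errors; it reports training accuracy rather than asserting convergence:

```python
# Columns of the table above are the 14 examples; rows x1..x6 are inputs, T the target.
X_rows = [
    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1],
    [1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1],
    [0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0],
]
T = [1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0]

# Transpose into per-example input vectors, prepending a constant bias input.
examples = [([1] + [X_rows[i][j] for i in range(6)], T[j]) for j in range(14)]

w = [0.0] * 7
alpha = 0.1
for _ in range(100):                     # epoch limit; stops early if no mistakes
    mistakes = 0
    for x, t in examples:
        y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        if y != t:
            mistakes += 1
            w = [wi + alpha * (t - y) * xi for wi, xi in zip(w, x)]
    if mistakes == 0:
        break

train_acc = sum((1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0) == t
                for x, t in examples) / 14
print(train_acc)
```

If the data are linearly separable, the loop exits early with `train_acc == 1.0`; otherwise the final weights depend on where training stops.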

-\[nonnegative-gain-exercise\]Suppose that an attribute splits the set of +Suppose that an attribute splits the set of examples $E$ into subsets $E_k$ and that each subset has $p_k$ positive examples and $n_k$ negative examples. Show that the attribute has strictly positive information gain unless the ratio @@ -191,7 +191,7 @@

-\[nonnegative-gain-exercise\]Suppose that an attribute splits the set of +Suppose that an attribute splits the set of examples $E$ into subsets $E_k$ and that each subset has $p_k$ positive examples and $n_k$ negative examples. Show that the attribute has strictly positive information gain unless the ratio diff --git a/_site/concept-learning-exercises/ex_8/index.html b/_site/concept-learning-exercises/ex_8/index.html index b5b721d2df..21f3adc1d7 100644 --- a/_site/concept-learning-exercises/ex_8/index.html +++ b/_site/concept-learning-exercises/ex_8/index.html @@ -169,15 +169,17 @@
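A quick numeric check of this claim, under illustrative counts (helper names `b` and `gain` are ad hoc): a split whose subsets all keep the parent ratio $p/(p+n)$ has zero gain, while any other split has strictly positive gain.

```python
import math

def b(q):
    # entropy of a Boolean variable that is true with probability q
    return 0.0 if q in (0.0, 1.0) else -(q * math.log2(q) + (1 - q) * math.log2(1 - q))

def gain(subsets):
    # subsets: list of (p_k, n_k) counts for each child E_k
    p = sum(pk for pk, nk in subsets)
    n = sum(nk for pk, nk in subsets)
    remainder = sum((pk + nk) / (p + n) * b(pk / (pk + nk)) for pk, nk in subsets)
    return b(p / (p + n)) - remainder

print(abs(gain([(2, 4), (1, 2)])) < 1e-12)  # both subsets have ratio 1/3 -> True (gain is zero)
print(gain([(3, 1), (0, 2)]) > 0)           # ratios differ -> True (gain strictly positive)
```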

Consider the following data set composed of three binary input attributes ($A_1, A_2$, and $A_3$) and one binary output:
-| $\quad \textbf{Example}$ | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad Output\space y$ | -| --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | -| $\textbf{x}_2$ | 1 | 0 | 1 | 0 | -| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | -| $\textbf{x}_4$ | 1 | 1 | 1 | 1 | -| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | - - +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & Output\space y \\ + \hline \textbf{x}_1 & 1 & 0 & 0 & 0 \\ + \textbf{x}_2 & 1 & 0 & 1 & 0 \\ + \textbf{x}_3 & 0 & 1 & 0 & 0 \\ + \textbf{x}_4 & 1 & 1 & 1 & 1 \\ + \textbf{x}_5 & 1 & 1 & 0 & 1 \\ + \hline +\end{array} +$$ Use the algorithm in Figure DTL-algorithm (page DTL-algorithm) to learn a decision tree for these data. Show the computations made to determine the attribute to split at each node. @@ -204,15 +206,17 @@

Consider the following data set composed of three binary input attributes ($A_1, A_2$, and $A_3$) and one binary output:
-| $\quad \textbf{Example}$ | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad Output\space y$ | -| --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | -| $\textbf{x}_2$ | 1 | 0 | 1 | 0 | -| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | -| $\textbf{x}_4$ | 1 | 1 | 1 | 1 | -| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | - - +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & Output\space y \\ + \hline \textbf{x}_1 & 1 & 0 & 0 & 0 \\ + \textbf{x}_2 & 1 & 0 & 1 & 0 \\ + \textbf{x}_3 & 0 & 1 & 0 & 0 \\ + \textbf{x}_4 & 1 & 1 & 1 & 1 \\ + \textbf{x}_5 & 1 & 1 & 0 & 1 \\ + \hline +\end{array} +$$ Use the algorithm in Figure DTL-algorithm (page DTL-algorithm) to learn a decision tree for these data. Show the computations made to determine the attribute to split at each node. diff --git a/_site/concept-learning-exercises/index.html b/_site/concept-learning-exercises/index.html index 3a4816ce51..90206d1642 100644 --- a/_site/concept-learning-exercises/index.html +++ b/_site/concept-learning-exercises/index.html @@ -267,7 +267,7 @@
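The gain computations the exercise asks for can be checked mechanically. The sketch below (ad hoc helper names, standard Boolean entropy) scores each attribute on the five examples above; on these numbers $A_2$ has the largest gain and so is chosen at the root.

```python
import math

def b(q):
    # entropy of a Boolean variable that is true with probability q
    if q in (0.0, 1.0):
        return 0.0
    return -(q * math.log2(q) + (1 - q) * math.log2(1 - q))

def gain(examples, attr):
    p = sum(e["y"] for e in examples)
    total = len(examples)
    remainder = 0.0
    for v in (0, 1):
        subset = [e for e in examples if e[attr] == v]
        if subset:
            pk = sum(e["y"] for e in subset)
            remainder += len(subset) / total * b(pk / len(subset))
    return b(p / total) - remainder

data = [
    {"A1": 1, "A2": 0, "A3": 0, "y": 0},
    {"A1": 1, "A2": 0, "A3": 1, "y": 0},
    {"A1": 0, "A2": 1, "A3": 0, "y": 0},
    {"A1": 1, "A2": 1, "A3": 1, "y": 1},
    {"A1": 1, "A2": 1, "A3": 0, "y": 1},
]
for a in ("A1", "A2", "A3"):
    print(a, round(gain(data, a), 3))  # A2 has the largest gain
```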

18. Learning from Examples

-\[nonnegative-gain-exercise\]Suppose that an attribute splits the set of +Suppose that an attribute splits the set of examples $E$ into subsets $E_k$ and that each subset has $p_k$ positive examples and $n_k$ negative examples. Show that the attribute has strictly positive information gain unless the ratio @@ -288,15 +288,17 @@

18. Learning from Examples

Consider the following data set composed of three binary input attributes ($A_1, A_2$, and $A_3$) and one binary output:
-| $\quad \textbf{Example}$ | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad Output\space y$ | -| --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | -| $\textbf{x}_2$ | 1 | 0 | 1 | 0 | -| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | -| $\textbf{x}_4$ | 1 | 1 | 1 | 1 | -| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | - - +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & Output\space y \\ + \hline \textbf{x}_1 & 1 & 0 & 0 & 0 \\ + \textbf{x}_2 & 1 & 0 & 1 & 0 \\ + \textbf{x}_3 & 0 & 1 & 0 & 0 \\ + \textbf{x}_4 & 1 & 1 & 1 & 1 \\ + \textbf{x}_5 & 1 & 1 & 0 & 1 \\ + \hline +\end{array} +$$ Use the algorithm in Figure DTL-algorithm (page DTL-algorithm) to learn a decision tree for these data. Show the computations made to determine the attribute to split at each node. @@ -490,17 +492,20 @@

18. Learning from Examples

classify the same number of examples, then break the tie using attributes with lower index numbers (e.g., select $A_1$ over $A_2$).
- -| | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad A_y\quad$ | $\quad y\quad$ | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | 1 | -| $\textbf{x}_2$ | 1 | 0 | 1 | 1 | 1 | -| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 1 | 0 | 0 | -| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | 1 | -| $\textbf{x}_6$ | 0 | 1 | 0 | 1 | 0 | -| $\textbf{x}_7$ | 0 | 0 | 1 | 1 | 1 | -| $\textbf{x}_8$ | 0 | 0 | 1 | 0 | 0 | +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & y \\ + \hline \textbf{x}_1 & 1 & 0 & 0 & 0 & 1 \\ + \textbf{x}_2 & 1 & 0 & 1 & 1 & 1 \\ + \textbf{x}_3 & 0 & 1 & 0 & 0 & 1 \\ + \textbf{x}_4 & 0 & 1 & 1 & 0 & 0 \\ + \textbf{x}_5 & 1 & 1 & 0 & 1 & 1 \\ + \textbf{x}_6 & 0 & 1 & 0 & 1 & 0 \\ + \textbf{x}_7 & 0 & 0 & 1 & 1 & 1 \\ + \textbf{x}_8 & 0 & 0 & 1 & 0 & 0 \\ + \hline +\end{array} +$$

@@ -593,7 +598,7 @@

18. Learning from Examples

-Figure <ahref=""#">kernel-machine-figure</a> +Figure kernel-machine-figure showed how a circle at the origin can be linearly separated by mapping from the features $(x_1, x_2)$ to the two dimensions $(x_1^2, x_2^2)$. But what if the circle is not located at the origin? What if it is an @@ -716,15 +721,23 @@

18. Learning from Examples

Consider the following set of examples, each with six inputs and one target output:
-| | | | | | | | | | | | | | | | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -| $\textbf{x}_2$ | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | -| $\textbf{x}_3$ | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | -| $\textbf{x}_5$ | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | -| $\textbf{x}_6$ | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | -| $\textbf{T}$ | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | + + +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & A_5 & A_6 & A_7 & A_8 & A_9 & A_{10} & A_{11} & A_{12} & A_{13} & A_{14} \\ + \hline + \textbf{x}_1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \textbf{x}_2 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 \\ + \textbf{x}_3 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\ + \textbf{x}_4 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\ + \textbf{x}_5 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ + \textbf{x}_6 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 \\ + \textbf{T} & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \hline +\end{array} +$$ + 1. Run the perceptron learning rule on these data and show the diff --git a/_site/csp-exercises/ex_14/index.html b/_site/csp-exercises/ex_14/index.html index 03e02ee885..41b0fea376 100644 --- a/_site/csp-exercises/ex_14/index.html +++ b/_site/csp-exercises/ex_14/index.html @@ -166,8 +166,8 @@

-AC-3 puts back on the queue *every* arc -($X_{k}, X_{i}$) whenever *any* value is deleted from the +AC-3 puts back on the queue every arc +($X_{k}, X_{i}$) whenever any value is deleted from the domain of $X_{i}$, even if each value of $X_{k}$ is consistent with several remaining values of $X_{i}$. Suppose that, for every arc ($X_{k}, X_{i}$), we keep track of the number of remaining values of @@ -194,8 +194,8 @@

-AC-3 puts back on the queue *every* arc -($X_{k}, X_{i}$) whenever *any* value is deleted from the +AC-3 puts back on the queue every arc +($X_{k}, X_{i}$) whenever any value is deleted from the domain of $X_{i}$, even if each value of $X_{k}$ is consistent with several remaining values of $X_{i}$. Suppose that, for every arc ($X_{k}, X_{i}$), we keep track of the number of remaining values of diff --git a/_site/csp-exercises/index.html b/_site/csp-exercises/index.html index f798c645ac..3d362342a0 100644 --- a/_site/csp-exercises/index.html +++ b/_site/csp-exercises/index.html @@ -488,8 +488,8 @@
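For reference, plain AC-3 looks like the sketch below (a minimal version over explicit binary constraints; all names are illustrative). The refinement the exercise describes would keep, for each arc $(X_k, X_i)$, a count of each value's remaining supports, and re-queue the arc only when a deletion in $X_i$ drives some count to zero.

```python
from collections import deque

def ac3(domains, constraints):
    # constraints maps (Xi, Xj) -> predicate(vi, vj); arcs are directed.
    queue = deque(constraints)
    while queue:
        xi, xj = queue.popleft()
        pred = constraints[(xi, xj)]
        keep = [v for v in domains[xi] if any(pred(v, w) for w in domains[xj])]
        if len(keep) < len(domains[xi]):      # some value of Xi was deleted
            domains[xi] = keep
            if not keep:
                return False                  # empty domain: inconsistent
            # naive AC-3 step: re-queue every arc into Xi (except from Xj)
            queue.extend(arc for arc in constraints
                         if arc[1] == xi and arc[0] != xj)
    return True

# Toy CSP: X < Y and Y < Z, all domains {1, 2, 3}.
doms = {"X": [1, 2, 3], "Y": [1, 2, 3], "Z": [1, 2, 3]}
cons = {
    ("X", "Y"): lambda x, y: x < y,
    ("Y", "X"): lambda y, x: y > x,
    ("Y", "Z"): lambda y, z: y < z,
    ("Z", "Y"): lambda z, y: z > y,
}
ok = ac3(doms, cons)
print(ok, doms)  # arc consistency alone solves this one: X=1, Y=2, Z=3
```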

6. Constraint Satisfaction Problems<

-AC-3 puts back on the queue *every* arc -($X_{k}, X_{i}$) whenever *any* value is deleted from the +AC-3 puts back on the queue every arc +($X_{k}, X_{i}$) whenever any value is deleted from the domain of $X_{i}$, even if each value of $X_{k}$ is consistent with several remaining values of $X_{i}$. Suppose that, for every arc ($X_{k}, X_{i}$), we keep track of the number of remaining values of diff --git a/_site/dbn-exercises/ex_12/index.html b/_site/dbn-exercises/ex_12/index.html index e69eb61e00..928ac25f27 100644 --- a/_site/dbn-exercises/ex_12/index.html +++ b/_site/dbn-exercises/ex_12/index.html @@ -172,24 +172,24 @@

a series of distinct maneuvers that the missile may attempt to track. A Bayesian network representation of such a switching Kalman filter model is shown in -Figure switching-kf-figure. +Figure switching-kf-figure.

1. Suppose that the discrete state $S_t$ has $k$ possible values and that the prior continuous state estimate - $${\textbf{P}}(\textbf{X}_0)$$ is a multivariate + ${\textbf{P}}(\textbf{X}_0)$ is a multivariate Gaussian distribution. Show that the prediction - $${\textbf{P}}(\textbf{X}_1)$$ is a mixture of + ${\textbf{P}}(\textbf{X}_1)$ is a mixture of Gaussians—that is, a weighted sum of Gaussians such - that the weights sum to 1. + that the weights sum to 1.

2. Show that if the current continuous state estimate - $${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$$ is a mixture of $m$ Gaussians, + ${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$ is a mixture of $m$ Gaussians, then in the general case the updated state estimate - $${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$$ will be a mixture of - $km$ Gaussians. + ${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$ will be a mixture of + $km$ Gaussians.

3. What aspect of the temporal process do the weights in the Gaussian - mixture represent? + mixture represent?

The results in (a) and (b) show that the representation of the posterior grows without limit even for switching Kalman filters, which are among @@ -220,24 +220,24 @@

a series of distinct maneuvers that the missile may attempt to track. A Bayesian network representation of such a switching Kalman filter model is shown in -Figure switching-kf-figure. +Figure switching-kf-figure.

1. Suppose that the discrete state $S_t$ has $k$ possible values and that the prior continuous state estimate - $${\textbf{P}}(\textbf{X}_0)$$ is a multivariate + ${\textbf{P}}(\textbf{X}_0)$ is a multivariate Gaussian distribution. Show that the prediction - $${\textbf{P}}(\textbf{X}_1)$$ is a mixture of + ${\textbf{P}}(\textbf{X}_1)$ is a mixture of Gaussians—that is, a weighted sum of Gaussians such - that the weights sum to 1. + that the weights sum to 1.

2. Show that if the current continuous state estimate - $${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$$ is a mixture of $m$ Gaussians, + ${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$ is a mixture of $m$ Gaussians, then in the general case the updated state estimate - $${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$$ will be a mixture of - $km$ Gaussians. + ${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$ will be a mixture of + $km$ Gaussians.

3. What aspect of the temporal process do the weights in the Gaussian - mixture represent? + mixture represent?

The results in (a) and (b) show that the representation of the posterior grows without limit even for switching Kalman filters, which are among diff --git a/_site/dbn-exercises/ex_17/index.html b/_site/dbn-exercises/ex_17/index.html index 70981f7d19..9c11984733 100644 --- a/_site/dbn-exercises/ex_17/index.html +++ b/_site/dbn-exercises/ex_17/index.html @@ -169,22 +169,16 @@
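The $km$ growth in part (b) is easy to see mechanically. The sketch below (all parameter values illustrative; 1-D linear-Gaussian transition per mode) pushes an $m$-component mixture through $k$ discrete modes in one prediction step, leaving $km$ weighted components whose weights renormalize to 1.

```python
def predict(mixture, modes):
    # mixture: [(weight, mean, var)]; modes: [(mode_prob, a, q)] with
    # per-mode linear-Gaussian transition x' = a*x + noise(variance q)
    out = [(w * p, a * mu, a * a * var + q)
           for (w, mu, var) in mixture
           for (p, a, q) in modes]
    total = sum(w for w, _, _ in out)
    return [(w / total, mu, var) for (w, mu, var) in out]

prior = [(0.7, 0.0, 1.0), (0.3, 2.0, 0.5)]   # m = 2 Gaussian components
modes = [(0.6, 1.0, 0.2), (0.4, 0.5, 1.0)]   # k = 2 maneuver modes
post = predict(prior, modes)
print(len(post))                             # k*m = 4
print(round(sum(w for w, _, _ in post), 9))  # 1.0
```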

For the DBN specified in Exercise sleep1-exercise and for the evidence values
-$$ -\textbf{e}_1 = not\space red\space eyes,\space not\space sleeping\space in\space class -$$ -$$ -\textbf{e}_2 = red\space eyes,\space not\space sleeping\space in\space class -$$ -$$ -\textbf{e}_3 = red\space eyes,\space sleeping\space in\space class -$$ +$\textbf{e}_1 = not\space red\space eyes,\space not\space sleeping\space in\space class$
+$\textbf{e}_2 = red\space eyes,\space not\space sleeping\space in\space class$
+$\textbf{e}_3 = red\space eyes,\space sleeping\space in\space class$
perform the following computations:
-1. State estimation: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:t})$$ for each +1. State estimation: Compute $P({EnoughSleep}_t | \textbf{e}_{1:t})$ for each of $t = 1,2,3$.
-2. Smoothing: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:3})$$ for each of +2. Smoothing: Compute $P({EnoughSleep}_t | \textbf{e}_{1:3})$ for each of $t = 1,2,3$.
3. Compare the filtered and smoothed probabilities for $t=1$ and $t=2$.
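The filtering and smoothing passes that parts 1 and 2 call for can be sketched for a generic two-state chain as below. The transition and sensor numbers are placeholders: the actual CPTs come from Exercise sleep1-exercise, which is not reproduced on this page.

```python
def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def forward(f, ev, trans, sensor):
    # f = P(X_t | e_1:t) -> P(X_{t+1} | e_1:t+1) given evidence value ev
    pred = [sum(trans[i][j] * f[i] for i in range(2)) for j in range(2)]
    return normalize([sensor[j][ev] * pred[j] for j in range(2)])

def backward(b, ev, trans, sensor):
    # b = P(e_{t+1:T} | X_t) pulled one step earlier through evidence ev
    return [sum(trans[i][j] * sensor[j][ev] * b[j] for j in range(2))
            for i in range(2)]

trans = [[0.8, 0.2], [0.3, 0.7]]    # placeholder P(X_{t+1} | X_t)
sensor = [[0.9, 0.1], [0.2, 0.8]]   # placeholder P(e_t | X_t)
evidence = [0, 0, 1]                # three observed evidence values

f, filtered = [0.5, 0.5], []
for e in evidence:                  # part 1: state estimation (filtering)
    f = forward(f, e, trans, sensor)
    filtered.append(f)

b, smoothed = [1.0, 1.0], [None] * len(evidence)
for t in range(len(evidence) - 1, -1, -1):   # part 2: smoothing
    smoothed[t] = normalize([filtered[t][i] * b[i] for i in range(2)])
    b = backward(b, evidence[t], trans, sensor)

print([round(p[0], 3) for p in filtered])
print([round(p[0], 3) for p in smoothed])
```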
@@ -211,22 +205,16 @@

For the DBN specified in Exercise sleep1-exercise and for the evidence values
-$$ -\textbf{e}_1 = not\space red\space eyes,\space not\space sleeping\space in\space class -$$ -$$ -\textbf{e}_2 = red\space eyes,\space not\space sleeping\space in\space class -$$ -$$ -\textbf{e}_3 = red\space eyes,\space sleeping\space in\space class -$$ +$\textbf{e}_1 = not\space red\space eyes,\space not\space sleeping\space in\space class$
+$\textbf{e}_2 = red\space eyes,\space not\space sleeping\space in\space class$
+$\textbf{e}_3 = red\space eyes,\space sleeping\space in\space class$
perform the following computations:
-1. State estimation: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:t})$$ for each +1. State estimation: Compute $P({EnoughSleep}_t | \textbf{e}_{1:t})$ for each of $t = 1,2,3$.
-2. Smoothing: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:3})$$ for each of +2. Smoothing: Compute $P({EnoughSleep}_t | \textbf{e}_{1:3})$ for each of $t = 1,2,3$.
3. Compare the filtered and smoothed probabilities for $t=1$ and $t=2$.
diff --git a/_site/dbn-exercises/ex_3/index.html b/_site/dbn-exercises/ex_3/index.html index cfc93a09bd..c4558b7ce2 100644 --- a/_site/dbn-exercises/ex_3/index.html +++ b/_site/dbn-exercises/ex_3/index.html @@ -169,16 +169,16 @@

This exercise develops a space-efficient variant of the forward–backward algorithm described in Figure forward-backward-algorithm (page forward-backward-algorithm). -We wish to compute $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$$ for -$$k=1,\ldots ,t$$. This will be done with a divide-and-conquer +We wish to compute $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$ for +$k=1,\ldots ,t$. This will be done with a divide-and-conquer approach.
1. Suppose, for simplicity, that $t$ is odd, and let the halfway point - be $h=(t+1)/2$. Show that $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $$ + be $h=(t+1)/2$. Show that $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $ can be computed for $k=1,\ldots ,h$ given just the initial forward message - $$\textbf{f}_{1:0}$$, the backward message $$\textbf{b}_{h+1:t}$$, and the evidence - $$\textbf{e}_{1:h}$$.
+ $\textbf{f}_{1:0}$, the backward message $\textbf{b}_{h+1:t}$, and the evidence + $\textbf{e}_{1:h}$.
2. Show a similar result for the second half of the sequence.
@@ -214,16 +214,16 @@

This exercise develops a space-efficient variant of the forward–backward algorithm described in Figure forward-backward-algorithm (page forward-backward-algorithm). -We wish to compute $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$$ for -$$k=1,\ldots ,t$$. This will be done with a divide-and-conquer +We wish to compute $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$ for +$k=1,\ldots ,t$. This will be done with a divide-and-conquer approach.
1. Suppose, for simplicity, that $t$ is odd, and let the halfway point - be $h=(t+1)/2$. Show that $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $$ + be $h=(t+1)/2$. Show that $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $ can be computed for $k=1,\ldots ,h$ given just the initial forward message - $$\textbf{f}_{1:0}$$, the backward message $$\textbf{b}_{h+1:t}$$, and the evidence - $$\textbf{e}_{1:h}$$.
+ $\textbf{f}_{1:0}$, the backward message $\textbf{b}_{h+1:t}$, and the evidence + $\textbf{e}_{1:h}$.
2. Show a similar result for the second half of the sequence.
diff --git a/_site/dbn-exercises/ex_7/index.html b/_site/dbn-exercises/ex_7/index.html index b5bee685c1..35b5e5fdac 100644 --- a/_site/dbn-exercises/ex_7/index.html +++ b/_site/dbn-exercises/ex_7/index.html @@ -171,7 +171,7 @@

an equal probability of moving to any neighboring square. What if those assumptions are wrong? Suppose that the initial location is actually chosen uniformly from the northwest quadrant of the room and the action -actually tends to move southeast\[hmm-robot-southeast-page\]. Keeping +actually tends to move southeast. Keeping the HMM model fixed, explore the effect on localization and path accuracy as the southeasterly tendency increases, for different values of $\epsilon$. @@ -200,7 +200,7 @@

an equal probability of moving to any neighboring square. What if those assumptions are wrong? Suppose that the initial location is actually chosen uniformly from the northwest quadrant of the room and the action -actually tends to move southeast\[hmm-robot-southeast-page\]. Keeping +actually tends to move southeast. Keeping the HMM model fixed, explore the effect on localization and path accuracy as the southeasterly tendency increases, for different values of $\epsilon$. diff --git a/_site/dbn-exercises/index.html b/_site/dbn-exercises/index.html index f0620bf9bb..83c45b5309 100644 --- a/_site/dbn-exercises/index.html +++ b/_site/dbn-exercises/index.html @@ -212,16 +212,16 @@

15. Probabilistic Reasoning over T This exercise develops a space-efficient variant of the forward–backward algorithm described in Figure forward-backward-algorithm (page forward-backward-algorithm). -We wish to compute $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$$ for -$$k=1,\ldots ,t$$. This will be done with a divide-and-conquer +We wish to compute $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$ for +$k=1,\ldots ,t$. This will be done with a divide-and-conquer approach.
1. Suppose, for simplicity, that $t$ is odd, and let the halfway point - be $h=(t+1)/2$. Show that $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $$ + be $h=(t+1)/2$. Show that $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $ can be computed for $k=1,\ldots ,h$ given just the initial forward message - $$\textbf{f}_{1:0}$$, the backward message $$\textbf{b}_{h+1:t}$$, and the evidence - $$\textbf{e}_{1:h}$$.
+ $\textbf{f}_{1:0}$, the backward message $\textbf{b}_{h+1:t}$, and the evidence + $\textbf{e}_{1:h}$.
2. Show a similar result for the second half of the sequence.
@@ -311,7 +311,7 @@

15. Probabilistic Reasoning over T an equal probability of moving to any neighboring square. What if those assumptions are wrong? Suppose that the initial location is actually chosen uniformly from the northwest quadrant of the room and the action -actually tends to move southeast\[hmm-robot-southeast-page\]. Keeping +actually tends to move southeast. Keeping the HMM model fixed, explore the effect on localization and path accuracy as the southeasterly tendency increases, for different values of $\epsilon$. @@ -444,24 +444,24 @@

15. Probabilistic Reasoning over T a series of distinct maneuvers that the missile may attempt to track. A Bayesian network representation of such a switching Kalman filter model is shown in -Figure switching-kf-figure. +Figure switching-kf-figure.

1. Suppose that the discrete state $S_t$ has $k$ possible values and that the prior continuous state estimate - $${\textbf{P}}(\textbf{X}_0)$$ is a multivariate + ${\textbf{P}}(\textbf{X}_0)$ is a multivariate Gaussian distribution. Show that the prediction - $${\textbf{P}}(\textbf{X}_1)$$ is a mixture of + ${\textbf{P}}(\textbf{X}_1)$ is a mixture of Gaussians—that is, a weighted sum of Gaussians such - that the weights sum to 1. + that the weights sum to 1.

2. Show that if the current continuous state estimate - $${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$$ is a mixture of $m$ Gaussians, + ${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$ is a mixture of $m$ Gaussians, then in the general case the updated state estimate - $${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$$ will be a mixture of - $km$ Gaussians. + ${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$ will be a mixture of + $km$ Gaussians.

3. What aspect of the temporal process do the weights in the Gaussian - mixture represent? + mixture represent?

The results in (a) and (b) show that the representation of the posterior grows without limit even for switching Kalman filters, which are among @@ -597,22 +597,16 @@

15. Probabilistic Reasoning over T For the DBN specified in Exercise sleep1-exercise and for the evidence values
-$$ -\textbf{e}_1 = not\space red\space eyes,\space not\space sleeping\space in\space class -$$ -$$ -\textbf{e}_2 = red\space eyes,\space not\space sleeping\space in\space class -$$ -$$ -\textbf{e}_3 = red\space eyes,\space sleeping\space in\space class -$$ +$\textbf{e}_1 = not\space red\space eyes,\space not\space sleeping\space in\space class$
+$\textbf{e}_2 = red\space eyes,\space not\space sleeping\space in\space class$
+$\textbf{e}_3 = red\space eyes,\space sleeping\space in\space class$
perform the following computations:
-1. State estimation: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:t})$$ for each +1. State estimation: Compute $P({EnoughSleep}_t | \textbf{e}_{1:t})$ for each of $t = 1,2,3$.
-2. Smoothing: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:3})$$ for each of +2. Smoothing: Compute $P({EnoughSleep}_t | \textbf{e}_{1:3})$ for each of $t = 1,2,3$.
3. Compare the filtered and smoothed probabilities for $t=1$ and $t=2$.
diff --git a/_site/decision-theory-exercises/ex_14/index.html b/_site/decision-theory-exercises/ex_14/index.html index a271de3921..0068913b3a 100644 --- a/_site/decision-theory-exercises/ex_14/index.html +++ b/_site/decision-theory-exercises/ex_14/index.html @@ -174,16 +174,16 @@

same units as $x$) becomes larger, the individual becomes less risk-averse.
-1. Assume Mary has an exponential utility function with $$R = \$500$$.
-    Mary is given the choice between receiving $$\$500$$ with certainty
+1. Assume Mary has an exponential utility function with $R = \$500$.
+    Mary is given the choice between receiving $\$500$ with certainty
     (probability 1) or participating in a lottery which has a 60%
     probability of winning \$5000 and a 40% probability of winning
     nothing. Assuming Mary acts rationally, which option would she
     choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% - probability of winning $$\$500$$ and a 50% probability of winning + probability of winning $\$500$ and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to @@ -216,16 +216,16 @@

same units as $x$) becomes larger, the individual becomes less risk-averse.
-1. Assume Mary has an exponential utility function with $$R = \$500$$.
-    Mary is given the choice between receiving $$\$500$$ with certainty
+1. Assume Mary has an exponential utility function with $R = \$500$.
+    Mary is given the choice between receiving $\$500$ with certainty
     (probability 1) or participating in a lottery which has a 60%
     probability of winning \$5000 and a 40% probability of winning
     nothing. Assuming Mary acts rationally, which option would she
     choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% - probability of winning $$\$500$$ and a 50% probability of winning + probability of winning $\$500$ and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to diff --git a/_site/decision-theory-exercises/ex_15/index.html b/_site/decision-theory-exercises/ex_15/index.html index a537dbd3dc..4815e348d0 100644 --- a/_site/decision-theory-exercises/ex_15/index.html +++ b/_site/decision-theory-exercises/ex_15/index.html @@ -175,13 +175,13 @@
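Part 2 can also be solved numerically. The sketch below bisects on $R$ for $U(x) = 1 - e^{-x/R}$ until the certain \$100 and the 50/50 lottery over \$500 have equal expected utility (the bracketing interval is an assumption, checked in the comments); on these numbers $R$ comes out near 152.

```python
import math

def utility_gap(r):
    # U($100 certain) minus EU(50/50 lottery paying $500 or $0)
    u_certain = 1 - math.exp(-100 / r)
    u_lottery = 0.5 * (1 - math.exp(-500 / r))
    return u_certain - u_lottery

lo, hi = 10.0, 1000.0  # gap > 0 at lo, < 0 at hi, so a root lies between
for _ in range(60):
    mid = (lo + hi) / 2
    if utility_gap(mid) > 0:
        lo = mid
    else:
        hi = mid
r = (lo + hi) / 2
print(round(r, 1))  # about 152
```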

risk-averse.
1. Assume Mary has an exponential utility function with $R = \$400$.
-    Mary is given the choice between receiving $$\$400$$ with certainty
+    Mary is given the choice between receiving $\$400$ with certainty
     (probability 1) or participating in a lottery which has a 60%
     probability of winning \$5000 and a 40% probability of winning
     nothing. Assuming Mary acts rationally, which option would she
     choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning \$500 and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an @@ -217,13 +217,13 @@

risk-averse.
1. Assume Mary has an exponential utility function with $R = \$400$.
-    Mary is given the choice between receiving $$\$400$$ with certainty
+    Mary is given the choice between receiving $\$400$ with certainty
     (probability 1) or participating in a lottery which has a 60%
     probability of winning \$5000 and a 40% probability of winning
     nothing. Assuming Mary acts rationally, which option would she
     choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning \$500 and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an diff --git a/_site/decision-theory-exercises/ex_16/index.html b/_site/decision-theory-exercises/ex_16/index.html index 33b11d87d0..de84a9c1b2 100644 --- a/_site/decision-theory-exercises/ex_16/index.html +++ b/_site/decision-theory-exercises/ex_16/index.html @@ -167,9 +167,9 @@

Alex is given the choice between two games. In Game 1, a fair coin is -flipped and if it comes up heads, Alex receives $$\$100$$. If the coin comes +flipped and if it comes up heads, Alex receives $\$100$. If the coin comes up tails, Alex receives nothing. In Game 2, a fair coin is flipped -twice. Each time the coin comes up heads, Alex receives $$\$50$$, and Alex +twice. Each time the coin comes up heads, Alex receives $\$50$, and Alex receives nothing for each coin flip that comes up tails. Assuming that Alex has a monotonically increasing utility function for money in the range \[\$0, \$100\], show mathematically that if Alex prefers Game 2 to @@ -200,9 +200,9 @@

Alex is given the choice between two games. In Game 1, a fair coin is -flipped and if it comes up heads, Alex receives $$\$100$$. If the coin comes +flipped and if it comes up heads, Alex receives $\$100$. If the coin comes up tails, Alex receives nothing. In Game 2, a fair coin is flipped -twice. Each time the coin comes up heads, Alex receives $$\$50$$, and Alex +twice. Each time the coin comes up heads, Alex receives $\$50$, and Alex receives nothing for each coin flip that comes up tails. Assuming that Alex has a monotonically increasing utility function for money in the range \[\$0, \$100\], show mathematically that if Alex prefers Game 2 to diff --git a/_site/decision-theory-exercises/ex_22/index.html b/_site/decision-theory-exercises/ex_22/index.html index 5b91876d93..16ad4c454b 100644 --- a/_site/decision-theory-exercises/ex_22/index.html +++ b/_site/decision-theory-exercises/ex_22/index.html @@ -174,10 +174,10 @@
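A numeric companion to the claim, using one concrete increasing utility, $U(x) = \sqrt{x}$ (an illustrative assumption, not part of the exercise): with this concave $U$, Alex prefers Game 2, and correspondingly $U(50) > (U(0) + U(100))/2$.

```python
import math

def eu_game1(u):
    # one flip: heads pays $100, tails pays $0
    return 0.5 * u(100) + 0.5 * u(0)

def eu_game2(u):
    # two flips, $50 per head: $0 (p 1/4), $50 (p 1/2), $100 (p 1/4)
    return 0.25 * u(0) + 0.5 * u(50) + 0.25 * u(100)

u = math.sqrt
print(eu_game1(u))                    # 5.0
print(round(eu_game2(u), 3))          # 6.036
print(u(50) > (u(0) + u(100)) / 2)    # True
```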

is time to carry out at most one test, and that $t_1$ is the test of $c_1$ and costs \$50.
-A car can be in good shape (quality $$q^+$$) or bad shape (quality $q^-$),
+A car can be in good shape (quality $q^+$) or bad shape (quality $q^-$),
 and the tests might help indicate what shape the car is in. Car $c_1$
-costs \$1,500, and its market value is $$\$2,000$$ if it is in good shape; if
-not, $$\$700$$ in repairs will be needed to make it in good shape. The buyer’s
+costs \$1,500, and its market value is $\$2,000$ if it is in good shape; if
+not, $\$700$ in repairs will be needed to put it in good shape. The buyer’s
 estimate is that $c_1$ has a 70% chance of being in good shape.
1. Draw the decision network that represents this problem.
@@ -188,9 +188,9 @@

fail the test given that the car is in good or bad shape. We have the following information:
- $$P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$$
+ $P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$
- $$P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$$
+ $P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$
Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.
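The Bayes computation requested above works out as follows, a direct transcription of the numbers given (prior $P(q^+) = 0.7$, $P({pass}\mid q^+) = 0.8$, $P({pass}\mid q^-) = 0.35$):

```python
p_good = 0.7
p_pass_good, p_pass_bad = 0.8, 0.35

# total probability of passing, then Bayes' theorem for each outcome
p_pass = p_pass_good * p_good + p_pass_bad * (1 - p_good)
p_good_given_pass = p_pass_good * p_good / p_pass
p_good_given_fail = (1 - p_pass_good) * p_good / (1 - p_pass)

print(round(p_pass, 3))             # 0.665
print(round(p_good_given_pass, 3))  # 0.842
print(round(p_good_given_fail, 3))  # 0.418
```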
@@ -227,10 +227,10 @@

is time to carry out at most one test, and that $t_1$ is the test of $c_1$ and costs \$50.
-A car can be in good shape (quality $$q^+$$) or bad shape (quality $q^-$),
+A car can be in good shape (quality $q^+$) or bad shape (quality $q^-$),
 and the tests might help indicate what shape the car is in. Car $c_1$
-costs \$1,500, and its market value is $$\$2,000$$ if it is in good shape; if
-not, $$\$700$$ in repairs will be needed to make it in good shape. The buyer’s
+costs \$1,500, and its market value is $\$2,000$ if it is in good shape; if
+not, $\$700$ in repairs will be needed to put it in good shape. The buyer’s
 estimate is that $c_1$ has a 70% chance of being in good shape.
1. Draw the decision network that represents this problem.
@@ -241,9 +241,9 @@

fail the test given that the car is in good or bad shape. We have the following information:
- $$P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$$
+ $P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$
- $$P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$$
+ $P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$
Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.
diff --git a/_site/decision-theory-exercises/index.html b/_site/decision-theory-exercises/index.html index 715c62db9e..88a4c54e67 100644 --- a/_site/decision-theory-exercises/index.html +++ b/_site/decision-theory-exercises/index.html @@ -547,16 +547,16 @@

16. Making Simple Decisions

same units as $x$) becomes larger, the individual becomes less risk-averse.
-1. Assume Mary has an exponential utility function with $$R = \$500$$.
-    Mary is given the choice between receiving $$\$500$$ with certainty
+1. Assume Mary has an exponential utility function with $R = \$500$.
+    Mary is given the choice between receiving $\$500$ with certainty
     (probability 1) or participating in a lottery which has a 60%
     probability of winning \$5000 and a 40% probability of winning
     nothing. Assuming Mary acts rationally, which option would she
     choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% - probability of winning $$\$500$$ and a 50% probability of winning + probability of winning $\$500$ and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to @@ -583,13 +583,13 @@

16. Making Simple Decisions

risk-averse.
1. Assume Mary has an exponential utility function with $R = \$400$. - Mary is given the choice between receiving $$\$400$$ with certainty + Mary is given the choice between receiving $\$400$ with certainty (probability 1) or participating in a lottery which has a 60% probability of winning \$5000 and a 40% probability of winning nothing. Assuming Mary acts rationally, which option would she choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning \$500 and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an @@ -610,9 +610,9 @@

16. Making Simple Decisions

Alex is given the choice between two games. In Game 1, a fair coin is -flipped and if it comes up heads, Alex receives $$\$100$$. If the coin comes +flipped and if it comes up heads, Alex receives $\$100$. If the coin comes up tails, Alex receives nothing. In Game 2, a fair coin is flipped -twice. Each time the coin comes up heads, Alex receives $$\$50$$, and Alex +twice. Each time the coin comes up heads, Alex receives $\$50$, and Alex receives nothing for each coin flip that comes up tails. Assuming that Alex has a monotonically increasing utility function for money in the range \[\$0, \$100\], show mathematically that if Alex prefers Game 2 to @@ -757,10 +757,10 @@

16. Making Simple Decisions

is time to carry out at most one test, and that $t_1$ is the test of $c_1$ and costs \$50.
-A car can be in good shape (quality $$q^+$$) or bad shape (quality $q^-$), +A car can be in good shape (quality $q^+$) or bad shape (quality $q^-$), and the tests might help indicate what shape the car is in. Car $c_1$ -costs \$1,500, and its market value is $$\$2,000$$ if it is in good shape; if -not, $$\$700$$ in repairs will be needed to make it in good shape. The buyer’s +costs \$1,500, and its market value is $\$2,000$ if it is in good shape; if +not, $\$700$ in repairs will be needed to make it in good shape. The buyer’s estimate is that $c_1$ has a 70% chance of being in good shape.
1. Draw the decision network that represents this problem.
@@ -771,9 +771,9 @@

16. Making Simple Decisions

fail the test given that the car is in good or bad shape. We have the following information:
- $$P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$$
+ $P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$
- $$P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$$
+ $P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$
Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.
diff --git a/_site/fol-exercises/ex_1/index.html b/_site/fol-exercises/ex_1/index.html index 0a195d04d5..c9679f1313 100644 --- a/_site/fol-exercises/ex_1/index.html +++ b/_site/fol-exercises/ex_1/index.html @@ -175,14 +175,14 @@

two-dimensional structure of the map corresponds to the two-dimensional surface of the area.
-1. Give five examples of *symbols* in the map language.
+1. Give five examples of symbols in the map language.
-2. An *explicit* sentence is a sentence that the creator +2. An explicit sentence is a sentence that the creator of the representation actually writes down. An - *implicit* sentence is a sentence that results from + implicit sentence is a sentence that results from explicit sentences because of properties of the analogical - representation. Give three examples each of *implicit* - and *explicit* sentences in the map language.
+ representation. Give three examples each of implicit + and explicit sentences in the map language.
3. Give three examples of facts about the physical structure of your country that cannot be represented in the map language.
@@ -221,14 +221,14 @@

two-dimensional structure of the map corresponds to the two-dimensional surface of the area.
-1. Give five examples of *symbols* in the map language.
+1. Give five examples of symbols in the map language.
-2. An *explicit* sentence is a sentence that the creator +2. An explicit sentence is a sentence that the creator of the representation actually writes down. An - *implicit* sentence is a sentence that results from + implicit sentence is a sentence that results from explicit sentences because of properties of the analogical - representation. Give three examples each of *implicit* - and *explicit* sentences in the map language.
+ representation. Give three examples each of implicit + and explicit sentences in the map language.
3. Give three examples of facts about the physical structure of your country that cannot be represented in the map language.
diff --git a/_site/fol-exercises/index.html b/_site/fol-exercises/index.html index 3fe8caf9e5..5dd626cf0f 100644 --- a/_site/fol-exercises/index.html +++ b/_site/fol-exercises/index.html @@ -172,14 +172,14 @@

8. First Order Logic

two-dimensional structure of the map corresponds to the two-dimensional surface of the area.
-1. Give five examples of *symbols* in the map language.
+1. Give five examples of symbols in the map language.
-2. An *explicit* sentence is a sentence that the creator +2. An explicit sentence is a sentence that the creator of the representation actually writes down. An - *implicit* sentence is a sentence that results from + implicit sentence is a sentence that results from explicit sentences because of properties of the analogical - representation. Give three examples each of *implicit* - and *explicit* sentences in the map language.
+ representation. Give three examples each of implicit + and explicit sentences in the map language.
3. Give three examples of facts about the physical structure of your country that cannot be represented in the map language.
diff --git a/_site/kr-exercises/ex_25/index.html b/_site/kr-exercises/ex_25/index.html index 78f20da5c0..7f59a650c1 100644 --- a/_site/kr-exercises/ex_25/index.html +++ b/_site/kr-exercises/ex_25/index.html @@ -168,7 +168,7 @@

page description-logic-ex) into first-order logic, and comment on the result:
$$ -And(Man, AtLeast(3,Son), AtMost(2,Daughter),
All(Son,And(Unemployed,Married, All(Spouse,Doctor ))),
All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) +And(Man, AtLeast(3,Son), AtMost(2,Daughter), \\All(Son,And(Unemployed,Married, All(Spouse,Doctor ))), \\All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) $$
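As an illustration of the translation pattern this exercise asks for (with $x$ standing for the described individual; completing the whole expression is left to the exercise), the `AtLeast` and `All` constructors unfold into first-order logic roughly as:

```latex
% AtLeast(3, Son) -- x has at least three distinct sons:
\exists s_1\, s_2\, s_3\;\;
  Son(x,s_1) \land Son(x,s_2) \land Son(x,s_3)
  \land\; s_1 \neq s_2 \land s_1 \neq s_3 \land s_2 \neq s_3

% All(Son, Unemployed) -- every son of x is unemployed:
\forall s\;\; Son(x,s) \Rightarrow Unemployed(s)
```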

@@ -192,7 +192,7 @@

page description-logic-ex) into first-order logic, and comment on the result:
$$ -And(Man, AtLeast(3,Son), AtMost(2,Daughter),
All(Son,And(Unemployed,Married, All(Spouse,Doctor ))),
All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) +And(Man, AtLeast(3,Son), AtMost(2,Daughter), \\All(Son,And(Unemployed,Married, All(Spouse,Doctor ))), \\All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) $$

diff --git a/_site/kr-exercises/index.html b/_site/kr-exercises/index.html index ebcae3dc17..41da9d83ff 100644 --- a/_site/kr-exercises/index.html +++ b/_site/kr-exercises/index.html @@ -828,7 +828,7 @@

12. Knowledge Representation

page description-logic-ex) into first-order logic, and comment on the result:
$$ -And(Man, AtLeast(3,Son), AtMost(2,Daughter),
All(Son,And(Unemployed,Married, All(Spouse,Doctor ))),
All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) +And(Man, AtLeast(3,Son), AtMost(2,Daughter), \\All(Son,And(Unemployed,Married, All(Spouse,Doctor ))), \\All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) $$

diff --git a/_site/logical-inference-exercises/ex_14/index.html b/_site/logical-inference-exercises/ex_14/index.html index 62bf6edd77..56459ef790 100644 --- a/_site/logical-inference-exercises/ex_14/index.html +++ b/_site/logical-inference-exercises/ex_14/index.html @@ -170,7 +170,7 @@

U.S. census data listing the age, city of residence, date of birth, and mother of every person, using social security numbers as identifying constants for each person. Thus, George’s age is given by -${Age}(\mbox443-{65}-{1282}}, {56})$. Which of the following +${Age}(443-65-1282, 56)$. Which of the following indexing schemes S1–S5 enable an efficient solution for which of the queries Q1–Q4 (assuming normal backward chaining)?
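The schemes S1–S5 and queries Q1–Q4 are defined in the book and not repeated here; as a generic sketch of what "indexing" means for this question (the facts below are invented except for George's entry from the text), a hash index keyed on a predicate plus one argument position answers a ground-argument query with a single lookup instead of a scan:

```python
from collections import defaultdict

# Hypothetical fact base: Age(ssn, age) tuples; only George's entry
# ("443-65-1282", 56) comes from the exercise text.
facts = [("Age", "443-65-1282", 56), ("Age", "443-65-1283", 34)]

# Index every fact under (predicate, argument position, argument value).
index = defaultdict(list)
for pred, *args in facts:
    for i, a in enumerate(args):
        index[(pred, i, a)].append((pred, *args))

# A backward-chaining query with a known first argument becomes one lookup.
hits = index[("Age", 0, "443-65-1282")]
```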

@@ -207,7 +207,7 @@

U.S. census data listing the age, city of residence, date of birth, and mother of every person, using social security numbers as identifying constants for each person. Thus, George’s age is given by -${Age}(\mbox443-{65}-{1282}}, {56})$. Which of the following +${Age}(443-65-1282, 56)$. Which of the following indexing schemes S1–S5 enable an efficient solution for which of the queries Q1–Q4 (assuming normal backward chaining)?

diff --git a/_site/logical-inference-exercises/index.html b/_site/logical-inference-exercises/index.html index fa7e1debcb..783b35e3a7 100644 --- a/_site/logical-inference-exercises/index.html +++ b/_site/logical-inference-exercises/index.html @@ -532,7 +532,7 @@

9. Inference in First-Order Logic

U.S. census data listing the age, city of residence, date of birth, and mother of every person, using social security numbers as identifying constants for each person. Thus, George’s age is given by -${Age}(\mbox443-{65}-{1282}}, {56})$. Which of the following +${Age}(443-65-1282, 56)$. Which of the following indexing schemes S1–S5 enable an efficient solution for which of the queries Q1–Q4 (assuming normal backward chaining)?

diff --git a/_site/markdown/10-Classical-Planning/exercises/ex_14/question.md b/_site/markdown/10-Classical-Planning/exercises/ex_14/question.md index 7c1cad8e4f..0dbdbbadc0 100644 --- a/_site/markdown/10-Classical-Planning/exercises/ex_14/question.md +++ b/_site/markdown/10-Classical-Planning/exercises/ex_14/question.md @@ -1,7 +1,6 @@ -Examine the definition of **bidirectional -search** in Chapter search-chapter.
+Examine the definition of bidirectional search in Chapter search-chapter.
1. Would bidirectional state-space search be a good idea for planning?
diff --git a/_site/markdown/12-Knowledge-Representation/exercises/ex_25/question.md b/_site/markdown/12-Knowledge-Representation/exercises/ex_25/question.md index 726d7a9f5c..ce96be4e07 100644 --- a/_site/markdown/12-Knowledge-Representation/exercises/ex_25/question.md +++ b/_site/markdown/12-Knowledge-Representation/exercises/ex_25/question.md @@ -2,5 +2,5 @@ Translate the following description logic expression (from page description-logic-ex) into first-order logic, and comment on the result:
$$ -And(Man, AtLeast(3,Son), AtMost(2,Daughter),
All(Son,And(Unemployed,Married, All(Spouse,Doctor ))),
All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) +And(Man, AtLeast(3,Son), AtMost(2,Daughter), \\All(Son,And(Unemployed,Married, All(Spouse,Doctor ))), \\All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) $$ diff --git a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_13/question.md b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_13/question.md index a9a527564a..e271197dc8 100644 --- a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_13/question.md +++ b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_13/question.md @@ -8,4 +8,4 @@ receiver if at most one bit in the entire message (including the parity bit) has been corrupted. Suppose we want to ensure that the correct message is received with probability at least $1-\delta$. What is the maximum feasible value of $n$? Calculate this value for the case -$\epsilon{{\,=\,}}0.001$, $\delta{{\,=\,}}0.01$. +$\epsilon = 0.001$, $\delta = 0.01$. diff --git a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_21/question.md b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_21/question.md index c60d483638..7300c9a30e 100644 --- a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_21/question.md +++ b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_21/question.md @@ -1,6 +1,6 @@ Show that the statement of conditional independence -$${\textbf{P}}(X,Y {{\,|\,}}Z) = {\textbf{P}}(X{{\,|\,}}Z) {\textbf{P}}(Y{{\,|\,}}Z)$$ +$${\textbf{P}}(X,Y | Z) = {\textbf{P}}(X | Z) {\textbf{P}}(Y | Z)$$ is equivalent to each of the statements -$${\textbf{P}}(X{{\,|\,}}Y,Z) = {\textbf{P}}(X{{\,|\,}}Z) \quad\mbox{and}\quad {\textbf{P}}(Y{{\,|\,}}X,Z) = {\textbf{P}}(Y{{\,|\,}}Z)\ .$$ +$${\textbf{P}}(X | Y,Z) = {\textbf{P}}(X | Z) \quad\mbox{and}\quad {\textbf{P}}(Y | X,Z) = {\textbf{P}}(Y | Z)\ .$$ diff --git a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_4/question.md b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_4/question.md index 
31a84983df..f9f7e5748c 100644 --- a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_4/question.md +++ b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_4/question.md @@ -1,13 +1,13 @@ Would it be rational for an agent to hold the three beliefs -$P(A) {{\,=\,}}{0.4}$, $P(B) {{\,=\,}}{0.3}$, and -$P(A \lor B) {{\,=\,}}{0.5}$? If so, what range of probabilities would +$P(A) = 0.4$, $P(B) = 0.3$, and +$P(A \lor B) = 0.5$? If so, what range of probabilities would be rational for the agent to hold for $A \land B$? Make up a table like the one in Figure de-finetti-table, and show how it supports your argument about rationality. Then draw another version of the table where $P(A \lor B) -{{\,=\,}}{0.7}$. Explain why it is rational to have this probability, += 0.7$. Explain why it is rational to have this probability, even though the table shows one case that is a loss and three that just break even. (Hint: what is Agent 1 committed to about the probability of each of the four cases, especially the case that is a diff --git a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_16/question.md b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_16/question.md index ad74d44252..8a79b06cde 100644 --- a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_16/question.md +++ b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_16/question.md @@ -7,9 +7,9 @@ Consider the Bayes net shown in Figure politics-figure?
5. Suppose we want to add the variable - $P{{\,=\,}}{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.
politics-figure diff --git a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_17/question.md b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_17/question.md index bfa28edf6b..929310e705 100644 --- a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_17/question.md +++ b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_17/question.md @@ -7,9 +7,9 @@ Consider the Bayes net shown in Figure politics-figure?
5. Suppose we want to add the variable - $P{{\,=\,}}{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.
diff --git a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_4/question.md b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_4/question.md index 40b556d51b..93cc8a2b26 100644 --- a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_4/question.md +++ b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_4/question.md @@ -18,7 +18,7 @@ parents of $Y$, and all parents of $Y$ also become parents of $X$.
3. Let the parents of $X$ be $\textbf{U} \cup \textbf{V}$ and the parents of $Y$ be $\textbf{V} \cup \textbf{W}$, where $\textbf{U}$ and $\textbf{W}$ are disjoint. The formulas for the new CPTs after arc reversal are as follows: $$\begin{aligned} - {\textbf{P}}(Y{{\,|\,}}\textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y{{\,|\,}}\textbf{V},\textbf{W}, x) {\textbf{P}}(x{{\,|\,}}\textbf{U}, \textbf{V}) \\ - {\textbf{P}}(X{{\,|\,}}\textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(Y{{\,|\,}}X, \textbf{V}, \textbf{W}) {\textbf{P}}(X{{\,|\,}}\textbf{U}, \textbf{V}) / {\textbf{P}}(Y{{\,|\,}}\textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ + {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y | \textbf{V},\textbf{W}, x) {\textbf{P}}(x | \textbf{U}, \textbf{V}) \\ + {\textbf{P}}(X | \textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(Y | X, \textbf{V}, \textbf{W}) {\textbf{P}}(X | \textbf{U}, \textbf{V}) / {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ Prove that the new network expresses the same joint distribution over all variables as the original network. diff --git a/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_12/question.md b/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_12/question.md index dd8e38ca94..47b8f21c06 100644 --- a/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_12/question.md +++ b/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_12/question.md @@ -6,24 +6,24 @@ system whose behavior switches unpredictably among a set of $k$ distinct a series of distinct maneuvers that the missile may attempt to track. A Bayesian network representation of such a switching Kalman filter model is shown in -Figure switching-kf-figure. +Figure switching-kf-figure.

1. Suppose that the discrete state $S_t$ has $k$ possible values and that the prior continuous state estimate - $${\textbf{P}}(\textbf{X}_0)$$ is a multivariate + ${\textbf{P}}(\textbf{X}_0)$ is a multivariate Gaussian distribution. Show that the prediction - $${\textbf{P}}(\textbf{X}_1)$$ is a mixture of + ${\textbf{P}}(\textbf{X}_1)$ is a mixture of Gaussians—that is, a weighted sum of Gaussians such - that the weights sum to 1. + that the weights sum to 1.

2. Show that if the current continuous state estimate - $${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$$ is a mixture of $m$ Gaussians, + ${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$ is a mixture of $m$ Gaussians, then in the general case the updated state estimate - $${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$$ will be a mixture of - $km$ Gaussians. + ${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$ will be a mixture of + $km$ Gaussians.

3. What aspect of the temporal process do the weights in the Gaussian - mixture represent? + mixture represent?

The results in (a) and (b) show that the representation of the posterior grows without limit even for switching Kalman filters, which are among diff --git a/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_17/question.md b/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_17/question.md index b9475cb885..3505c6a2c2 100644 --- a/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_17/question.md +++ b/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_17/question.md @@ -3,22 +3,16 @@ For the DBN specified in Exercise sleep1-exercise and for the evidence values
-$$ -\textbf{e}_1 = not\space red\space eyes,\space not\space sleeping\space in\space class -$$ -$$ -\textbf{e}_2 = red\space eyes,\space not\space sleeping\space in\space class -$$ -$$ -\textbf{e}_3 = red\space eyes,\space sleeping\space in\space class -$$ +$\textbf{e}_1 = not\space red\space eyes,\space not\space sleeping\space in\space class$
+$\textbf{e}_2 = red\space eyes,\space not\space sleeping\space in\space class$
+$\textbf{e}_3 = red\space eyes,\space sleeping\space in\space class$
perform the following computations:
-1. State estimation: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:t})$$ for each +1. State estimation: Compute $P({EnoughSleep}_t | \textbf{e}_{1:t})$ for each of $t = 1,2,3$.
-2. Smoothing: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:3})$$ for each of +2. Smoothing: Compute $P({EnoughSleep}_t | \textbf{e}_{1:3})$ for each of $t = 1,2,3$.
3. Compare the filtered and smoothed probabilities for $t=1$ and $t=2$.
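A minimal sketch of the filtering and smoothing computations for a two-state DBN like this one (the prior, transition, and sensor numbers in the test are placeholders, not the values specified in sleep1-exercise):

```python
def normalize(v):
    s = v[0] + v[1]
    return (v[0] / s, v[1] / s)

def filter_smooth(prior, trans, likelihoods):
    """prior: P(X_0 = true); trans: (P(x_t | x_{t-1}=true), P(x_t | x_{t-1}=false));
    likelihoods: one (P(e_t | X_t=true), P(e_t | X_t=false)) pair per step."""
    # Forward pass: filtered estimates P(X_t | e_{1:t}).
    fs, f = [], (prior, 1 - prior)
    for lt, lf in likelihoods:
        pt = trans[0] * f[0] + trans[1] * f[1]      # predict
        f = normalize((lt * pt, lf * (1 - pt)))     # update with evidence
        fs.append(f)
    # Backward pass: smoothed estimates P(X_k | e_{1:t}) = alpha f_k * b_{k+1:t}.
    sm, b = [None] * len(fs), (1.0, 1.0)
    for k in range(len(fs) - 1, -1, -1):
        sm[k] = normalize((fs[k][0] * b[0], fs[k][1] * b[1]))
        lt, lf = likelihoods[k]
        b = (trans[0] * lt * b[0] + (1 - trans[0]) * lf * b[1],
             trans[1] * lt * b[0] + (1 - trans[1]) * lf * b[1])
    return fs, sm
```

Plugging in the actual parameters from sleep1-exercise and the three evidence vectors above gives the state-estimation and smoothing answers the exercise asks for.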
diff --git a/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_3/question.md b/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_3/question.md index 2308700c38..c5cd752183 100644 --- a/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_3/question.md +++ b/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_3/question.md @@ -3,16 +3,16 @@ This exercise develops a space-efficient variant of the forward–backward algorithm described in Figure forward-backward-algorithm (page forward-backward-algorithm). -We wish to compute $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$$ for -$$k=1,\ldots ,t$$. This will be done with a divide-and-conquer +We wish to compute $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$ for +$k=1,\ldots ,t$. This will be done with a divide-and-conquer approach.
1. Suppose, for simplicity, that $t$ is odd, and let the halfway point - be $h=(t+1)/2$. Show that $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $$ + be $h=(t+1)/2$. Show that $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $ can be computed for $k=1,\ldots ,h$ given just the initial forward message - $$\textbf{f}_{1:0}$$, the backward message $$\textbf{b}_{h+1:t}$$, and the evidence - $$\textbf{e}_{1:h}$$.
+ $\textbf{f}_{1:0}$, the backward message $\textbf{b}_{h+1:t}$, and the evidence + $\textbf{e}_{1:h}$.
2. Show a similar result for the second half of the sequence.
diff --git a/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_7/question.md b/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_7/question.md index f21e391735..ea3e5d151f 100644 --- a/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_7/question.md +++ b/_site/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_7/question.md @@ -5,7 +5,7 @@ distribution over locations is uniform and the transition model assumes an equal probability of moving to any neighboring square. What if those assumptions are wrong? Suppose that the initial location is actually chosen uniformly from the northwest quadrant of the room and the action -actually tends to move southeast\[hmm-robot-southeast-page\]. Keeping +actually tends to move southeast. Keeping the HMM model fixed, explore the effect on localization and path accuracy as the southeasterly tendency increases, for different values of $\epsilon$. diff --git a/_site/markdown/16-Making-Simple-Decisions/exercises/ex_14/question.md b/_site/markdown/16-Making-Simple-Decisions/exercises/ex_14/question.md index 963aad0725..37bd606b49 100644 --- a/_site/markdown/16-Making-Simple-Decisions/exercises/ex_14/question.md +++ b/_site/markdown/16-Making-Simple-Decisions/exercises/ex_14/question.md @@ -8,16 +8,16 @@ value (EMV) versus some certain payoff. As $R$ (which is measured in the same units as $x$) becomes larger, the individual becomes less risk-averse.
-1. Assume Mary has an exponential utility function with $$R = \$500$$. - Mary is given the choice between receiving $$\$500$$ with certainty +1. Assume Mary has an exponential utility function with $R = \$500$. + Mary is given the choice between receiving $\$500$ with certainty (probability 1) or participating in a lottery which has a 60% probability of winning \$5000 and a 40% probability of winning nothing. Assuming Mary acts rationally, which option would she choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% - probability of winning $$\$500$$ and a 50% probability of winning + probability of winning $\$500$ and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to diff --git a/_site/markdown/16-Making-Simple-Decisions/exercises/ex_15/question.md b/_site/markdown/16-Making-Simple-Decisions/exercises/ex_15/question.md index 1f3a249fed..027610f8a6 100644 --- a/_site/markdown/16-Making-Simple-Decisions/exercises/ex_15/question.md +++ b/_site/markdown/16-Making-Simple-Decisions/exercises/ex_15/question.md @@ -9,13 +9,13 @@ same units as $x$) becomes larger, the individual becomes less risk-averse.
1. Assume Mary has an exponential utility function with $R = \$400$. - Mary is given the choice between receiving $$\$400$$ with certainty + Mary is given the choice between receiving $\$400$ with certainty (probability 1) or participating in a lottery which has a 60% probability of winning \$5000 and a 40% probability of winning nothing. Assuming Mary acts rationally, which option would she choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning \$500 and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an diff --git a/_site/markdown/16-Making-Simple-Decisions/exercises/ex_16/question.md b/_site/markdown/16-Making-Simple-Decisions/exercises/ex_16/question.md index 429f76c1c5..a5b3e29879 100644 --- a/_site/markdown/16-Making-Simple-Decisions/exercises/ex_16/question.md +++ b/_site/markdown/16-Making-Simple-Decisions/exercises/ex_16/question.md @@ -1,9 +1,9 @@ Alex is given the choice between two games. In Game 1, a fair coin is -flipped and if it comes up heads, Alex receives $$\$100$$. If the coin comes +flipped and if it comes up heads, Alex receives $\$100$. If the coin comes up tails, Alex receives nothing. In Game 2, a fair coin is flipped -twice. Each time the coin comes up heads, Alex receives $$\$50$$, and Alex +twice. Each time the coin comes up heads, Alex receives $\$50$, and Alex receives nothing for each coin flip that comes up tails. Assuming that Alex has a monotonically increasing utility function for money in the range \[\$0, \$100\], show mathematically that if Alex prefers Game 2 to diff --git a/_site/markdown/16-Making-Simple-Decisions/exercises/ex_22/question.md b/_site/markdown/16-Making-Simple-Decisions/exercises/ex_22/question.md index 7d87d67331..b6213f9610 100644 --- a/_site/markdown/16-Making-Simple-Decisions/exercises/ex_22/question.md +++ b/_site/markdown/16-Making-Simple-Decisions/exercises/ex_22/question.md @@ -8,10 +8,10 @@ assume that the buyer is deciding whether to buy car $c_1$, that there is time to carry out at most one test, and that $t_1$ is the test of $c_1$ and costs \$50.
-A car can be in good shape (quality $$q^+$$) or bad shape (quality $q^-$), +A car can be in good shape (quality $q^+$) or bad shape (quality $q^-$), and the tests might help indicate what shape the car is in. Car $c_1$ -costs \$1,500, and its market value is $$\$2,000$$ if it is in good shape; if -not, $$\$700$$ in repairs will be needed to make it in good shape. The buyer’s +costs \$1,500, and its market value is $\$2,000$ if it is in good shape; if +not, $\$700$ in repairs will be needed to make it in good shape. The buyer’s estimate is that $c_1$ has a 70% chance of being in good shape.
1. Draw the decision network that represents this problem.
@@ -22,9 +22,9 @@ estimate is that $c_1$ has a 70% chance of being in good shape.
fail the test given that the car is in good or bad shape. We have the following information:
- $$P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$$
+ $P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$
- $$P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$$
+ $P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$
Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.
diff --git a/_site/markdown/17-Making-Complex-Decision/exercises/ex_22/question.md b/_site/markdown/17-Making-Complex-Decision/exercises/ex_22/question.md index 387c1e128c..77bdb05664 100644 --- a/_site/markdown/17-Making-Complex-Decision/exercises/ex_22/question.md +++ b/_site/markdown/17-Making-Complex-Decision/exercises/ex_22/question.md @@ -3,11 +3,17 @@ The following payoff matrix, from @Blinder:1983 by way of Bernstein:1996, shows a game between politicians and the Federal Reserve.
-| | Fed: contract | Fed: do nothing | Fed: expand | -| --- | --- | --- | --- | -| **Pol: contract** | $F=7, P=1$ | $F=9,P=4$ | $F=6,P=6$ | -| **Pol: do nothing** | $F=8, P=2$ | $F=5,P=5$ | $F=4,P=9$ | -| **Pol: expand** | $F=3, P=3$ | $F=2,P=7$ | $F=1,P=8$ | +$$ +\begin{array} + {|r|r|}\hline & Fed: contract & Fed: do nothing & Fed: expand \\ + \hline + Pol: contract & F=7, P=1 & F=9, P=4 & F=6, P=6 \\ + Pol: do nothing & F=8, P=2 & F=5, P=5 & F=4, P=9 \\ + Pol: expand & F=3, P=3 & F=2, P=7 & F=1, P=8\\ + \hline +\end{array} +$$ +
Politicians can expand or contract fiscal policy, while the Fed can expand or contract monetary policy. (And of course either side can diff --git a/_site/markdown/18-Learning-From-Examples/exercises/ex_16/question.md b/_site/markdown/18-Learning-From-Examples/exercises/ex_16/question.md index 35aa26ec95..63d2b9f6dd 100644 --- a/_site/markdown/18-Learning-From-Examples/exercises/ex_16/question.md +++ b/_site/markdown/18-Learning-From-Examples/exercises/ex_16/question.md @@ -8,14 +8,17 @@ correctly. If multiple tests have the same number of attributes and classify the same number of examples, then break the tie using attributes with lower index numbers (e.g., select $A_1$ over $A_2$).
- -| | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad A_y\quad$ | $\quad y\quad$ | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | 1 | -| $\textbf{x}_2$ | 1 | 0 | 1 | 1 | 1 | -| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 1 | 0 | 0 | -| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | 1 | -| $\textbf{x}_6$ | 0 | 1 | 0 | 1 | 0 | -| $\textbf{x}_7$ | 0 | 0 | 1 | 1 | 1 | -| $\textbf{x}_8$ | 0 | 0 | 1 | 0 | 0 | +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & y \\ + \hline \textbf{x}_1 & 1 & 0 & 0 & 0 & 1 \\ + \textbf{x}_2 & 1 & 0 & 1 & 1 & 1 \\ + \textbf{x}_3 & 0 & 1 & 0 & 0 & 1 \\ + \textbf{x}_4 & 0 & 1 & 1 & 0 & 0 \\ + \textbf{x}_5 & 1 & 1 & 0 & 1 & 1 \\ + \textbf{x}_6 & 0 & 1 & 0 & 1 & 0 \\ + \textbf{x}_7 & 0 & 0 & 1 & 1 & 1 \\ + \textbf{x}_8 & 0 & 0 & 1 & 0 & 0 \\ + \hline +\end{array} +$$ diff --git a/_site/markdown/18-Learning-From-Examples/exercises/ex_21/question.md b/_site/markdown/18-Learning-From-Examples/exercises/ex_21/question.md index f9ce3466b3..52983445f9 100644 --- a/_site/markdown/18-Learning-From-Examples/exercises/ex_21/question.md +++ b/_site/markdown/18-Learning-From-Examples/exercises/ex_21/question.md @@ -1,6 +1,6 @@ -Figure kernel-machine-figure +Figure kernel-machine-figure showed how a circle at the origin can be linearly separated by mapping from the features $(x_1, x_2)$ to the two dimensions $(x_1^2, x_2^2)$. But what if the circle is not located at the origin? What if it is an diff --git a/_site/markdown/18-Learning-From-Examples/exercises/ex_27/question.md b/_site/markdown/18-Learning-From-Examples/exercises/ex_27/question.md index b661f00cbc..4f1152850d 100644 --- a/_site/markdown/18-Learning-From-Examples/exercises/ex_27/question.md +++ b/_site/markdown/18-Learning-From-Examples/exercises/ex_27/question.md @@ -3,15 +3,23 @@ Consider the following set of examples, each with six inputs and one target output:
-| | | | | | | | | | | | | | | | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -| $\textbf{x}_2$ | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | -| $\textbf{x}_3$ | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | -| $\textbf{x}_5$ | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | -| $\textbf{x}_6$ | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | -| $\textbf{T}$ | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | + + +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & A_5 & A_6 & A_7 & A_8 & A_9 & A_{10} & A_{11} & A_{12} & A_{13} & A_{14} \\ + \hline + \textbf{x}_1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \textbf{x}_2 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 \\ + \textbf{x}_3 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\ + \textbf{x}_4 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\ + \textbf{x}_5 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ + \textbf{x}_6 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 \\ + \textbf{T} & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \hline +\end{array} +$$ + 1. Run the perceptron learning rule on these data and show the diff --git a/_site/markdown/18-Learning-From-Examples/exercises/ex_7/question.md b/_site/markdown/18-Learning-From-Examples/exercises/ex_7/question.md index d1ccfb1a88..f202272a06 100644 --- a/_site/markdown/18-Learning-From-Examples/exercises/ex_7/question.md +++ b/_site/markdown/18-Learning-From-Examples/exercises/ex_7/question.md @@ -1,6 +1,6 @@ -\[nonnegative-gain-exercise\]Suppose that an attribute splits the set of +Suppose that an attribute splits the set of examples $E$ into subsets $E_k$ and that each subset has $p_k$ positive examples and $n_k$ negative examples. 
Show that the attribute has strictly positive information gain unless the ratio diff --git a/_site/markdown/18-Learning-From-Examples/exercises/ex_8/question.md b/_site/markdown/18-Learning-From-Examples/exercises/ex_8/question.md index 6323f57807..45dbba943c 100644 --- a/_site/markdown/18-Learning-From-Examples/exercises/ex_8/question.md +++ b/_site/markdown/18-Learning-From-Examples/exercises/ex_8/question.md @@ -3,15 +3,17 @@ Consider the following data set comprised of three binary input attributes ($A_1, A_2$, and $A_3$) and one binary output:
-| $\quad \textbf{Example}$ | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad Output\space y$ |
-| --- | --- | --- | --- | --- |
-| $\textbf{x}_1$ | 1 | 0 | 0 | 0 |
-| $\textbf{x}_2$ | 1 | 0 | 1 | 0 |
-| $\textbf{x}_3$ | 0 | 1 | 0 | 0 |
-| $\textbf{x}_4$ | 1 | 1 | 1 | 1 |
-| $\textbf{x}_5$ | 1 | 1 | 0 | 1 |
-
-
+$$
+\begin{array}
+  {|c|c|c|c|c|}\hline \textbf{Example} & A_1 & A_2 & A_3 & Output\space y \\
+  \hline \textbf{x}_1 & 1 & 0 & 0 & 0 \\
+  \textbf{x}_2 & 1 & 0 & 1 & 0 \\
+  \textbf{x}_3 & 0 & 1 & 0 & 0 \\
+  \textbf{x}_4 & 1 & 1 & 1 & 1 \\
+  \textbf{x}_5 & 1 & 1 & 0 & 1 \\
+  \hline
+\end{array}
+$$
Use the algorithm in Figure DTL-algorithm (page DTL-algorithm) to
learn a decision tree for these data. Show the computations made to
determine the attribute to split at each node.
diff --git a/_site/markdown/4-Beyond-Classical-Search/exercises/ex_12/question.md b/_site/markdown/4-Beyond-Classical-Search/exercises/ex_12/question.md
index de6337898d..650166a140 100644
--- a/_site/markdown/4-Beyond-Classical-Search/exercises/ex_12/question.md
+++ b/_site/markdown/4-Beyond-Classical-Search/exercises/ex_12/question.md
@@ -4,9 +4,9 @@
We can turn the navigation problem in
Exercise path-planning-exercise into an environment as
follows:
-- The percept will be a list of the positions, *relative to the - agent*, of the visible vertices. The percept does - *not* include the position of the robot! The robot must +- The percept will be a list of the positions, relative to the + agent, of the visible vertices. The percept does + not include the position of the robot! The robot must learn its own position from the map; for now, you can assume that each location has a different “view.”
@@ -15,7 +15,7 @@ follows:
otherwise, the robot stops at the point where its path first intersects an obstacle. If the agent returns a zero motion vector and is at the goal (which is fixed and known), then the environment - teleports the agent to a *random location* (not inside + teleports the agent to a random location (not inside an obstacle).
- The performance measure charges the agent 1 point for each unit of diff --git a/_site/markdown/4-Beyond-Classical-Search/exercises/ex_13/question.md b/_site/markdown/4-Beyond-Classical-Search/exercises/ex_13/question.md index 1cd55f8a00..861069f949 100644 --- a/_site/markdown/4-Beyond-Classical-Search/exercises/ex_13/question.md +++ b/_site/markdown/4-Beyond-Classical-Search/exercises/ex_13/question.md @@ -4,8 +4,8 @@ Suppose that an agent is in a $3 \times 3$ maze environment like the one shown in Figure maze-3x3-figure. The agent knows that its initial location is (1,1), that the goal is at (3,3), and that the -actions *Up*, *Down*, *Left*, *Right* have their usual -effects unless blocked by a wall. The agent does *not* know +actions Up, Down, Left, Right have their usual +effects unless blocked by a wall. The agent does not know where the internal walls are. In any given state, the agent perceives the set of legal actions; it can also tell whether the state is one it has visited before.
@@ -20,7 +20,7 @@ has visited before.
3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?
-Notice that this contingency plan is a solution for *every -possible environment* fitting the given description. Therefore, +Notice that this contingency plan is a solution for every +possible environment fitting the given description. Therefore, interleaving of search and execution is not strictly necessary even in unknown environments. diff --git a/_site/markdown/4-Beyond-Classical-Search/exercises/ex_5/question.md b/_site/markdown/4-Beyond-Classical-Search/exercises/ex_5/question.md index 7682a672a6..796de94938 100644 --- a/_site/markdown/4-Beyond-Classical-Search/exercises/ex_5/question.md +++ b/_site/markdown/4-Beyond-Classical-Search/exercises/ex_5/question.md @@ -1,10 +1,10 @@ -The **And-Or-Graph-Search** algorithm in +The And-Or-Graph-Search algorithm in Figure and-or-graph-search-algorithm checks for repeated states only on the path from the root to the current state. Suppose that, in addition, the algorithm were to store -*every* visited state and check against that list. (See in +every visited state and check against that list. (See in Figure breadth-first-search-algorithm for an example.) Determine the information that should be stored and how the algorithm should use that information when a repeated state is found. diff --git a/_site/markdown/4-Beyond-Classical-Search/exercises/ex_6/question.md b/_site/markdown/4-Beyond-Classical-Search/exercises/ex_6/question.md index b5f1f2fad8..f84f568192 100644 --- a/_site/markdown/4-Beyond-Classical-Search/exercises/ex_6/question.md +++ b/_site/markdown/4-Beyond-Classical-Search/exercises/ex_6/question.md @@ -1,9 +1,9 @@ -Explain precisely how to modify the **And-Or-Graph-Search** algorithm to +Explain precisely how to modify the And-Or-Graph-Search algorithm to generate a cyclic plan if no acyclic plan exists. 
You will need to deal with three issues: labeling the plan steps so that a cyclic plan can -point back to an earlier part of the plan, modifying **Or-Search** so that it +point back to an earlier part of the plan, modifying Or-Search so that it continues to look for acyclic plans after finding a cyclic plan, and augmenting the plan representation to indicate whether a plan is cyclic. Show how your algorithm works on (a) the slippery vacuum world, and (b) diff --git a/_site/markdown/4-Beyond-Classical-Search/exercises/ex_9/question.md b/_site/markdown/4-Beyond-Classical-Search/exercises/ex_9/question.md index 53af6ded44..507ab4f07a 100644 --- a/_site/markdown/4-Beyond-Classical-Search/exercises/ex_9/question.md +++ b/_site/markdown/4-Beyond-Classical-Search/exercises/ex_9/question.md @@ -7,8 +7,8 @@ what happens when the assumption does not hold. Does the notion of optimality still make sense in this context, or does it require modification? Consider also various possible definitions of the “cost” of executing an action in a belief state; for example, we could use the -*minimum* of the physical costs; or the -*maximum*; or a cost *interval* with the lower +minimum of the physical costs; or the +maximum; or a cost interval with the lower bound being the minimum cost and the upper bound being the maximum; or just keep the set of all possible costs for that action. 
For each of these, explore whether A* (with modifications if necessary) can return diff --git a/_site/markdown/6-Constraint-Satisfaction-Problems/exercises/ex_14/question.md b/_site/markdown/6-Constraint-Satisfaction-Problems/exercises/ex_14/question.md index b8524e2a43..18efd18990 100644 --- a/_site/markdown/6-Constraint-Satisfaction-Problems/exercises/ex_14/question.md +++ b/_site/markdown/6-Constraint-Satisfaction-Problems/exercises/ex_14/question.md @@ -1,7 +1,7 @@ -AC-3 puts back on the queue *every* arc -($X_{k}, X_{i}$) whenever *any* value is deleted from the +AC-3 puts back on the queue every arc +($X_{k}, X_{i}$) whenever any value is deleted from the domain of $X_{i}$, even if each value of $X_{k}$ is consistent with several remaining values of $X_{i}$. Suppose that, for every arc ($X_{k}, X_{i}$), we keep track of the number of remaining values of diff --git a/_site/markdown/8-First-Order-Logic/exercises/ex_1/question.md b/_site/markdown/8-First-Order-Logic/exercises/ex_1/question.md index 720453f7d5..af9cbf49bf 100644 --- a/_site/markdown/8-First-Order-Logic/exercises/ex_1/question.md +++ b/_site/markdown/8-First-Order-Logic/exercises/ex_1/question.md @@ -9,14 +9,14 @@ about the country—it represents facts with a map language. The two-dimensional structure of the map corresponds to the two-dimensional surface of the area.
-1. Give five examples of *symbols* in the map language.
+1. Give five examples of symbols in the map language.
-2. An *explicit* sentence is a sentence that the creator +2. An explicit sentence is a sentence that the creator of the representation actually writes down. An - *implicit* sentence is a sentence that results from + implicit sentence is a sentence that results from explicit sentences because of properties of the analogical - representation. Give three examples each of *implicit* - and *explicit* sentences in the map language.
+ representation. Give three examples each of implicit + and explicit sentences in the map language.
3. Give three examples of facts about the physical structure of your country that cannot be represented in the map language.
diff --git a/_site/markdown/9-Inference-In-First-Order-Logic/exercises/ex_14/question.md b/_site/markdown/9-Inference-In-First-Order-Logic/exercises/ex_14/question.md index 1676b73d05..e213c2d19e 100644 --- a/_site/markdown/9-Inference-In-First-Order-Logic/exercises/ex_14/question.md +++ b/_site/markdown/9-Inference-In-First-Order-Logic/exercises/ex_14/question.md @@ -4,7 +4,7 @@ Suppose we put into a logical knowledge base a segment of the U.S. census data listing the age, city of residence, date of birth, and mother of every person, using social security numbers as identifying constants for each person. Thus, George’s age is given by -${Age}(\mbox{{443}}-{65}-{1282}}, {56})$. Which of the following +${Age}(443-65-1282, 56)$. Which of the following indexing schemes S1–S5 enable an efficient solution for which of the queries Q1–Q4 (assuming normal backward chaining)?

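The indexing trade-off behind S1–S5 can be sketched concretely. A hypothetical illustration (the facts, predicate names, and index shapes below are invented for this sketch, not drawn from the census data): contrast a predicate-only index with a predicate-plus-first-argument index under backward chaining.

```python
from collections import defaultdict

# Hypothetical mini knowledge base in the spirit of the census example;
# facts are (predicate, arg1, arg2) triples with invented identifiers.
facts = [
    ("Age", "443-65-1282", 56),
    ("Age", "443-65-1283", 34),
    ("Mother", "443-65-1283", "443-65-1282"),
]

by_predicate = defaultdict(list)       # coarse index: predicate only
by_pred_first_arg = defaultdict(list)  # finer index: predicate + first argument
for fact in facts:
    by_predicate[fact[0]].append(fact)
    by_pred_first_arg[(fact[0], fact[1])].append(fact)

# A query such as Age(443-65-1282, x): the finer index touches a single
# candidate fact, while the predicate-only index must scan every Age fact.
print(len(by_pred_first_arg[("Age", "443-65-1282")]))  # 1
print(len(by_predicate["Age"]))                        # 2
```

A query keyed on the second argument (e.g. "who is 56?") gets no help from either index above, which is exactly the kind of asymmetry the S1–S5 comparison probes.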
diff --git a/_site/planning-exercises/ex_14/index.html b/_site/planning-exercises/ex_14/index.html index 02c10d5bc9..09d5c4c030 100644 --- a/_site/planning-exercises/ex_14/index.html +++ b/_site/planning-exercises/ex_14/index.html @@ -166,8 +166,7 @@

-Examine the definition of **bidirectional -search** in Chapter search-chapter.
+Examine the definition of bidirectional search in Chapter search-chapter.
1. Would bidirectional state-space search be a good idea for planning?
@@ -198,8 +197,7 @@

-Examine the definition of **bidirectional -search** in Chapter search-chapter.
+Examine the definition of bidirectional search in Chapter search-chapter.
1. Would bidirectional state-space search be a good idea for planning?
diff --git a/_site/planning-exercises/index.html b/_site/planning-exercises/index.html index 115217f2ae..0656a7e773 100644 --- a/_site/planning-exercises/index.html +++ b/_site/planning-exercises/index.html @@ -466,8 +466,7 @@

10. Classical Planning

-Examine the definition of **bidirectional -search** in Chapter search-chapter.
+Examine the definition of bidirectional search in Chapter search-chapter.
1. Would bidirectional state-space search be a good idea for planning?
diff --git a/_site/probability-exercises/ex_13/index.html b/_site/probability-exercises/ex_13/index.html index efbce69928..cfa265418b 100644 --- a/_site/probability-exercises/ex_13/index.html +++ b/_site/probability-exercises/ex_13/index.html @@ -174,7 +174,7 @@

bit) has been corrupted. Suppose we want to ensure that the correct message is received with probability at least $1-\delta$. What is the maximum feasible value of $n$? Calculate this value for the case -$\epsilon0.001$, $\delta0.01$. +$\epsilon = 0.001$, $\delta = 0.01$.
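The quantitative part of this question reduces to requiring that all $n$ bits survive: $(1-\epsilon)^n \ge 1-\delta$, so $n \le \ln(1-\delta)/\ln(1-\epsilon)$. A sketch of the arithmetic, assuming independent per-bit corruption as the exercise sets up:

```python
import math

def max_message_length(eps, delta):
    # Largest n with (1 - eps)**n >= 1 - delta, i.e. all n independently
    # delivered bits arrive uncorrupted with probability at least 1 - delta.
    return math.floor(math.log(1 - delta) / math.log(1 - eps))

print(max_message_length(0.001, 0.01))  # 10
```

Here $0.999^{10} \approx 0.9900$ just clears the $0.99$ threshold, while $0.999^{11}$ falls below it.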

@@ -203,7 +203,7 @@

bit) has been corrupted. Suppose we want to ensure that the correct message is received with probability at least $1-\delta$. What is the maximum feasible value of $n$? Calculate this value for the case -$\epsilon0.001$, $\delta0.01$. +$\epsilon = 0.001$, $\delta = 0.01$.

diff --git a/_site/probability-exercises/ex_21/index.html b/_site/probability-exercises/ex_21/index.html index fcd91726df..ebb0af7b46 100644 --- a/_site/probability-exercises/ex_21/index.html +++ b/_site/probability-exercises/ex_21/index.html @@ -167,9 +167,9 @@

Show that the statement of conditional independence -$${\textbf{P}}(X,Y Z) = {\textbf{P}}(XZ) {\textbf{P}}(YZ)$$ +$${\textbf{P}}(X,Y | Z) = {\textbf{P}}(X | Z) {\textbf{P}}(Y | Z)$$ is equivalent to each of the statements -$${\textbf{P}}(XY,Z) = {\textbf{P}}(XZ) \quad\mbox{and}\quad {\textbf{P}}(YX,Z) = {\textbf{P}}(YZ)\ .$$ +$${\textbf{P}}(X | Y,Z) = {\textbf{P}}(X | Z) \quad\mbox{and}\quad {\textbf{P}}(Y | X,Z) = {\textbf{P}}(Y | Z)\ .$$
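One half of the requested equivalence follows in two steps from the product rule, assuming ${\textbf{P}}(Y | Z) > 0$ (a sketch of one direction, not the full answer):

```latex
{\textbf{P}}(X | Y,Z)
  = \frac{{\textbf{P}}(X,Y | Z)}{{\textbf{P}}(Y | Z)}
  = \frac{{\textbf{P}}(X | Z)\,{\textbf{P}}(Y | Z)}{{\textbf{P}}(Y | Z)}
  = {\textbf{P}}(X | Z)
```

The converse direction runs the same chain backwards by multiplying both sides by ${\textbf{P}}(Y | Z)$, and the statement for $Y$ is symmetric.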

@@ -191,9 +191,9 @@

Show that the statement of conditional independence -$${\textbf{P}}(X,Y Z) = {\textbf{P}}(XZ) {\textbf{P}}(YZ)$$ +$${\textbf{P}}(X,Y | Z) = {\textbf{P}}(X | Z) {\textbf{P}}(Y | Z)$$ is equivalent to each of the statements -$${\textbf{P}}(XY,Z) = {\textbf{P}}(XZ) \quad\mbox{and}\quad {\textbf{P}}(YX,Z) = {\textbf{P}}(YZ)\ .$$ +$${\textbf{P}}(X | Y,Z) = {\textbf{P}}(X | Z) \quad\mbox{and}\quad {\textbf{P}}(Y | X,Z) = {\textbf{P}}(Y | Z)\ .$$

diff --git a/_site/probability-exercises/ex_4/index.html b/_site/probability-exercises/ex_4/index.html index 2c50445c20..41c5059271 100644 --- a/_site/probability-exercises/ex_4/index.html +++ b/_site/probability-exercises/ex_4/index.html @@ -167,13 +167,13 @@

Would it be rational for an agent to hold the three beliefs -$P(A) {0.4}$, $P(B) {0.3}$, and -$P(A \lor B) {0.5}$? If so, what range of probabilities would +$P(A) = 0.4$, $P(B) = 0.3$, and +$P(A \lor B) = 0.5$? If so, what range of probabilities would be rational for the agent to hold for $A \land B$? Make up a table like the one in Figure de-finetti-table, and show how it supports your argument about rationality. Then draw another version of the table where $P(A \lor B) -{0.7}$. Explain why it is rational to have this probability, += 0.7$. Explain why it is rational to have this probability, even though the table shows one case that is a loss and three that just break even. (Hint: what is Agent 1 committed to about the probability of each of the four cases, especially the case that is a @@ -199,13 +199,13 @@

Would it be rational for an agent to hold the three beliefs -$P(A) {0.4}$, $P(B) {0.3}$, and -$P(A \lor B) {0.5}$? If so, what range of probabilities would +$P(A) = 0.4$, $P(B) = 0.3$, and +$P(A \lor B) = 0.5$? If so, what range of probabilities would be rational for the agent to hold for $A \land B$? Make up a table like the one in Figure de-finetti-table, and show how it supports your argument about rationality. Then draw another version of the table where $P(A \lor B) -{0.7}$. Explain why it is rational to have this probability, += 0.7$. Explain why it is rational to have this probability, even though the table shows one case that is a loss and three that just break even. (Hint: what is Agent 1 committed to about the probability of each of the four cases, especially the case that is a diff --git a/_site/probability-exercises/index.html b/_site/probability-exercises/index.html index 00b8cd0a78..1c41072e90 100644 --- a/_site/probability-exercises/index.html +++ b/_site/probability-exercises/index.html @@ -216,13 +216,13 @@

13. Quantifying Uncertainity

Would it be rational for an agent to hold the three beliefs -$P(A) {0.4}$, $P(B) {0.3}$, and -$P(A \lor B) {0.5}$? If so, what range of probabilities would +$P(A) = 0.4$, $P(B) = 0.3$, and +$P(A \lor B) = 0.5$? If so, what range of probabilities would be rational for the agent to hold for $A \land B$? Make up a table like the one in Figure de-finetti-table, and show how it supports your argument about rationality. Then draw another version of the table where $P(A \lor B) -{0.7}$. Explain why it is rational to have this probability, += 0.7$. Explain why it is rational to have this probability, even though the table shows one case that is a loss and three that just break even. (Hint: what is Agent 1 committed to about the probability of each of the four cases, especially the case that is a @@ -474,7 +474,7 @@

13. Quantifying Uncertainity

bit) has been corrupted. Suppose we want to ensure that the correct message is received with probability at least $1-\delta$. What is the maximum feasible value of $n$? Calculate this value for the case -$\epsilon0.001$, $\delta0.01$. +$\epsilon = 0.001$, $\delta = 0.01$.

@@ -635,9 +635,9 @@

13. Quantifying Uncertainity

Show that the statement of conditional independence -$${\textbf{P}}(X,Y Z) = {\textbf{P}}(XZ) {\textbf{P}}(YZ)$$ +$${\textbf{P}}(X,Y | Z) = {\textbf{P}}(X | Z) {\textbf{P}}(Y | Z)$$ is equivalent to each of the statements -$${\textbf{P}}(XY,Z) = {\textbf{P}}(XZ) \quad\mbox{and}\quad {\textbf{P}}(YX,Z) = {\textbf{P}}(YZ)\ .$$ +$${\textbf{P}}(X | Y,Z) = {\textbf{P}}(X | Z) \quad\mbox{and}\quad {\textbf{P}}(Y | X,Z) = {\textbf{P}}(Y | Z)\ .$$

diff --git a/_site/question_bank/index.html b/_site/question_bank/index.html index 6cdcc88913..63a7f781de 100644 --- a/_site/question_bank/index.html +++ b/_site/question_bank/index.html @@ -2030,11 +2030,11 @@


-The **And-Or-Graph-Search** algorithm in +The And-Or-Graph-Search algorithm in Figure and-or-graph-search-algorithm checks for repeated states only on the path from the root to the current state. Suppose that, in addition, the algorithm were to store -*every* visited state and check against that list. (See in +every visited state and check against that list. (See in Figure breadth-first-search-algorithm for an example.) Determine the information that should be stored and how the algorithm should use that information when a repeated state is found. @@ -2058,10 +2058,10 @@


-Explain precisely how to modify the **And-Or-Graph-Search** algorithm to +Explain precisely how to modify the And-Or-Graph-Search algorithm to generate a cyclic plan if no acyclic plan exists. You will need to deal with three issues: labeling the plan steps so that a cyclic plan can -point back to an earlier part of the plan, modifying **Or-Search** so that it +point back to an earlier part of the plan, modifying Or-Search so that it continues to look for acyclic plans after finding a cyclic plan, and augmenting the plan representation to indicate whether a plan is cyclic. Show how your algorithm works on (a) the slippery vacuum world, and (b) @@ -2139,8 +2139,8 @@


optimality still make sense in this context, or does it require modification? Consider also various possible definitions of the “cost” of executing an action in a belief state; for example, we could use the -*minimum* of the physical costs; or the -*maximum*; or a cost *interval* with the lower +minimum of the physical costs; or the +maximum; or a cost interval with the lower bound being the minimum cost and the upper bound being the maximum; or just keep the set of all possible costs for that action. For each of these, explore whether A* (with modifications if necessary) can return @@ -2202,9 +2202,9 @@


Exercise path-planning-exercise into an environment as follows:
-- The percept will be a list of the positions, *relative to the - agent*, of the visible vertices. The percept does - *not* include the position of the robot! The robot must +- The percept will be a list of the positions, relative to the + agent, of the visible vertices. The percept does + not include the position of the robot! The robot must learn its own position from the map; for now, you can assume that each location has a different “view.”
@@ -2213,7 +2213,7 @@


otherwise, the robot stops at the point where its path first intersects an obstacle. If the agent returns a zero motion vector and is at the goal (which is fixed and known), then the environment - teleports the agent to a *random location* (not inside + teleports the agent to a random location (not inside an obstacle).
- The performance measure charges the agent 1 point for each unit of @@ -2267,8 +2267,8 @@


maze environment like the one shown in Figure maze-3x3-figure. The agent knows that its initial location is (1,1), that the goal is at (3,3), and that the -actions *Up*, *Down*, *Left*, *Right* have their usual -effects unless blocked by a wall. The agent does *not* know +actions Up, Down, Left, Right have their usual +effects unless blocked by a wall. The agent does not know where the internal walls are. In any given state, the agent perceives the set of legal actions; it can also tell whether the state is one it has visited before.
@@ -2283,8 +2283,8 @@


3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?
-Notice that this contingency plan is a solution for *every -possible environment* fitting the given description. Therefore, +Notice that this contingency plan is a solution for every +possible environment fitting the given description. Therefore, interleaving of search and execution is not strictly necessary even in unknown environments.

@@ -3482,8 +3482,8 @@


-AC-3 puts back on the queue *every* arc -($X_{k}, X_{i}$) whenever *any* value is deleted from the +AC-3 puts back on the queue every arc +($X_{k}, X_{i}$) whenever any value is deleted from the domain of $X_{i}$, even if each value of $X_{k}$ is consistent with several remaining values of $X_{i}$. Suppose that, for every arc ($X_{k}, X_{i}$), we keep track of the number of remaining values of @@ -4623,14 +4623,14 @@


two-dimensional structure of the map corresponds to the two-dimensional surface of the area.
-1. Give five examples of *symbols* in the map language.
+1. Give five examples of symbols in the map language.
-2. An *explicit* sentence is a sentence that the creator +2. An explicit sentence is a sentence that the creator of the representation actually writes down. An - *implicit* sentence is a sentence that results from + implicit sentence is a sentence that results from explicit sentences because of properties of the analogical - representation. Give three examples each of *implicit* - and *explicit* sentences in the map language.
+ representation. Give three examples each of implicit + and explicit sentences in the map language.
3. Give three examples of facts about the physical structure of your country that cannot be represented in the map language.
@@ -6043,7 +6043,7 @@


U.S. census data listing the age, city of residence, date of birth, and mother of every person, using social security numbers as identifying constants for each person. Thus, George’s age is given by -${Age}(\mbox443-{65}-{1282}}, {56})$. Which of the following +${Age}(443-65-1282, 56)$. Which of the following indexing schemes S1–S5 enable an efficient solution for which of the queries Q1–Q4 (assuming normal backward chaining)?

@@ -6838,8 +6838,7 @@


-Examine the definition of **bidirectional -search** in Chapter search-chapter.
+Examine the definition of bidirectional search in Chapter search-chapter.
1. Would bidirectional state-space search be a good idea for planning?
@@ -7939,7 +7938,7 @@


page description-logic-ex) into first-order logic, and comment on the result:
$$ -And(Man, AtLeast(3,Son), AtMost(2,Daughter),
All(Son,And(Unemployed,Married, All(Spouse,Doctor ))),
All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) +And(Man, AtLeast(3,Son), AtMost(2,Daughter), \\All(Son,And(Unemployed,Married, All(Spouse,Doctor ))), \\All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) $$

@@ -8149,13 +8148,13 @@


Would it be rational for an agent to hold the three beliefs -$P(A) {0.4}$, $P(B) {0.3}$, and -$P(A \lor B) {0.5}$? If so, what range of probabilities would +$P(A) = 0.4$, $P(B) = 0.3$, and +$P(A \lor B) = 0.5$? If so, what range of probabilities would be rational for the agent to hold for $A \land B$? Make up a table like the one in Figure de-finetti-table, and show how it supports your argument about rationality. Then draw another version of the table where $P(A \lor B) -{0.7}$. Explain why it is rational to have this probability, += 0.7$. Explain why it is rational to have this probability, even though the table shows one case that is a loss and three that just break even. (Hint: what is Agent 1 committed to about the probability of each of the four cases, especially the case that is a @@ -8407,7 +8406,7 @@


bit) has been corrupted. Suppose we want to ensure that the correct message is received with probability at least $1-\delta$. What is the maximum feasible value of $n$? Calculate this value for the case -$\epsilon0.001$, $\delta0.01$. +$\epsilon = 0.001$, $\delta = 0.01$.

@@ -8568,9 +8567,9 @@


Show that the statement of conditional independence -$${\textbf{P}}(X,Y Z) = {\textbf{P}}(XZ) {\textbf{P}}(YZ)$$ +$${\textbf{P}}(X,Y | Z) = {\textbf{P}}(X | Z) {\textbf{P}}(Y | Z)$$ is equivalent to each of the statements -$${\textbf{P}}(XY,Z) = {\textbf{P}}(XZ) \quad\mbox{and}\quad {\textbf{P}}(YX,Z) = {\textbf{P}}(YZ)\ .$$ +$${\textbf{P}}(X | Y,Z) = {\textbf{P}}(X | Z) \quad\mbox{and}\quad {\textbf{P}}(Y | X,Z) = {\textbf{P}}(Y | Z)\ .$$

@@ -8919,8 +8918,8 @@


3. Let the parents of $X$ be $\textbf{U} \cup \textbf{V}$ and the parents of $Y$ be $\textbf{V} \cup \textbf{W}$, where $\textbf{U}$ and $\textbf{W}$ are disjoint. The formulas for the new CPTs after arc reversal are as follows: $$\begin{aligned} - {\textbf{P}}(Y\textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y\textbf{V},\textbf{W}, x) {\textbf{P}}(x\textbf{U}, \textbf{V}) \\ - {\textbf{P}}(X\textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(YX, \textbf{V}, \textbf{W}) {\textbf{P}}(X\textbf{U}, \textbf{V}) / {\textbf{P}}(Y\textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ + {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y | \textbf{V},\textbf{W}, x) {\textbf{P}}(x | \textbf{U}, \textbf{V}) \\ + {\textbf{P}}(X | \textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(Y | X, \textbf{V}, \textbf{W}) {\textbf{P}}(X | \textbf{U}, \textbf{V}) / {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ Prove that the new network expresses the same joint distribution over all variables as the original network.
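Before attempting the algebra, the arc-reversal identity can be spot-checked numerically. A sketch on one random instance (binary variables; priors on $U$, $V$, $W$; $X$ with parents $\{U,V\}$; $Y$ with parents $\{X,V,W\}$; all CPTs randomly generated):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_cpt(*parent_dims):
    # Random conditional distribution of a binary child given binary
    # parents; the child axis is last and is normalized to sum to 1.
    p = rng.random(parent_dims + (2,))
    return p / p.sum(axis=-1, keepdims=True)

pU, pV, pW = rand_cpt(), rand_cpt(), rand_cpt()
pX_uv = rand_cpt(2, 2)      # P(X | U, V)
pY_xvw = rand_cpt(2, 2, 2)  # P(Y | X, V, W)

# Joint distribution of the original network, indexed [u, v, w, x, y].
orig = np.einsum('u,v,w,uvx,xvwy->uvwxy', pU, pV, pW, pX_uv, pY_xvw)

# New CPTs from the arc-reversal formulas stated in the exercise.
pY_uvw = np.einsum('xvwy,uvx->uvwy', pY_xvw, pX_uv)
pX_uvwy = (np.einsum('xvwy,uvx->uvwxy', pY_xvw, pX_uv)
           / pY_uvw[:, :, :, None, :])

# Joint distribution of the reversed network: identical to the original.
new = np.einsum('u,v,w,uvwy,uvwxy->uvwxy', pU, pV, pW, pY_uvw, pX_uvwy)
print(np.allclose(orig, new))  # True
```

The check mirrors the proof: substituting the new CPTs into the reversed network's factorization cancels ${\textbf{P}}(Y|\textbf{U},\textbf{V},\textbf{W})$ and recovers the original factorization.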

@@ -9274,9 +9273,9 @@


1. ${\textbf{P}}(B,I,M) = {\textbf{P}}(B){\textbf{P}}(I){\textbf{P}}(M)$.
- 2. ${\textbf{P}}(JG) = {\textbf{P}}(JG,I)$.
+ 2. ${\textbf{P}}(J|G) = {\textbf{P}}(J|G,I)$.
- 3. ${\textbf{P}}(MG,B,I) = {\textbf{P}}(MG,B,I,J)$.
+ 3. ${\textbf{P}}(M|G,B,I) = {\textbf{P}}(M|G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
@@ -9292,7 +9291,7 @@


Figure politics-figure?
5. Suppose we want to add the variable - $P{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.
politics-figure @@ -9318,9 +9317,9 @@


1. ${\textbf{P}}(B,I,M) = {\textbf{P}}(B){\textbf{P}}(I){\textbf{P}}(M)$.
- 2. ${\textbf{P}}(JG) = {\textbf{P}}(JG,I)$.
+ 2. ${\textbf{P}}(J|G) = {\textbf{P}}(J|G,I)$.
- 3. ${\textbf{P}}(MG,B,I) = {\textbf{P}}(MG,B,I,J)$.
+ 3. ${\textbf{P}}(M|G,B,I) = {\textbf{P}}(M|G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
@@ -9336,7 +9335,7 @@


Figure politics-figure?
5. Suppose we want to add the variable - $P{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.

@@ -9626,16 +9625,16 @@


This exercise develops a space-efficient variant of the forward–backward algorithm described in Figure forward-backward-algorithm (page forward-backward-algorithm). -We wish to compute $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$$ for -$$k=1,\ldots ,t$$. This will be done with a divide-and-conquer +We wish to compute $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$ for +$k=1,\ldots ,t$. This will be done with a divide-and-conquer approach.
1. Suppose, for simplicity, that $t$ is odd, and let the halfway point - be $h=(t+1)/2$. Show that $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $$ + be $h=(t+1)/2$. Show that $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $ can be computed for $k=1,\ldots ,h$ given just the initial forward message - $$\textbf{f}_{1:0}$$, the backward message $$\textbf{b}_{h+1:t}$$, and the evidence - $$\textbf{e}_{1:h}$$.
+ $\textbf{f}_{1:0}$, the backward message $\textbf{b}_{h+1:t}$, and the evidence + $\textbf{e}_{1:h}$.
2. Show a similar result for the second half of the sequence.
@@ -9725,7 +9724,7 @@


an equal probability of moving to any neighboring square. What if those assumptions are wrong? Suppose that the initial location is actually chosen uniformly from the northwest quadrant of the room and the action -actually tends to move southeast\[hmm-robot-southeast-page\]. Keeping +actually tends to move southeast. Keeping the HMM model fixed, explore the effect on localization and path accuracy as the southeasterly tendency increases, for different values of $\epsilon$. @@ -9858,24 +9857,24 @@


a series of distinct maneuvers that the missile may attempt to track. A Bayesian network representation of such a switching Kalman filter model is shown in -Figure switching-kf-figure. +Figure switching-kf-figure.

1. Suppose that the discrete state $S_t$ has $k$ possible values and that the prior continuous state estimate - $${\textbf{P}}(\textbf{X}_0)$$ is a multivariate + ${\textbf{P}}(\textbf{X}_0)$ is a multivariate Gaussian distribution. Show that the prediction - $${\textbf{P}}(\textbf{X}_1)$$ is a mixture of + ${\textbf{P}}(\textbf{X}_1)$ is a mixture of Gaussians—that is, a weighted sum of Gaussians such - that the weights sum to 1. + that the weights sum to 1.

2. Show that if the current continuous state estimate - $${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$$ is a mixture of $m$ Gaussians, + ${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$ is a mixture of $m$ Gaussians, then in the general case the updated state estimate - $${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$$ will be a mixture of - $km$ Gaussians. + ${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$ will be a mixture of + $km$ Gaussians.

3. What aspect of the temporal process do the weights in the Gaussian - mixture represent? + mixture represent?

The results in (a) and (b) show that the representation of the posterior grows without limit even for switching Kalman filters, which are among @@ -10011,22 +10010,16 @@


For the DBN specified in Exercise sleep1-exercise and for the evidence values
-$$ -\textbf{e}_1 = not\space red\space eyes,\space not\space sleeping\space in\space class -$$ -$$ -\textbf{e}_2 = red\space eyes,\space not\space sleeping\space in\space class -$$ -$$ -\textbf{e}_3 = red\space eyes,\space sleeping\space in\space class -$$ +$\textbf{e}_1 = not\space red\space eyes,\space not\space sleeping\space in\space class$
+$\textbf{e}_2 = red\space eyes,\space not\space sleeping\space in\space class$
+$\textbf{e}_3 = red\space eyes,\space sleeping\space in\space class$
perform the following computations:
-1. State estimation: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:t})$$ for each +1. State estimation: Compute $P({EnoughSleep}_t | \textbf{e}_{1:t})$ for each of $t = 1,2,3$.
-2. Smoothing: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:3})$$ for each of +2. Smoothing: Compute $P({EnoughSleep}_t | \textbf{e}_{1:3})$ for each of $t = 1,2,3$.
3. Compare the filtered and smoothed probabilities for $t=1$ and $t=2$.
@@ -10497,16 +10490,16 @@


same units as $x$) becomes larger, the individual becomes less risk-averse.
-1. Assume Mary has an exponential utility function with $$R = \$500$$. - Mary is given the choice between receiving $$\$500$$ with certainty +1. Assume Mary has an exponential utility function with $R = \$500$. + Mary is given the choice between receiving $\$500$ with certainty (probability 1) or participating in a lottery which has a 60% probability of winning \$5000 and a 40% probability of winning nothing. Assuming Marry acts rationally, which option would she choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% - probability of winning $$\$500$$ and a 50% probability of winning + probability of winning $\$500$ and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to @@ -10533,13 +10526,13 @@


risk-averse.
1. Assume Mary has an exponential utility function with $R = \$400$.
-    Mary is given the choice between receiving $$\$400$$ with certainty
+    Mary is given the choice between receiving $\$400$ with certainty
     (probability 1) or participating in a lottery which has a 60%
     probability of winning \$5000 and a 40% probability of winning
     nothing. Assuming Mary acts rationally, which option would she
     choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning \$500 and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an @@ -10560,9 +10553,9 @@


Alex is given the choice between two games. In Game 1, a fair coin is -flipped and if it comes up heads, Alex receives $$\$100$$. If the coin comes +flipped and if it comes up heads, Alex receives $\$100$. If the coin comes up tails, Alex receives nothing. In Game 2, a fair coin is flipped -twice. Each time the coin comes up heads, Alex receives $$\$50$$, and Alex +twice. Each time the coin comes up heads, Alex receives $\$50$, and Alex receives nothing for each coin flip that comes up tails. Assuming that Alex has a monotonically increasing utility function for money in the range \[\$0, \$100\], show mathematically that if Alex prefers Game 2 to @@ -10707,10 +10700,10 @@


is time to carry out at most one test, and that $t_1$ is the test of $c_1$ and costs \$50.
-A car can be in good shape (quality $$q^+$$) or bad shape (quality $q^-$), +A car can be in good shape (quality $q^+$) or bad shape (quality $q^-$), and the tests might help indicate what shape the car is in. Car $c_1$ -costs \$1,500, and its market value is $$\$2,000$$ if it is in good shape; if -not, $$\$700$$ in repairs will be needed to make it in good shape. The buyer’s +costs \$1,500, and its market value is $\$2,000$ if it is in good shape; if +not, $\$700$ in repairs will be needed to make it in good shape. The buyer’s estimate is that $c_1$ has a 70% chance of being in good shape.
1. Draw the decision network that represents this problem.
@@ -10721,9 +10714,9 @@


fail the test given that the car is in good or bad shape. We have the following information:
- $$P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$$
+ $P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$
- $$P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$$
+ $P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$
Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.
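The requested Bayes' theorem computation can be checked mechanically. A short sketch using only the numbers stated in the exercise (70% prior on good shape, test likelihoods 0.8 and 0.35):

```python
# All numbers below are given in the exercise statement.
p_good = 0.70                 # prior P(q+)
p_pass_good = 0.80            # P(pass(c1,t1) | q+(c1))
p_pass_bad = 0.35             # P(pass(c1,t1) | q-(c1))

# Total probability of each test outcome.
p_pass = p_pass_good * p_good + p_pass_bad * (1 - p_good)
p_fail = 1 - p_pass

# Bayes' theorem: P(q+ | outcome) = P(outcome | q+) P(q+) / P(outcome)
p_good_given_pass = p_pass_good * p_good / p_pass
p_good_given_fail = (1 - p_pass_good) * p_good / p_fail

print(round(p_pass, 3), round(p_good_given_pass, 3), round(p_good_given_fail, 3))
# → 0.665 0.842 0.418
```

So passing the test raises the probability of good shape from 0.70 to about 0.84, while failing drops it to about 0.42.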
@@ -11254,11 +11247,17 @@


The following payoff matrix, from @Blinder:1983 by way of @Bernstein:1996, shows a game between politicians and the Federal Reserve.
-| | Fed: contract | Fed: do nothing | Fed: expand |
-| --- | --- | --- | --- |
-| **Pol: contract** | $F=7, P=1$ | $F=9,P=4$ | $F=6,P=6$ |
-| **Pol: do nothing** | $F=8, P=2$ | $F=5,P=5$ | $F=4,P=9$ |
-| **Pol: expand** | $F=3, P=3$ | $F=2,P=7$ | $F=1,P=8$ |
+$$
+\begin{array}
+  {|l|c|c|c|}\hline & \text{Fed: contract} & \text{Fed: do nothing} & \text{Fed: expand} \\
+  \hline
+  \text{Pol: contract} & F=7, P=1 & F=9, P=4 & F=6, P=6 \\
+  \text{Pol: do nothing} & F=8, P=2 & F=5, P=5 & F=4, P=9 \\
+  \text{Pol: expand} & F=3, P=3 & F=2, P=7 & F=1, P=8 \\
+  \hline
+\end{array}
+$$
+
Politicians can expand or contract fiscal policy, while the Fed can expand or contract monetary policy. (And of course either side can @@ -11470,7 +11469,7 @@


-\[nonnegative-gain-exercise\]Suppose that an attribute splits the set of +Suppose that an attribute splits the set of examples $E$ into subsets $E_k$ and that each subset has $p_k$ positive examples and $n_k$ negative examples. Show that the attribute has strictly positive information gain unless the ratio @@ -11491,15 +11490,17 @@


Consider the following data set comprising three binary input attributes ($A_1, A_2$, and $A_3$) and one binary output:
-| $\quad \textbf{Example}$ | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad Output\space y$ | -| --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | -| $\textbf{x}_2$ | 1 | 0 | 1 | 0 | -| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | -| $\textbf{x}_4$ | 1 | 1 | 1 | 1 | -| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | - - +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & Output\space y \\ + \hline \textbf{x}_1 & 1 & 0 & 0 & 0 \\ + \textbf{x}_2 & 1 & 0 & 1 & 0 \\ + \textbf{x}_3 & 0 & 1 & 0 & 0 \\ + \textbf{x}_4 & 1 & 1 & 1 & 1 \\ + \textbf{x}_5 & 1 & 1 & 0 & 1 \\ + \hline +\end{array} +$$ Use the algorithm in Figure DTL-algorithm (page DTL-algorithm) to learn a decision tree for these data. Show the computations made to determine the attribute to split at each node. @@ -11693,17 +11694,20 @@
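The split computations this exercise asks for can be cross-checked in a few lines. A sketch of the information-gain calculation over the five examples above (the full DTL algorithm of Figure DTL-algorithm is not reproduced here, only the attribute-selection step):

```python
from math import log2

# Data set from the exercise: (A1, A2, A3) -> y
examples = [((1, 0, 0), 0), ((1, 0, 1), 0), ((0, 1, 0), 0),
            ((1, 1, 1), 1), ((1, 1, 0), 1)]

def B(q):
    """Entropy of a Boolean variable that is true with probability q."""
    return 0.0 if q in (0.0, 1.0) else -(q * log2(q) + (1 - q) * log2(1 - q))

def gain(attr):
    """Information gain of splitting on attribute index `attr`."""
    n = len(examples)
    total = B(sum(y for _, y in examples) / n)
    remainder = 0.0
    for v in (0, 1):
        subset = [y for x, y in examples if x[attr] == v]
        if subset:
            remainder += len(subset) / n * B(sum(subset) / len(subset))
    return total - remainder

gains = [gain(a) for a in range(3)]
print([round(g, 3) for g in gains])   # → [0.171, 0.42, 0.02]
```

$A_2$ has the largest gain, so it is the root split; the same computation is then repeated on each branch's subset.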


classify the same number of examples, then break the tie using attributes with lower index numbers (e.g., select $A_1$ over $A_2$).
- -| | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad A_y\quad$ | $\quad y\quad$ | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | 1 | -| $\textbf{x}_2$ | 1 | 0 | 1 | 1 | 1 | -| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 1 | 0 | 0 | -| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | 1 | -| $\textbf{x}_6$ | 0 | 1 | 0 | 1 | 0 | -| $\textbf{x}_7$ | 0 | 0 | 1 | 1 | 1 | -| $\textbf{x}_8$ | 0 | 0 | 1 | 0 | 0 | +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & y \\ + \hline \textbf{x}_1 & 1 & 0 & 0 & 0 & 1 \\ + \textbf{x}_2 & 1 & 0 & 1 & 1 & 1 \\ + \textbf{x}_3 & 0 & 1 & 0 & 0 & 1 \\ + \textbf{x}_4 & 0 & 1 & 1 & 0 & 0 \\ + \textbf{x}_5 & 1 & 1 & 0 & 1 & 1 \\ + \textbf{x}_6 & 0 & 1 & 0 & 1 & 0 \\ + \textbf{x}_7 & 0 & 0 & 1 & 1 & 1 \\ + \textbf{x}_8 & 0 & 0 & 1 & 0 & 0 \\ + \hline +\end{array} +$$
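The selection criterion described here ("the test that classifies the most examples correctly") can be tallied mechanically for each candidate root test. A sketch over the eight examples above; it scores single-attribute tests only, with the majority class predicted in each branch:

```python
# Examples from the exercise: (A1, A2, A3, A4) -> y
data = [((1, 0, 0, 0), 1), ((1, 0, 1, 1), 1), ((0, 1, 0, 0), 1), ((0, 1, 1, 0), 0),
        ((1, 1, 0, 1), 1), ((0, 1, 0, 1), 0), ((0, 0, 1, 1), 1), ((0, 0, 1, 0), 0)]

def correct(attr):
    """Examples classified correctly by testing `attr` alone and
    predicting the majority class in each branch."""
    total = 0
    for v in (0, 1):
        ys = [y for x, y in data if x[attr] == v]
        total += max(ys.count(0), ys.count(1))
    return total

print([correct(a) for a in range(4)])   # → [6, 5, 5, 5]
```

$A_1$ classifies six of the eight examples correctly, more than any other single test, so under this criterion it is chosen at the root; the tie-breaking rule by index only matters among the remaining attributes.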

@@ -11796,7 +11800,7 @@


-Figure <ahref=""#">kernel-machine-figure</a> +Figure kernel-machine-figure showed how a circle at the origin can be linearly separated by mapping from the features $(x_1, x_2)$ to the two dimensions $(x_1^2, x_2^2)$. But what if the circle is not located at the origin? What if it is an @@ -11919,15 +11923,23 @@


Consider the following set of examples, each with six inputs and one target output:
-| | | | | | | | | | | | | | | | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -| $\textbf{x}_2$ | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | -| $\textbf{x}_3$ | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | -| $\textbf{x}_5$ | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | -| $\textbf{x}_6$ | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | -| $\textbf{T}$ | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | + + +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & A_5 & A_6 & A_7 & A_8 & A_9 & A_{10} & A_{11} & A_{12} & A_{13} & A_{14} \\ + \hline + \textbf{x}_1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \textbf{x}_2 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 \\ + \textbf{x}_3 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\ + \textbf{x}_4 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\ + \textbf{x}_5 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ + \textbf{x}_6 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 \\ + \textbf{T} & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \hline +\end{array} +$$ + 1. Run the perceptron learning rule on these data and show the diff --git a/markdown/10-Classical-Planning/exercises/ex_14/question.md b/markdown/10-Classical-Planning/exercises/ex_14/question.md index 7c1cad8e4f..0dbdbbadc0 100644 --- a/markdown/10-Classical-Planning/exercises/ex_14/question.md +++ b/markdown/10-Classical-Planning/exercises/ex_14/question.md @@ -1,7 +1,6 @@ -Examine the definition of **bidirectional -search** in Chapter search-chapter.
+Examine the definition of bidirectional search in Chapter search-chapter.
1. Would bidirectional state-space search be a good idea for planning?
diff --git a/markdown/12-Knowledge-Representation/exercises/ex_25/question.md b/markdown/12-Knowledge-Representation/exercises/ex_25/question.md index 726d7a9f5c..ce96be4e07 100644 --- a/markdown/12-Knowledge-Representation/exercises/ex_25/question.md +++ b/markdown/12-Knowledge-Representation/exercises/ex_25/question.md @@ -2,5 +2,5 @@ Translate the following description logic expression (from page description-logic-ex) into first-order logic, and comment on the result:
$$ -And(Man, AtLeast(3,Son), AtMost(2,Daughter),
All(Son,And(Unemployed,Married, All(Spouse,Doctor ))),
All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) +And(Man, AtLeast(3,Son), AtMost(2,Daughter), \\All(Son,And(Unemployed,Married, All(Spouse,Doctor ))), \\All(Daughter,And(Professor, Fills(Department ,Physics,Math)))) $$ diff --git a/markdown/13-Quantifying-Uncertainity/exercises/ex_13/question.md b/markdown/13-Quantifying-Uncertainity/exercises/ex_13/question.md index a9a527564a..e271197dc8 100644 --- a/markdown/13-Quantifying-Uncertainity/exercises/ex_13/question.md +++ b/markdown/13-Quantifying-Uncertainity/exercises/ex_13/question.md @@ -8,4 +8,4 @@ receiver if at most one bit in the entire message (including the parity bit) has been corrupted. Suppose we want to ensure that the correct message is received with probability at least $1-\delta$. What is the maximum feasible value of $n$? Calculate this value for the case -$\epsilon{{\,=\,}}0.001$, $\delta{{\,=\,}}0.01$. +$\epsilon = 0.001$, $\delta = 0.01$. diff --git a/markdown/13-Quantifying-Uncertainity/exercises/ex_21/question.md b/markdown/13-Quantifying-Uncertainity/exercises/ex_21/question.md index c60d483638..7300c9a30e 100644 --- a/markdown/13-Quantifying-Uncertainity/exercises/ex_21/question.md +++ b/markdown/13-Quantifying-Uncertainity/exercises/ex_21/question.md @@ -1,6 +1,6 @@ Show that the statement of conditional independence -$${\textbf{P}}(X,Y {{\,|\,}}Z) = {\textbf{P}}(X{{\,|\,}}Z) {\textbf{P}}(Y{{\,|\,}}Z)$$ +$${\textbf{P}}(X,Y | Z) = {\textbf{P}}(X | Z) {\textbf{P}}(Y | Z)$$ is equivalent to each of the statements -$${\textbf{P}}(X{{\,|\,}}Y,Z) = {\textbf{P}}(X{{\,|\,}}Z) \quad\mbox{and}\quad {\textbf{P}}(Y{{\,|\,}}X,Z) = {\textbf{P}}(Y{{\,|\,}}Z)\ .$$ +$${\textbf{P}}(X | Y,Z) = {\textbf{P}}(X | Z) \quad\mbox{and}\quad {\textbf{P}}(Y | X,Z) = {\textbf{P}}(Y | Z)\ .$$ diff --git a/markdown/13-Quantifying-Uncertainity/exercises/ex_4/question.md b/markdown/13-Quantifying-Uncertainity/exercises/ex_4/question.md index 31a84983df..f9f7e5748c 100644 --- 
a/markdown/13-Quantifying-Uncertainity/exercises/ex_4/question.md +++ b/markdown/13-Quantifying-Uncertainity/exercises/ex_4/question.md @@ -1,13 +1,13 @@ Would it be rational for an agent to hold the three beliefs -$P(A) {{\,=\,}}{0.4}$, $P(B) {{\,=\,}}{0.3}$, and -$P(A \lor B) {{\,=\,}}{0.5}$? If so, what range of probabilities would +$P(A) = 0.4$, $P(B) = 0.3$, and +$P(A \lor B) = 0.5$? If so, what range of probabilities would be rational for the agent to hold for $A \land B$? Make up a table like the one in Figure de-finetti-table, and show how it supports your argument about rationality. Then draw another version of the table where $P(A \lor B) -{{\,=\,}}{0.7}$. Explain why it is rational to have this probability, += 0.7$. Explain why it is rational to have this probability, even though the table shows one case that is a loss and three that just break even. (Hint: what is Agent 1 committed to about the probability of each of the four cases, especially the case that is a diff --git a/markdown/14-Probabilistic-Reasoning/exercises/ex_16/question.md b/markdown/14-Probabilistic-Reasoning/exercises/ex_16/question.md index ad74d44252..8a79b06cde 100644 --- a/markdown/14-Probabilistic-Reasoning/exercises/ex_16/question.md +++ b/markdown/14-Probabilistic-Reasoning/exercises/ex_16/question.md @@ -7,9 +7,9 @@ Consider the Bayes net shown in Figure politics-figure?
5. Suppose we want to add the variable - $P{{\,=\,}}{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.
politics-figure diff --git a/markdown/14-Probabilistic-Reasoning/exercises/ex_17/question.md b/markdown/14-Probabilistic-Reasoning/exercises/ex_17/question.md index bfa28edf6b..929310e705 100644 --- a/markdown/14-Probabilistic-Reasoning/exercises/ex_17/question.md +++ b/markdown/14-Probabilistic-Reasoning/exercises/ex_17/question.md @@ -7,9 +7,9 @@ Consider the Bayes net shown in Figure politics-figure?
5. Suppose we want to add the variable - $P{{\,=\,}}{PresidentialPardon}$ to the network; draw the new + $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.
diff --git a/markdown/14-Probabilistic-Reasoning/exercises/ex_4/question.md b/markdown/14-Probabilistic-Reasoning/exercises/ex_4/question.md index 40b556d51b..93cc8a2b26 100644 --- a/markdown/14-Probabilistic-Reasoning/exercises/ex_4/question.md +++ b/markdown/14-Probabilistic-Reasoning/exercises/ex_4/question.md @@ -18,7 +18,7 @@ parents of $Y$, and all parents of $Y$ also become parents of $X$.
3. Let the parents of $X$ be $\textbf{U} \cup \textbf{V}$ and the parents of $Y$ be $\textbf{V} \cup \textbf{W}$, where $\textbf{U}$ and $\textbf{W}$ are disjoint. The formulas for the new CPTs after arc reversal are as follows: $$\begin{aligned} - {\textbf{P}}(Y{{\,|\,}}\textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y{{\,|\,}}\textbf{V},\textbf{W}, x) {\textbf{P}}(x{{\,|\,}}\textbf{U}, \textbf{V}) \\ - {\textbf{P}}(X{{\,|\,}}\textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(Y{{\,|\,}}X, \textbf{V}, \textbf{W}) {\textbf{P}}(X{{\,|\,}}\textbf{U}, \textbf{V}) / {\textbf{P}}(Y{{\,|\,}}\textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ + {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W}) &=& \sum_x {\textbf{P}}(Y | \textbf{V},\textbf{W}, x) {\textbf{P}}(x | \textbf{U}, \textbf{V}) \\ + {\textbf{P}}(X | \textbf{U},\textbf{V},\textbf{W}, Y) &=& {\textbf{P}}(Y | X, \textbf{V}, \textbf{W}) {\textbf{P}}(X | \textbf{U}, \textbf{V}) / {\textbf{P}}(Y | \textbf{U},\textbf{V},\textbf{W})\ .\end{aligned}$$ Prove that the new network expresses the same joint distribution over all variables as the original network. diff --git a/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_12/question.md b/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_12/question.md index dd8e38ca94..47b8f21c06 100644 --- a/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_12/question.md +++ b/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_12/question.md @@ -6,24 +6,24 @@ system whose behavior switches unpredictably among a set of $k$ distinct a series of distinct maneuvers that the missile may attempt to track. A Bayesian network representation of such a switching Kalman filter model is shown in -Figure switching-kf-figure. +Figure switching-kf-figure.

1. Suppose that the discrete state $S_t$ has $k$ possible values and that the prior continuous state estimate - $${\textbf{P}}(\textbf{X}_0)$$ is a multivariate + ${\textbf{P}}(\textbf{X}_0)$ is a multivariate Gaussian distribution. Show that the prediction - $${\textbf{P}}(\textbf{X}_1)$$ is a mixture of + ${\textbf{P}}(\textbf{X}_1)$ is a mixture of Gaussians—that is, a weighted sum of Gaussians such - that the weights sum to 1. + that the weights sum to 1.

2. Show that if the current continuous state estimate - $${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$$ is a mixture of $m$ Gaussians, + ${\textbf{P}}(\textbf{X}_t|\textbf{e}_{1:t})$ is a mixture of $m$ Gaussians, then in the general case the updated state estimate - $${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$$ will be a mixture of - $km$ Gaussians. + ${\textbf{P}}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$ will be a mixture of + $km$ Gaussians.

3. What aspect of the temporal process do the weights in the Gaussian - mixture represent? + mixture represent?
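The growth claimed in parts (a) and (b) can be illustrated numerically. A minimal 1-D sketch (all numeric values invented for illustration) in which one prediction step of a switching Kalman filter turns a mixture of $m$ Gaussians into $k \cdot m$ components, one per (mode, component) pair:

```python
import itertools

# k = 2 switching modes: (P(mode), drift, transition noise variance).
modes = [(0.9, 1.0, 0.5), (0.1, -1.0, 2.0)]
# Current estimate: a mixture of m = 2 Gaussians (weight, mean, variance).
mixture = [(0.6, 0.0, 1.0), (0.4, 3.0, 1.5)]

def predict(mixture, modes):
    """One prediction step: every (mode, component) pair spawns a component,
    with a linear-Gaussian transition applied per mode."""
    out = []
    for (pm, drift, q), (w, mu, var) in itertools.product(modes, mixture):
        out.append((pm * w, mu + drift, var + q))
    return out

pred = predict(mixture, modes)
print(len(pred), round(sum(w for w, _, _ in pred), 6))   # → 4 1.0
```

Repeating the step multiplies the component count by $k$ each time, which is exactly why the exact posterior representation grows without bound unless components are pruned or merged.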

The results in (a) and (b) show that the representation of the posterior grows without limit even for switching Kalman filters, which are among diff --git a/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_17/question.md b/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_17/question.md index b9475cb885..3505c6a2c2 100644 --- a/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_17/question.md +++ b/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_17/question.md @@ -3,22 +3,16 @@ For the DBN specified in Exercise sleep1-exercise and for the evidence values
-$$
-\textbf{e}_1 = not\space red\space eyes,\space not\space sleeping\space in\space class
-$$
-$$
-\textbf{e}_2 = red\space eyes,\space not\space sleeping\space in\space class
-$$
-$$
-\textbf{e}_3 = red\space eyes,\space sleeping\space in\space class
-$$
+$\textbf{e}_1 = \text{not red eyes, not sleeping in class}$
+$\textbf{e}_2 = \text{red eyes, not sleeping in class}$
+$\textbf{e}_3 = \text{red eyes, sleeping in class}$
perform the following computations:
-1. State estimation: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:t})$$ for each +1. State estimation: Compute $P({EnoughSleep}_t | \textbf{e}_{1:t})$ for each of $t = 1,2,3$.
-2. Smoothing: Compute $$P({EnoughSleep}_t | \textbf{e}_{1:3})$$ for each of +2. Smoothing: Compute $P({EnoughSleep}_t | \textbf{e}_{1:3})$ for each of $t = 1,2,3$.
3. Compare the filtered and smoothed probabilities for $t=1$ and $t=2$.
diff --git a/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_3/question.md b/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_3/question.md index 2308700c38..c5cd752183 100644 --- a/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_3/question.md +++ b/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_3/question.md @@ -3,16 +3,16 @@ This exercise develops a space-efficient variant of the forward–backward algorithm described in Figure forward-backward-algorithm (page forward-backward-algorithm). -We wish to compute $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$$ for -$$k=1,\ldots ,t$$. This will be done with a divide-and-conquer +We wish to compute $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t})$ for +$k=1,\ldots ,t$. This will be done with a divide-and-conquer approach.
1. Suppose, for simplicity, that $t$ is odd, and let the halfway point - be $h=(t+1)/2$. Show that $$\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $$ + be $h=(t+1)/2$. Show that $\textbf{P} (\textbf{X}_k|\textbf{e}_{1:t}) $ can be computed for $k=1,\ldots ,h$ given just the initial forward message - $$\textbf{f}_{1:0}$$, the backward message $$\textbf{b}_{h+1:t}$$, and the evidence - $$\textbf{e}_{1:h}$$.
+ $\textbf{f}_{1:0}$, the backward message $\textbf{b}_{h+1:t}$, and the evidence + $\textbf{e}_{1:h}$.
2. Show a similar result for the second half of the sequence.
diff --git a/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_7/question.md b/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_7/question.md index f21e391735..ea3e5d151f 100644 --- a/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_7/question.md +++ b/markdown/15-Probabilistic-Reasoning-Over-Time/exercises/ex_7/question.md @@ -5,7 +5,7 @@ distribution over locations is uniform and the transition model assumes an equal probability of moving to any neighboring square. What if those assumptions are wrong? Suppose that the initial location is actually chosen uniformly from the northwest quadrant of the room and the action -actually tends to move southeast\[hmm-robot-southeast-page\]. Keeping +actually tends to move southeast. Keeping the HMM model fixed, explore the effect on localization and path accuracy as the southeasterly tendency increases, for different values of $\epsilon$. diff --git a/markdown/16-Making-Simple-Decisions/exercises/ex_14/question.md b/markdown/16-Making-Simple-Decisions/exercises/ex_14/question.md index 963aad0725..37bd606b49 100644 --- a/markdown/16-Making-Simple-Decisions/exercises/ex_14/question.md +++ b/markdown/16-Making-Simple-Decisions/exercises/ex_14/question.md @@ -8,16 +8,16 @@ value (EMV) versus some certain payoff. As $R$ (which is measured in the same units as $x$) becomes larger, the individual becomes less risk-averse.
-1. Assume Mary has an exponential utility function with $$R = \$500$$.
-    Mary is given the choice between receiving $$\$500$$ with certainty
+1. Assume Mary has an exponential utility function with $R = \$500$.
+    Mary is given the choice between receiving $\$500$ with certainty
     (probability 1) or participating in a lottery which has a 60%
     probability of winning \$5000 and a 40% probability of winning
     nothing. Assuming Mary acts rationally, which option would she
     choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% - probability of winning $$\$500$$ and a 50% probability of winning + probability of winning $\$500$ and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to diff --git a/markdown/16-Making-Simple-Decisions/exercises/ex_15/question.md b/markdown/16-Making-Simple-Decisions/exercises/ex_15/question.md index 1f3a249fed..027610f8a6 100644 --- a/markdown/16-Making-Simple-Decisions/exercises/ex_15/question.md +++ b/markdown/16-Making-Simple-Decisions/exercises/ex_15/question.md @@ -9,13 +9,13 @@ same units as $x$) becomes larger, the individual becomes less risk-averse.
1. Assume Mary has an exponential utility function with $R = \$400$.
-    Mary is given the choice between receiving $$\$400$$ with certainty
+    Mary is given the choice between receiving $\$400$ with certainty
     (probability 1) or participating in a lottery which has a 60%
     probability of winning \$5000 and a 40% probability of winning
     nothing. Assuming Mary acts rationally, which option would she
     choose? Show how you derived your answer.
-2. Consider the choice between receiving $$\$100$$ with certainty +2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning \$500 and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an diff --git a/markdown/16-Making-Simple-Decisions/exercises/ex_16/question.md b/markdown/16-Making-Simple-Decisions/exercises/ex_16/question.md index 429f76c1c5..a5b3e29879 100644 --- a/markdown/16-Making-Simple-Decisions/exercises/ex_16/question.md +++ b/markdown/16-Making-Simple-Decisions/exercises/ex_16/question.md @@ -1,9 +1,9 @@ Alex is given the choice between two games. In Game 1, a fair coin is -flipped and if it comes up heads, Alex receives $$\$100$$. If the coin comes +flipped and if it comes up heads, Alex receives $\$100$. If the coin comes up tails, Alex receives nothing. In Game 2, a fair coin is flipped -twice. Each time the coin comes up heads, Alex receives $$\$50$$, and Alex +twice. Each time the coin comes up heads, Alex receives $\$50$, and Alex receives nothing for each coin flip that comes up tails. Assuming that Alex has a monotonically increasing utility function for money in the range \[\$0, \$100\], show mathematically that if Alex prefers Game 2 to diff --git a/markdown/16-Making-Simple-Decisions/exercises/ex_22/question.md b/markdown/16-Making-Simple-Decisions/exercises/ex_22/question.md index 7d87d67331..b6213f9610 100644 --- a/markdown/16-Making-Simple-Decisions/exercises/ex_22/question.md +++ b/markdown/16-Making-Simple-Decisions/exercises/ex_22/question.md @@ -8,10 +8,10 @@ assume that the buyer is deciding whether to buy car $c_1$, that there is time to carry out at most one test, and that $t_1$ is the test of $c_1$ and costs \$50.
-A car can be in good shape (quality $$q^+$$) or bad shape (quality $q^-$), +A car can be in good shape (quality $q^+$) or bad shape (quality $q^-$), and the tests might help indicate what shape the car is in. Car $c_1$ -costs \$1,500, and its market value is $$\$2,000$$ if it is in good shape; if -not, $$\$700$$ in repairs will be needed to make it in good shape. The buyer’s +costs \$1,500, and its market value is $\$2,000$ if it is in good shape; if +not, $\$700$ in repairs will be needed to make it in good shape. The buyer’s estimate is that $c_1$ has a 70% chance of being in good shape.
1. Draw the decision network that represents this problem.
@@ -22,9 +22,9 @@ estimate is that $c_1$ has a 70% chance of being in good shape.
fail the test given that the car is in good or bad shape. We have the following information:
- $$P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$$
+ $P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$
- $$P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$$
+ $P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$
Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.
diff --git a/markdown/17-Making-Complex-Decision/exercises/ex_22/question.md b/markdown/17-Making-Complex-Decision/exercises/ex_22/question.md index 387c1e128c..77bdb05664 100644 --- a/markdown/17-Making-Complex-Decision/exercises/ex_22/question.md +++ b/markdown/17-Making-Complex-Decision/exercises/ex_22/question.md @@ -3,11 +3,17 @@ The following payoff matrix, from @Blinder:1983 by way of Bernstein:1996, shows a game between politicians and the Federal Reserve.
-| | Fed: contract | Fed: do nothing | Fed: expand |
-| --- | --- | --- | --- |
-| **Pol: contract** | $F=7, P=1$ | $F=9,P=4$ | $F=6,P=6$ |
-| **Pol: do nothing** | $F=8, P=2$ | $F=5,P=5$ | $F=4,P=9$ |
-| **Pol: expand** | $F=3, P=3$ | $F=2,P=7$ | $F=1,P=8$ |
+$$
+\begin{array}
+  {|l|c|c|c|}\hline & \text{Fed: contract} & \text{Fed: do nothing} & \text{Fed: expand} \\
+  \hline
+  \text{Pol: contract} & F=7, P=1 & F=9, P=4 & F=6, P=6 \\
+  \text{Pol: do nothing} & F=8, P=2 & F=5, P=5 & F=4, P=9 \\
+  \text{Pol: expand} & F=3, P=3 & F=2, P=7 & F=1, P=8 \\
+  \hline
+\end{array} 
+$$
+
Politicians can expand or contract fiscal policy, while the Fed can expand or contract monetary policy. (And of course either side can diff --git a/markdown/18-Learning-From-Examples/exercises/ex_16/question.md b/markdown/18-Learning-From-Examples/exercises/ex_16/question.md index 35aa26ec95..63d2b9f6dd 100644 --- a/markdown/18-Learning-From-Examples/exercises/ex_16/question.md +++ b/markdown/18-Learning-From-Examples/exercises/ex_16/question.md @@ -8,14 +8,17 @@ correctly. If multiple tests have the same number of attributes and classify the same number of examples, then break the tie using attributes with lower index numbers (e.g., select $A_1$ over $A_2$).
- -| | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad A_y\quad$ | $\quad y\quad$ | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | 1 | -| $\textbf{x}_2$ | 1 | 0 | 1 | 1 | 1 | -| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 1 | 0 | 0 | -| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | 1 | -| $\textbf{x}_6$ | 0 | 1 | 0 | 1 | 0 | -| $\textbf{x}_7$ | 0 | 0 | 1 | 1 | 1 | -| $\textbf{x}_8$ | 0 | 0 | 1 | 0 | 0 | +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & y \\ + \hline \textbf{x}_1 & 1 & 0 & 0 & 0 & 1 \\ + \textbf{x}_2 & 1 & 0 & 1 & 1 & 1 \\ + \textbf{x}_3 & 0 & 1 & 0 & 0 & 1 \\ + \textbf{x}_4 & 0 & 1 & 1 & 0 & 0 \\ + \textbf{x}_5 & 1 & 1 & 0 & 1 & 1 \\ + \textbf{x}_6 & 0 & 1 & 0 & 1 & 0 \\ + \textbf{x}_7 & 0 & 0 & 1 & 1 & 1 \\ + \textbf{x}_8 & 0 & 0 & 1 & 0 & 0 \\ + \hline +\end{array} +$$ diff --git a/markdown/18-Learning-From-Examples/exercises/ex_21/question.md b/markdown/18-Learning-From-Examples/exercises/ex_21/question.md index f9ce3466b3..52983445f9 100644 --- a/markdown/18-Learning-From-Examples/exercises/ex_21/question.md +++ b/markdown/18-Learning-From-Examples/exercises/ex_21/question.md @@ -1,6 +1,6 @@ -Figure kernel-machine-figure +Figure kernel-machine-figure showed how a circle at the origin can be linearly separated by mapping from the features $(x_1, x_2)$ to the two dimensions $(x_1^2, x_2^2)$. But what if the circle is not located at the origin? What if it is an diff --git a/markdown/18-Learning-From-Examples/exercises/ex_27/question.md b/markdown/18-Learning-From-Examples/exercises/ex_27/question.md index b661f00cbc..4f1152850d 100644 --- a/markdown/18-Learning-From-Examples/exercises/ex_27/question.md +++ b/markdown/18-Learning-From-Examples/exercises/ex_27/question.md @@ -3,15 +3,23 @@ Consider the following set of examples, each with six inputs and one target output:
-| | | | | | | | | | | | | | | | -| --- | --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -| $\textbf{x}_2$ | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | -| $\textbf{x}_3$ | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | -| $\textbf{x}_4$ | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | -| $\textbf{x}_5$ | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | -| $\textbf{x}_6$ | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | -| $\textbf{T}$ | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | + + +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & A_5 & A_6 & A_7 & A_8 & A_9 & A_{10} & A_{11} & A_{12} & A_{13} & A_{14} \\ + \hline + \textbf{x}_1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \textbf{x}_2 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 \\ + \textbf{x}_3 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\ + \textbf{x}_4 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\ + \textbf{x}_5 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ + \textbf{x}_6 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 \\ + \textbf{T} & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ + \hline +\end{array} +$$ + 1. Run the perceptron learning rule on these data and show the diff --git a/markdown/18-Learning-From-Examples/exercises/ex_7/question.md b/markdown/18-Learning-From-Examples/exercises/ex_7/question.md index d1ccfb1a88..f202272a06 100644 --- a/markdown/18-Learning-From-Examples/exercises/ex_7/question.md +++ b/markdown/18-Learning-From-Examples/exercises/ex_7/question.md @@ -1,6 +1,6 @@ -\[nonnegative-gain-exercise\]Suppose that an attribute splits the set of +Suppose that an attribute splits the set of examples $E$ into subsets $E_k$ and that each subset has $p_k$ positive examples and $n_k$ negative examples. 
Show that the attribute has strictly positive information gain unless the ratio diff --git a/markdown/18-Learning-From-Examples/exercises/ex_8/question.md b/markdown/18-Learning-From-Examples/exercises/ex_8/question.md index 6323f57807..45dbba943c 100644 --- a/markdown/18-Learning-From-Examples/exercises/ex_8/question.md +++ b/markdown/18-Learning-From-Examples/exercises/ex_8/question.md @@ -3,15 +3,17 @@ Consider the following data set comprised of three binary input attributes ($A_1, A_2$, and $A_3$) and one binary output:
-| $\quad \textbf{Example}$ | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad Output\space y$ | -| --- | --- | --- | --- | --- | -| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | -| $\textbf{x}_2$ | 1 | 0 | 1 | 0 | -| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | -| $\textbf{x}_4$ | 1 | 1 | 1 | 1 | -| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | - - +$$ +\begin{array} + {|r|r|}\hline \textbf{Example} & A_1 & A_2 & A_3 & Output\space y \\ + \hline \textbf{x}_1 & 1 & 0 & 0 & 0 \\ + \textbf{x}_2 & 1 & 0 & 1 & 0 \\ + \textbf{x}_3 & 0 & 1 & 0 & 0 \\ + \textbf{x}_4 & 1 & 1 & 1 & 1 \\ + \textbf{x}_5 & 1 & 1 & 0 & 1 \\ + \hline +\end{array} +$$ Use the algorithm in Figure DTL-algorithm (page DTL-algorithm) to learn a decision tree for these data. Show the computations made to determine the attribute to split at each node. diff --git a/markdown/4-Beyond-Classical-Search/exercises/ex_12/question.md b/markdown/4-Beyond-Classical-Search/exercises/ex_12/question.md index de6337898d..650166a140 100644 --- a/markdown/4-Beyond-Classical-Search/exercises/ex_12/question.md +++ b/markdown/4-Beyond-Classical-Search/exercises/ex_12/question.md @@ -4,9 +4,9 @@ We can turn the navigation problem in Exercise path-planning-exercise into an environment as follows:
-- The percept will be a list of the positions, *relative to the - agent*, of the visible vertices. The percept does - *not* include the position of the robot! The robot must +- The percept will be a list of the positions, relative to the + agent, of the visible vertices. The percept does + not include the position of the robot! The robot must learn its own position from the map; for now, you can assume that each location has a different “view.”
@@ -15,7 +15,7 @@ follows:
otherwise, the robot stops at the point where its path first intersects an obstacle. If the agent returns a zero motion vector and is at the goal (which is fixed and known), then the environment - teleports the agent to a *random location* (not inside + teleports the agent to a random location (not inside an obstacle).
- The performance measure charges the agent 1 point for each unit of diff --git a/markdown/4-Beyond-Classical-Search/exercises/ex_13/question.md b/markdown/4-Beyond-Classical-Search/exercises/ex_13/question.md index 1cd55f8a00..861069f949 100644 --- a/markdown/4-Beyond-Classical-Search/exercises/ex_13/question.md +++ b/markdown/4-Beyond-Classical-Search/exercises/ex_13/question.md @@ -4,8 +4,8 @@ Suppose that an agent is in a $3 \times 3$ maze environment like the one shown in Figure maze-3x3-figure. The agent knows that its initial location is (1,1), that the goal is at (3,3), and that the -actions *Up*, *Down*, *Left*, *Right* have their usual -effects unless blocked by a wall. The agent does *not* know +actions Up, Down, Left, Right have their usual +effects unless blocked by a wall. The agent does not know where the internal walls are. In any given state, the agent perceives the set of legal actions; it can also tell whether the state is one it has visited before.
@@ -20,7 +20,7 @@ has visited before.
3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?
-Notice that this contingency plan is a solution for *every -possible environment* fitting the given description. Therefore, +Notice that this contingency plan is a solution for every +possible environment fitting the given description. Therefore, interleaving of search and execution is not strictly necessary even in unknown environments. diff --git a/markdown/4-Beyond-Classical-Search/exercises/ex_5/question.md b/markdown/4-Beyond-Classical-Search/exercises/ex_5/question.md index 7682a672a6..796de94938 100644 --- a/markdown/4-Beyond-Classical-Search/exercises/ex_5/question.md +++ b/markdown/4-Beyond-Classical-Search/exercises/ex_5/question.md @@ -1,10 +1,10 @@ -The **And-Or-Graph-Search** algorithm in +The And-Or-Graph-Search algorithm in Figure and-or-graph-search-algorithm checks for repeated states only on the path from the root to the current state. Suppose that, in addition, the algorithm were to store -*every* visited state and check against that list. (See in +every visited state and check against that list. (See in Figure breadth-first-search-algorithm for an example.) Determine the information that should be stored and how the algorithm should use that information when a repeated state is found. diff --git a/markdown/4-Beyond-Classical-Search/exercises/ex_6/question.md b/markdown/4-Beyond-Classical-Search/exercises/ex_6/question.md index b5f1f2fad8..f84f568192 100644 --- a/markdown/4-Beyond-Classical-Search/exercises/ex_6/question.md +++ b/markdown/4-Beyond-Classical-Search/exercises/ex_6/question.md @@ -1,9 +1,9 @@ -Explain precisely how to modify the **And-Or-Graph-Search** algorithm to +Explain precisely how to modify the And-Or-Graph-Search algorithm to generate a cyclic plan if no acyclic plan exists. 
You will need to deal with three issues: labeling the plan steps so that a cyclic plan can -point back to an earlier part of the plan, modifying **Or-Search** so that it +point back to an earlier part of the plan, modifying Or-Search so that it continues to look for acyclic plans after finding a cyclic plan, and augmenting the plan representation to indicate whether a plan is cyclic. Show how your algorithm works on (a) the slippery vacuum world, and (b) diff --git a/markdown/4-Beyond-Classical-Search/exercises/ex_9/question.md b/markdown/4-Beyond-Classical-Search/exercises/ex_9/question.md index 53af6ded44..507ab4f07a 100644 --- a/markdown/4-Beyond-Classical-Search/exercises/ex_9/question.md +++ b/markdown/4-Beyond-Classical-Search/exercises/ex_9/question.md @@ -7,8 +7,8 @@ what happens when the assumption does not hold. Does the notion of optimality still make sense in this context, or does it require modification? Consider also various possible definitions of the “cost” of executing an action in a belief state; for example, we could use the -*minimum* of the physical costs; or the -*maximum*; or a cost *interval* with the lower +minimum of the physical costs; or the +maximum; or a cost interval with the lower bound being the minimum cost and the upper bound being the maximum; or just keep the set of all possible costs for that action. 
For each of these, explore whether A* (with modifications if necessary) can return diff --git a/markdown/6-Constraint-Satisfaction-Problems/exercises/ex_14/question.md b/markdown/6-Constraint-Satisfaction-Problems/exercises/ex_14/question.md index b8524e2a43..18efd18990 100644 --- a/markdown/6-Constraint-Satisfaction-Problems/exercises/ex_14/question.md +++ b/markdown/6-Constraint-Satisfaction-Problems/exercises/ex_14/question.md @@ -1,7 +1,7 @@ -AC-3 puts back on the queue *every* arc -($X_{k}, X_{i}$) whenever *any* value is deleted from the +AC-3 puts back on the queue every arc +($X_{k}, X_{i}$) whenever any value is deleted from the domain of $X_{i}$, even if each value of $X_{k}$ is consistent with several remaining values of $X_{i}$. Suppose that, for every arc ($X_{k}, X_{i}$), we keep track of the number of remaining values of diff --git a/markdown/8-First-Order-Logic/exercises/ex_1/question.md b/markdown/8-First-Order-Logic/exercises/ex_1/question.md index 720453f7d5..af9cbf49bf 100644 --- a/markdown/8-First-Order-Logic/exercises/ex_1/question.md +++ b/markdown/8-First-Order-Logic/exercises/ex_1/question.md @@ -9,14 +9,14 @@ about the country—it represents facts with a map language. The two-dimensional structure of the map corresponds to the two-dimensional surface of the area.
-1. Give five examples of *symbols* in the map language.
+1. Give five examples of symbols in the map language.
-2. An *explicit* sentence is a sentence that the creator +2. An explicit sentence is a sentence that the creator of the representation actually writes down. An - *implicit* sentence is a sentence that results from + implicit sentence is a sentence that results from explicit sentences because of properties of the analogical - representation. Give three examples each of *implicit* - and *explicit* sentences in the map language.
+ representation. Give three examples each of implicit + and explicit sentences in the map language.
3. Give three examples of facts about the physical structure of your country that cannot be represented in the map language.
diff --git a/markdown/9-Inference-In-First-Order-Logic/exercises/ex_14/question.md b/markdown/9-Inference-In-First-Order-Logic/exercises/ex_14/question.md
index 1676b73d05..e213c2d19e 100644
--- a/markdown/9-Inference-In-First-Order-Logic/exercises/ex_14/question.md
+++ b/markdown/9-Inference-In-First-Order-Logic/exercises/ex_14/question.md
@@ -4,7 +4,7 @@
 Suppose we put into a logical knowledge base a segment of the U.S.
 census data listing the age, city of residence, date of birth, and
 mother of every person, using social security numbers as
 identifying constants for each person. Thus, George’s age is given by
-${Age}(\mbox{{443}}-{65}-{1282}}, {56})$. Which of the following
+${Age}(\text{443-65-1282}, 56)$. Which of the following
 indexing schemes S1–S5 enable an efficient solution for which of the
 queries Q1–Q4 (assuming normal backward chaining)?