You'd like to use polynomial regression to predict a student's final exam score from their midterm exam score. Concretely, suppose you want to fit a model of the form $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2$, where $x_1$ is the midterm score and $x_2$ is $(\text{midterm score})^2$. Further, you plan to use both feature scaling (dividing by the "max-min", or range, of a feature) and mean normalization.
What is the normalized feature $x_2^{(2)}$? (Hint: midterm = 72, final = 74 is training example 2.) Round your answer to two decimal places and enter it in the text box below.
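A minimal sketch of the computation, assuming an illustrative four-example dataset in which example 2's midterm score is 72 per the hint (the other scores here are hypothetical stand-ins, not necessarily the quiz's actual training set):

```python
import numpy as np

# Hypothetical midterm scores; only example 2's score (72) comes from the hint.
midterm = np.array([89.0, 72.0, 94.0, 69.0])
x2 = midterm ** 2  # the squared-midterm feature x_2

# Mean normalization plus range scaling: (x - mean) / (max - min)
x2_norm = (x2 - x2.mean()) / (x2.max() - x2.min())

# Training example 2 is the second row (index 1).
print(round(x2_norm[1], 2))
```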
You run gradient descent for 15 iterations with $\alpha = 0.3$ and compute $J(\theta)$ after each iteration. You find that the value of $J(\theta)$ decreases quickly and then levels off. Based on this, which of the following conclusions seems most plausible?
$\alpha = 0.3$ is an effective choice of learning rate.
Rather than use the current value of $\alpha$, it'd be more promising to try a smaller value of $\alpha$ (say $\alpha = 0.1$).
Rather than use the current value of $\alpha$, it'd be more promising to try a larger value of $\alpha$ (say $\alpha = 1.0$).
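To see this behavior for yourself, here is a minimal sketch (a hypothetical helper, not from the quiz) of batch gradient descent that records $J(\theta)$ after every iteration; a history that drops quickly and then flattens out is the signature of a well-chosen learning rate:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.3, iters=15):
    """Batch gradient descent that records J(theta) after each iteration."""
    m, n = X.shape
    theta = np.zeros(n)
    J_history = []
    for _ in range(iters):
        err = X @ theta - y                        # residuals, shape (m,)
        theta = theta - (alpha / m) * (X.T @ err)  # simultaneous update of all theta_j
        err = X @ theta - y
        J_history.append((err @ err) / (2 * m))    # J(theta) = (1/2m) * sum of squared errors
    return theta, J_history

# Toy data (hypothetical): y = 1 + 2x, with the feature already on a small scale.
X = np.c_[np.ones(5), np.linspace(0, 1, 5)]
y = 1 + 2 * np.linspace(0, 1, 5)
theta, J_history = gradient_descent(X, y)
# J_history decreasing quickly and then leveling off supports the first option.
```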
Suppose you have $m = 23$ training examples with $n = 5$ features (excluding the additional all-ones feature for the intercept term, which you should add). The normal equation is $\theta = (X^T X)^{-1} X^T y$. For the given values of $m$ and $n$, what are the dimensions of $\theta$, $X$, and $y$ in this equation?
$X$ is $23 \times 5$, $y$ is $23 \times 1$, $\theta$ is $5 \times 5$
$X$ is $23 \times 6$, $y$ is $23 \times 1$, $\theta$ is $6 \times 1$
$X$ is $23 \times 5$, $y$ is $23 \times 1$, $\theta$ is $5 \times 1$
$X$ is $23 \times 6$, $y$ is $23 \times 6$, $\theta$ is $6 \times 6$
Solution: First, note that $y$ must be a single column, which rules out option D; $X$ has $n + 1$ columns and $\theta$ has $n + 1$ rows, so option B is correct.
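A quick NumPy check of these dimensions (random placeholder data; `np.linalg.solve` is used instead of forming the inverse explicitly):

```python
import numpy as np

m, n = 23, 5
X = np.c_[np.ones((m, 1)), np.random.rand(m, n)]  # 23 x 6 after adding the all-ones column
y = np.random.rand(m, 1)                          # 23 x 1
theta = np.linalg.solve(X.T @ X, X.T @ y)         # solves (X^T X) theta = X^T y
print(X.shape, y.shape, theta.shape)              # (23, 6) (23, 1) (6, 1)
```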
Suppose you have a dataset with $m = 1{,}000{,}000$ examples and $n = 200{,}000$ features for each example. You want to use multivariate linear regression to fit the parameters $\theta$ to your data. Should you prefer gradient descent or the normal equation?
Gradient descent, since $(X^T X)^{-1}$ will be very slow to compute in the normal equation.
The normal equation, since gradient descent might be unable to find the optimal $\theta$.
Gradient descent, since it will always converge to the optimal $\theta$.
The normal equation, since it provides an efficient way to directly find the solution.
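The expense behind the first option comes from solving the $(n + 1) \times (n + 1)$ system $X^T X \theta = X^T y$, whose cost grows roughly cubically in $n$ (even forming $X^T X$ costs on the order of $m n^2$ operations), whereas one batch gradient-descent step is only $O(mn)$. A rough timing sketch at much smaller, hypothetical sizes:

```python
import numpy as np
import time

# Hypothetical sizes, tiny compared with m = 10^6, n = 2 * 10^5 in the question;
# the solve time grows quickly as n doubles.
for n in (200, 400, 800):
    m = 5 * n
    X = np.random.rand(m, n)
    y = np.random.rand(m)
    t0 = time.perf_counter()
    np.linalg.solve(X.T @ X, X.T @ y)  # the normal-equation solve
    print(f"n = {n}: {time.perf_counter() - t0:.3f} s")
```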
Which of the following are reasons for using feature scaling?
It speeds up gradient descent by making it require fewer iterations to get to a good solution.
It is necessary to prevent the normal equation from getting stuck in local optima.
It speeds up gradient descent by making each iteration of gradient descent less expensive to compute.
It prevents the matrix $X^T X$ (used in the normal equation) from being non-invertible (singular/degenerate).
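To see how scaling cuts the number of iterations (the first reason listed), here is a small illustration with hypothetical housing-style data: scaling evens out the curvature of $J(\theta)$, which shows up as a much smaller condition number of $X^T X$, and rounder contours let gradient descent take larger steps.

```python
import numpy as np

rng = np.random.default_rng(0)
size_sqft = rng.uniform(500.0, 4000.0, 50)       # feature on a large scale
bedrooms = rng.integers(1, 6, 50).astype(float)  # feature on a small scale

def scale(v):
    """Mean normalization with range scaling, as in question 1."""
    return (v - v.mean()) / (v.max() - v.min())

X_raw = np.c_[np.ones(50), size_sqft, bedrooms]
X_scaled = np.c_[np.ones(50), scale(size_sqft), scale(bedrooms)]

# A large condition number means elongated contours and slow gradient descent.
print(np.linalg.cond(X_raw.T @ X_raw))        # huge for the unscaled features
print(np.linalg.cond(X_scaled.T @ X_scaled))  # much smaller after scaling
```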