Notes for Friday, April 21
- We define a linear classifier as a function that takes points in a multi-dimensional space (think different variables) and maps each point to positive or negative. We'll do this by defining a classifier vector in the same variable space. The "prediction" (positive or negative) will be the sign of the inner product of the data vector and the classifier vector. If the two vectors point in roughly the same direction (angle less than 90°), the inner product is positive; if they point in roughly opposite directions (angle greater than 90°), it's negative. If the inner product is zero, the data vector lies exactly on the decision boundary. (See the prediction sketch after this list.)
- How do we find the best classifier vector? It should point toward the positive data vectors and away from the negative ones. The perceptron algorithm tries to build a good classifier as quickly as possible by updating the classifier vector only when it misclassifies an example, and ignoring the data vectors that are already being handled correctly.
- But how do we update the classifier vector on the examples it gets wrong? In this work we'll show visually, using animation, how this works. (A sketch of the standard update rule also follows this list.)
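To make the prediction rule concrete, here is a minimal sketch in Python. It assumes NumPy; the function name `predict` and the tie-breaking convention of reporting a zero inner product as positive are illustrative choices, not part of the notes.

```python
import numpy as np

def predict(w, x):
    """Classify x by the sign of its inner product with the classifier vector w.

    Convention (an assumption, not from the notes): a zero inner product,
    i.e. a point exactly on the boundary, is reported as positive.
    """
    return 1 if np.dot(w, x) >= 0 else -1

w = np.array([1.0, 2.0])
print(predict(w, np.array([3.0, 1.0])))    # roughly the same direction -> 1
print(predict(w, np.array([-3.0, -1.0])))  # opposite direction -> -1
```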
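And a minimal sketch of the standard perceptron update, again assuming NumPy, labels in {+1, -1}, and illustrative names (`train_perceptron`, `epochs`): on a misclassified example, the classifier vector is nudged by adding the data vector for a positive label and subtracting it for a negative one.

```python
import numpy as np

def train_perceptron(points, labels, epochs=10):
    """Perceptron: start from the zero vector, update only on mistakes.

    points: (n, d) array of data vectors; labels: entries in {+1, -1}.
    """
    w = np.zeros(points.shape[1])
    for _ in range(epochs):
        for x, y in zip(points, labels):
            # Correctly handled points satisfy y * (w . x) > 0 and are ignored.
            if y * np.dot(w, x) <= 0:
                # Misclassified: nudge w toward positive data vectors (y = +1)
                # and away from negative ones (y = -1).
                w = w + y * x
    return w

X = np.array([[2.0, 1.0], [1.0, 3.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
print(train_perceptron(X, y))  # [2. 1.], which classifies all four points
```

This additive step is the motion the animations are meant to show geometrically: the classifier vector rotates toward misclassified positive points and away from misclassified negative ones.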