Reshaping AI Trading: Kernel Regression & Support Vector Machines

Computer Science · Published: June 01, 2010

In the dynamic realm of finance, artificial intelligence (AI) has long been a topic of interest. Yet, progress in AI trading stagnated in the mid to late 1990s due to the complexities and user resistance associated with popular tools. However, recent advancements are shifting focus towards kernel regression and support vector machines (SVM), two innovative techniques that are reshaping AI trading.

The Intersection of Neural Networks and Kernel Regression

Neural networks offer significant potential, but their results can be erratic: outcomes depend on random weight initialization and on the particulars of each training run. This inconsistency has hindered widespread adoption of AI trading. Kernel regression, a supervised modeling method, addresses these concerns directly.

Unlike neural networks, which often demand repeated retraining to overcome initial-condition problems, kernel regression starts from stable conditions: the model is constructed directly from the input/output data, with no iterative training loop to repeat. Kernel regression does face its own challenge, losing robustness as the number of inputs grows (a form of the curse of dimensionality), but with thoughtful design and domain expertise these hurdles can be overcome.
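To make this concrete, here is a minimal NumPy sketch of Nadaraya-Watson kernel regression, one standard form of the technique. The function name, Gaussian kernel, and bandwidth value are illustrative choices, not details from the article; the point is that the model is built directly from the data, with no iterative training loop.

```python
import numpy as np

def kernel_regression(x_train, y_train, x_query, bandwidth=0.5):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    Each prediction is a weighted average of the training targets,
    with weights given by a Gaussian kernel on the distance from the
    query point. No iterative training is required.
    """
    # Pairwise squared distances between query points and training points.
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    weights = np.exp(-d2 / (2.0 * bandwidth ** 2))
    # Normalize each row of weights, then average the targets.
    return (weights @ y_train) / weights.sum(axis=1)

# Noisy sine data: the smoother recovers the curve in one pass.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + rng.normal(0, 0.1, x.size)
y_hat = kernel_regression(x, y, x, bandwidth=0.3)
```

Note the trade-off the text mentions: the bandwidth must be chosen with care, and the scheme degrades as the number of input dimensions grows.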

Support Vector Machines: A Compelling Alternative

A support vector machine (SVM) is an algorithm that constructs an n-dimensional space in which to separate data into different classes. It shares similarities with neural network models: an SVM using a sigmoid kernel function is equivalent to a two-layer feed-forward neural network. By employing a kernel function, SVMs provide an alternative training method for polynomial, radial-basis-function, and multi-layer perceptron classifiers.
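The kernel functions named above are simple to write down. The sketch below shows common forms; the parameter defaults (degree, gamma, offset) are illustrative values, not prescribed by the article.

```python
import numpy as np

def linear_kernel(x, y):
    # Plain inner product: no transformation of the input space.
    return np.dot(x, y)

def polynomial_kernel(x, y, degree=3, c=1.0):
    # Implicitly maps inputs to all monomials up to the given degree.
    return (np.dot(x, y) + c) ** degree

def rbf_kernel(x, y, gamma=0.5):
    # Radial-basis function: similarity decays with squared distance.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sigmoid_kernel(x, y, gamma=0.1, c=0.0):
    # The tanh form that parallels a two-layer feed-forward network.
    return np.tanh(gamma * np.dot(x, y) + c)

x, y = np.array([1.0, 2.0]), np.array([2.0, 0.5])
k = rbf_kernel(x, y)  # a similarity score in (0, 1]
```

Swapping the kernel changes the implicit feature space while the training procedure itself stays the same.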

This approach involves solving a quadratic programming problem with linear constraints, rather than the non-convex, unconstrained minimization problem of standard neural network training. Because the quadratic program is convex, training reliably converges to a global optimum, which makes SVMs easier to optimize. Essentially, an SVM finds the optimal hyperplane separating clusters of vectors, so that cases with one category of the target variable fall on one side of the plane while cases with the other category fall on the opposite side.

Illustrating Support Vector Machines

To gain a clearer understanding of SVMs, let's consider a two-dimensional example. Suppose we have a dataset with a target variable containing two categories and two predictor variables with continuous values. If we represent the data points using one predictor on the X-axis and the other on the Y-axis, we might visualize it as shown below.

![Cuts two ways](https://i.imgur.com/jYZ4zTc.png)

In this idealized example, a single straight line (a hyperplane in two dimensions) places all cases of one category on one side and all cases of the other category on the opposite side, demonstrating how an SVM separates data.