Avoiding Hugging Problems in Neural Net Trading
Have You Hugged Your Neural Network Today?
Have you ever stopped to consider the inner workings of your neural network-based trading systems? Despite their power and complexity, they're not immune to hugging problems – that is, they can struggle when faced with too many inputs. It's like trying to hold onto a wriggling puppy; the more variables you introduce, the less effective your grip becomes.
The Complexity Conundrum
Imagine this: you're building a neural network model and decide to include ADX (Average Directional Index) values from several lookbacks – current, two bars ago, five bars ago, 10 bars ago, and 20 bars ago. That's five inputs from a single indicator! Repeat that for seven variables and you already have 35 inputs; add a few more lookbacks for a 10-day forecast horizon and you're past 50. Sound familiar? Classic old-school development methods often follow this paradigm, but it's like trying to navigate a maze blindfolded – complex and fraught with uncertainty.
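A toy sketch of how the input count multiplies. The indicator names besides ADX are purely illustrative:

```python
# Hypothetical sketch: each of seven indicator variables is sampled at five
# lookbacks, so the raw input count is 7 x 5 = 35 before anything else is added.
indicators = ["adx", "rsi", "macd", "atr", "cci", "roc", "obv"]  # illustrative names
lags = [0, 2, 5, 10, 20]  # current bar, 2, 5, 10, and 20 bars ago

inputs = [f"{name}_lag{lag}" for name in indicators for lag in lags]
print(len(inputs))  # 35 inputs already; a few more lookbacks pushes past 50
```

Every extra lookback multiplies across every variable, which is exactly why input counts balloon so quickly.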
Enter Kernel Methods: The Support Vector Machine
So, what's the solution? One approach is kernel regression, a close relative of the support vector machine (SVM). Unlike neural networks, which start from random initial weights, kernel methods are deterministic – train twice on the same data and you get the same model. They work by constructing an n-dimensional space in which a kernel function separates the data into different classifications. This family of algorithms is closely related to neural network models and can act as an alternative training method for polynomial, radial-basis-function, and multilayer perceptron classifiers.
Building Smarter Models: The Expert Component Approach
But what if we want to use deterministic methods like kernel regression while maintaining robust, understandable models? Here's where the expert component approach comes in. We build smaller models composed of logical components that are predictive on their own. These components can vary and look at the problem differently, increasing the robustness of our solution.
We can build these expert components manually or evolve them using genetic algorithms. They don't always need to give buy/sell outputs; they could be forecasts or advanced composite indicators telling us about trend strength and market mode. The key is letting a neural network or kernel regression combine these components into a composite model.
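A minimal sketch of that combining step, with three hypothetical components (names and data are invented for illustration) merged by a least-squares combiner standing in for the neural network or kernel regression:

```python
import numpy as np

# Hypothetical sketch: three "expert components" each emit a score in [-1, 1]
# (trend, mean-reversion, volatility regime -- names are illustrative), and a
# linear combiner learned by least squares merges them into one composite signal.

rng = np.random.default_rng(0)
n = 200
trend = rng.uniform(-1, 1, n)
meanrev = rng.uniform(-1, 1, n)
regime = rng.uniform(-1, 1, n)

# Synthetic target: pretend the next-bar return leans on trend and regime.
target = 0.6 * trend + 0.3 * regime + 0.05 * rng.standard_normal(n)

components = np.column_stack([trend, meanrev, regime])
weights, *_ = np.linalg.lstsq(components, target, rcond=None)
composite = components @ weights  # the combined expert signal
```

The combiner's weights also tell you which components are pulling their weight – a useful interpretability check before trusting the composite.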
Portfolio Implications: C, TIP, QUAL, EFA, BAC
Let's apply this approach to our portfolio. Consider using the expert component method for holdings like C (Citigroup) and TIP (iShares TIPS Bond ETF). For C, we might have components analyzing price action, volume, and insider trading activity. For TIP, components could focus on interest rate trends, inflation data, and bond market sentiment.
For ETFs like QUAL (iShares MSCI USA Quality Factor ETF) or EFA (iShares MSCI EAFE ETF), expert components might weigh profitability and balance-sheet quality for QUAL or international economic indicators for EFA. And for banks like BAC (Bank of America), components could track lending activity, credit quality, and regulatory changes.
Risks and Opportunities
While this approach offers potential benefits, it's not without risks. Building expert components requires domain expertise, and there's always the chance that a component might stop working effectively over time. Moreover, while combining multiple models increases robustness, it also adds complexity to interpretation.
However, the opportunities are significant. By creating more understandable, robust models, we can make better-informed trading decisions. This could lead to improved performance for our portfolio, especially in volatile markets where complex models often struggle.
So, What's Next?
Ready to give your neural network a hug? Start by breaking down your models into smaller, logical components. Test different combinations and build multiple models. Then combine these expert components into a composite model. Remember, the goal is to create models that are robust, understandable, and capable of handling the complexities of today's markets.
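The steps above can be sketched as a simple screening loop: score each candidate component on its own, keep the individually predictive ones, then form the composite. All names, data, and thresholds here are illustrative:

```python
import numpy as np

# Hypothetical workflow sketch: rank candidate components by how well each one
# predicts the target on its own, drop the weak ones, average the survivors.

rng = np.random.default_rng(1)
n = 300
target = rng.standard_normal(n)

# Three candidate components: two carry real signal, one is pure noise.
components = {
    "comp_a": 0.5 * target + rng.standard_normal(n),
    "comp_b": 0.4 * target + rng.standard_normal(n),
    "comp_c": rng.standard_normal(n),
}

def ic(signal, target):
    # Information coefficient: plain correlation of the signal with the target.
    return np.corrcoef(signal, target)[0, 1]

# Keep only components that are predictive on their own (threshold is arbitrary).
kept = {k: v for k, v in components.items() if abs(ic(v, target)) > 0.1}
composite = sum(kept.values()) / len(kept)
```

Screening components individually before combining them is what keeps the final model understandable: every surviving piece has to justify itself.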