We’ve released EWAVES 2.0b (beta), a from-the-ground-up rewrite of EWAVES 1.X. If you haven’t read about it yet, go here (http://www.ewaves.com/monthlypublication/1701) for the previous issue of EWAVES Flash.
Our first EWAVES 2.0b-powered recommendation was on January 23rd, three days after launch. The Stocks Flash (http://www.ewaves.com/flashservices) service included (as promised) a machine-generated chart to demonstrate the setup: It anticipated an imminent decline in the share price of the second-largest telecom company in the world. The recommendation was still active when Robert Folsom reviewed the forecast in his February 2nd Club EWI (https://www.youtube.com/watch?v=k2vzm166gZA) Chart of the Day: Artificial Intelligence in Action. Click below to play the video.
A few weeks ago, an EWAVES team member called me from offsite to discuss a technical problem. At the end of our conversation he said, “Two years ago, if you had told me I’d be this intimately involved in a complete reboot of EWAVES—and that we would actually get this far—I’d have thought you were insane.”
He has a point. While we were prototyping ideas for 2.0, there was significant contention about the cost of a rebuild.
Couldn’t we instead continue to work on the old version? For a while we did, supplying numerous patches to the old program. The difficulties with patching, however, made it rapidly apparent that we needed to architect a new and modern EWAVES framework, and then work within that environment to improve EWAVES going forward.
The 2.0 project was a beast. The first step was to review the 1.0 codebase, and we left no stone unturned.
Today we are familiar with hundreds of thousands of lines of code written by the original Lockheed engineers. We ported, retested and improved much of that code, though after careful review we rejected many components. We completely rewrote and redesigned the analysis engine, strategy module and graphical interface.
Today, the structural changes we have made (and are continuing to make) are so deep that the necessity of our reboot is uncontested. Versus the old software, the new EWAVES is:
- Highly malleable and thus conducive to future improvement.
- More accurate and correct due to an extensive, integrated test infrastructure and peer review processes.
- Faster by approximately two orders of magnitude—i.e. a 100-times speed improvement (read the Must Go Faster (http://www.ewaves.com/1404ewf) issue of EWAVES Flash). More speed means both more detailed analysis and faster research and development.
- Self-documenting. We can convert tests into charts (and vice versa) on the fly to rapidly orient analysts.
- Able to dynamically swap out components at runtime, so analysts can conduct specialized research.
- More debuggable, via new facilities that allow us to investigate why the program produces the analysis that it does.
- Tested by an Idealized Elliott wave generator to verify EWAVES outside the scope of real market data, helping us avoid curve fitting (we explain curve fitting in Lies, Damn Lies and Backtests (http://www.ewaves.com/1506ewf)).
- Built using a client-server architecture. With a single visual interface, we can run and control an unlimited number of EWAVES servers. With our army of “virtual analysts,” we can cover any number of markets.
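To illustrate the fan-out idea behind that last point, here is a minimal Python sketch. All names here (`analyze_market`, the market symbols) are hypothetical stand-ins, not EWAVES’ actual protocol: one controller dispatches markets to a pool of workers and collects the results through a single interface.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one "virtual analyst" server:
# it receives a market symbol and returns an analysis result.
def analyze_market(symbol: str) -> str:
    return f"{symbol}: analysis complete"

markets = ["SPX", "DJIA", "EURUSD", "BTCUSD"]

# One controller fans the work out to a pool of workers,
# mirroring a single-interface, many-servers arrangement.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(analyze_market, markets))

for line in results:
    print(line)
```

Because each market is analyzed independently, the pool size (and hence the number of “analysts”) can grow without changing the controller code.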
The transition to EWAVES 2.0b was a major event. Yet the 2.X framework is even more important, because it enables our team to conduct faster iterative refinements. The process of testing, improving and generalizing (to avoid curve fitting) will eventually lead us to our primary goal: to exit beta. The exit criterion is quite high: we will exit only once we see that EWAVES’ analysis quality matches or exceeds that of human experts on every dataset we test. We anticipate this will coincide with the release of EWAVES 2.1.
Improvements do come easier these days because the 2.X codebase is highly malleable. The software is large, yet is elegantly divided into components that can be swapped out to test improvements to the program. EWAVES 2.0b is so easily alterable that one of the folks at Elliott Wave International (EWI) characterizes what we have as an “Elliott Wave Laboratory” which allows EWP to be formally codified and researched for the very first time.
The Evolution Revolution
Projects often prosper more by way of guided evolution than by explicit planning. In industry this design approach is known as iterative development. Simply put: think big, but test small. Rather than allocate a huge amount of time, labor and expense up front to an untested concept, it’s better to invest in a microcosm of the concept. If the experiment shows promise, larger investment follows.
Tom Wujec’s TED talk “Build a Tower, Build a Team” explains the power of iterative design at work in “The Marshmallow Challenge.” This challenge asks small teams to use dry spaghetti, tape, and yarn to see who can build the tallest tower, complete with a marshmallow on top.
The age and backgrounds of winning teams made for counterintuitive outcomes:
Kindergartners regularly beat business school graduates, in part because the suits spent time jockeying for position within the team. They planned and executed what they thought would be the perfect structure, and then in the last few minutes they added the marshmallow, which often caused their structure to buckle.
By contrast, the kids started with the marshmallow right away, and tried multiple different ways to get it up higher. That’s the secret: start with the marshmallow, which is heavier than most people assume, and tweak as you go.
The business students relied not on iterative design, but instead on a near-opposite approach known as “waterfall development,” where the solution is planned in full before any building takes place. When a problem domain is well understood, explicit design is superior to iteration. Experienced engineers, for example, would likely use waterfall development to win the marshmallow challenge, because they are experts at making detailed building plans with a high degree of certainty.
But waterfall development falls on its face when the problem domain is poorly understood. Business students struggle to define a plan under uncertainty. In contrast, kindergarteners barrel right away into the unknown with tests and experiments. They fearlessly learn as they go. What’s even more interesting is that with each successive prototype, the kids benefit from an exponential increase in the number of paths the project may take, which in turn yields the most interesting and varied designs. The iterative approach is so powerful that the kids do well despite having zero knowledge of the problem domain.
The design methodology strongly affects the risk profile of the effort. As with many real-world projects, effort and investment don’t count if the marshmallow does not stay on top.
The iterative approach begins with a complete solution—a small tower with a marshmallow on top. While not ideal, it greatly reduces the risk of failure. In contrast, many adults who attempt the challenge end up with no tower at all, since they follow a plan with no prototyping along the way. They see mistakes and design flaws too late in the process, which pushes all the project’s risk into the tailend.
A Matter of Degree
Iteration is the best (perhaps the only) way to solve ill-defined and otherwise intractable problems. Yet to call any approach exclusively iterative is a misnomer. The problem with the word “iteration” is that—to use an Elliott wave analogy—it is degree-dependent. For example, a business can build a new website using a waterfall technique, yet keep its older website running. So at a high level, iteration is occurring: an imperfect solution exists. Testing, learning and feedback from the project (and marketplace) add to developers’ knowledge, while the website is functioning to fulfill an important business role, gaining sales that can fund and bootstrap an improved version. The marshmallow stays on top, and the company avoids the risks of taking the new website online before it’s truly ready for release. Developers can iterate at one degree of scale and still design at another degree.
The most effective projects resemble the unfolding of a complete Elliott cycle. Major improvements to EWAVES resemble motive waves, characterized by well-planned, waterfall-style sprints. These sprints are followed by corrective consolidations that include deep testing and refinement of what we developed. Sometimes this leads to such profound insights that we must tear down previous work and rearchitect the solution.
To discard old solutions does not mean the work done on them amounts to a waste of time. Only the actual building process allows one to grok (profoundly understand) the problem and achieve new insights and advances. Across the entire project, the flow of ideas seems to follow a power law: common, small ideas have fairly localized effects, while rarer, larger ideas require dramatic changes. Long-term progress requires periods of consolidation, because the project would become intractable without constantly refactoring the foundation at all degrees of scale. Constant iterative rebuilding ultimately leads to a beautiful and practical structure.
Mechanisms of Malleability
The idea of continuous iteration is great, but to put it into practice we need a framework that provides rapid, accurate feedback on the results of changes. Without feedback, making changes would be too risky: any modification could cause our digital marshmallow tower to fall over. Yet high-risk changes must be made; avoiding them leads to project stagnation via marriage to imperfect foundations.
In the case of the Marshmallow Challenge, feedback is built in because nature provides for physical observation: The tower stands or it falls. But software isn’t physical. The virtual world demands that we create an artificial environment to tell us how EWAVES is performing. We meet this goal primarily through our automated test infrastructure (read the Trust Nothing, Test Everything issue of EWAVES Flash here (http://www.ewaves.com/1501ewf)), which assesses the efficacy of EWAVES. Our graphical interface allows for visual experimentation, while our peer review process allows us to regularly audit and sign off on each other’s work.
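A minimal Python sketch of the kind of automated regression test such an infrastructure relies on (the function and baseline here are toy stand-ins, not actual EWAVES code): a component’s output is pinned to a known-good baseline, so any change that silently alters the analysis fails the test immediately.

```python
def label_swings(prices):
    """Toy stand-in for an analysis step: label each move up or down."""
    return ["up" if b > a else "down" for a, b in zip(prices, prices[1:])]

# Known-good baseline captured from a previous, reviewed run.
EXPECTED = ["up", "up", "down", "up"]

def test_label_swings_matches_baseline():
    # If a refactor changes this output, the test fails before
    # the change is made official.
    assert label_swings([1, 2, 3, 2, 4]) == EXPECTED

test_label_swings_matches_baseline()
print("all tests passed")
```

In a real test suite, hundreds of such baseline checks run on every change, standing in for the physical feedback the marshmallow tower gets for free.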
Instant assessment ensures that the fluid medium of code provides the ultimate flexibility. Automatic tests allow for constant refactoring (the programming equivalent of text editing) to streamline code and make it easier to understand. Refactoring nearly always improves performance. Often it even “automagically” fixes yetundiscovered bugs as a natural side effect of accurately capturing concepts and principles. And just like text editing, removing code can be as beneficial as adding code.
In my experience, the extent of refactoring is the best predictor of whether the outcome is an elegant or an unmanageable project. On the EWAVES team, we maintain a culture which treats code as pliable, to encourage the practice of refactoring. For this to work, each change must pass testing and peer review. These measures ensure the marshmallow remains in place before we make changes official. By getting feedback first, we circumvent a maze of after-the-fact searching for causes of accidental degradation.
The free software world has long made a practice of rapid feedback to drive successful mutation, under the “release early, release often” paradigm. Its efficacy may be best demonstrated by one of the most successful collaborative projects in history:
Linux evolved in a completely different way. From nearly the beginning, it was rather casually hacked on by huge numbers of volunteers coordinating only through the Internet. Quality was maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of rapid Darwinian selection on the mutations introduced by developers. To the amazement of almost everyone, this worked quite well. — Eric S. Raymond, The Cathedral and the Bazaar
Some proposals are too disruptive to apply piecemeal. They may hold longterm promise, yet introducing them could initially cause tests to fail and results to degrade. This is where parallel design comes into play: We can build changes “on the side.” The EWAVES 2 framework allows us to swap out various interchangeable parts, thus we can try radically new ideas without modifying the behavior of the default configuration of the program. When a new, potentially disruptive change finally becomes stable, we can choose to replace the default functionality entirely.
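The swap-a-component idea can be sketched as a simple registry. The class names and detection logic here are hypothetical illustrations, not EWAVES internals: an experimental implementation is built “on the side” and selected by name, leaving the default code path untouched.

```python
# Default component: the stable, tested implementation.
class TrendDetector:
    def detect(self, prices):
        return "up" if prices[-1] > prices[0] else "down"

# Experimental component built "on the side"; it can fail its
# tests without affecting the default configuration.
class ExperimentalDetector:
    def detect(self, prices):
        mid = len(prices) // 2
        return "up" if sum(prices[mid:]) > sum(prices[:mid]) else "down"

REGISTRY = {"default": TrendDetector, "experimental": ExperimentalDetector}

def run_analysis(prices, component="default"):
    # The caller chooses an implementation by name; promoting the
    # experiment later is a one-line change to the registry.
    return REGISTRY[component]().detect(prices)

print(run_analysis([1, 2, 3]))                  # stable default path
print(run_analysis([1, 2, 3], "experimental"))  # parallel design
```

Once the experimental component proves stable under testing, replacing the default is just a matter of remapping the registry entry.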
On rare occasions, however, a new concept diverges too greatly from the original to allow for in-place transformation. A perfect example is the transition from EWAVES 1 to EWAVES 2, in which we changed the programming language, requiring a total rewrite. Yet we continued to iterate on the idea level, even as we completely reinvented the guts.
When engineers go back to the drawing board and create a new design, they do not necessarily throw away the ideas from the old design. But they don’t literally try to deform the old physical object into the new one. The old object is too weighed down with the clutter of history. Maybe you can beat a sword into a ploughshare, but try ‘beating’ a propeller engine into a jet engine! You can’t do it. You have to discard the propeller engine and go back to the drawing board. — Richard Dawkins, The Selfish Gene
With persistence, EWAVES 2.0b has become a reality. We’ve made the transition, so now our research and iteration allows us to push EWAVES forward at an accelerated rate. The next milestone is to exit beta. I anticipate many major and unexpected breakthroughs along the way.
Extra: Bitcoin Interview
On July 19, 2017, CNBC interviewed me regarding Bitcoin; you can access the interview here: Bitcoin bubble dwarfs tulip mania from 400 years ago, Elliott Wave analyst says (http://www.cnbc.com/2017/07/20/bitcoin-bubble-dwarfs-tulip-mania-from-400-years-ago-elliott-wave.html)