First, that test was purely about lubricity and no one has claimed otherwise. While some of these products had, or were claimed to have, other performance benefits, those aspects were not tested.
Also, you guys do realize that Arlen Spicer was just a guy with a few bucks who had Southwest Research (a big lab, google them) do the testing? He just gathered up the popular products and paid to have them tested. There was no corporate horse in the race. The good products stood out and the bad ones didn't. That test inspired several of the companies that had a poor showing to go on the defensive.
What you may think you "feel" may not equate to any real, measurable results. There are very human reactions to certain things (advertising among them) that make us think we feel something when what we feel cannot be measured quantitatively. I've been involved in testing automotive products for more than 20 years now and have seen it many times. I had to beat those tendencies out of myself in order to be as objective as possible in evaluating the effects of products. It's commonly called the placebo effect, and every human is vulnerable to it. No shame in that, but it's a fact of life. This is why I am now almost militantly skeptical when comparing people's feelings about products to actual testing where cause and effect are accurately measured.
Decades ago, in the '60s, a group of car enthusiasts was brought in to evaluate some new performance products on a Corvette without being told exactly what they were. They drove the stock car and then came back and drove the "modified" car. To a person, they were wildly enthusiastic about the great performance increase. In fact, all that was done was to install loud, fast-sounding mufflers. It was engineered so that the car gained no power or performance; it just sounded faster, and a group of "car guys" was fooled by it.
I agree about not necessarily placing high confidence in test results produced by a company on its own product. Seldom are they outright fabrications but, most often, they are "best case scenarios" where you see the best dyno run, or where everything is optimized in ideal conditions. Also, a gain or claim may not apply to your exact situation. Say a company claims, "Product X delivers a 25% benefit on a Ford IDI engine." OK, then you find their test engine was a 155 hp 6.9L industrial engine. Well, it's an IDI, but maybe you don't have a turned-down, 155 hp industrial engine in your pickup. 25 percent above 155 hp only brings it to a little more than stock pickup-rated power. Measured against an IDI rated at pickup power, the gain from that product is only about 5%. In other words, your results may vary. For myself, I automatically dial any result back at least 10 percent across the board for companies I trust, unless I know the exact test scenario and how it relates to my setup. An example would be knowing that a product that produced result X on a 6.9 would produce result Y on a 7.3, or on an NA versus a TD.
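To make the rescaling concrete, here is a minimal sketch of that arithmetic. The 25% claim and the 155 hp industrial engine come from the example above; the 185 hp pickup-rated figure is an assumption chosen for illustration (it is roughly consistent with the ~5% residual gain mentioned), so substitute your own engine's actual rating.

```python
# Rescale a vendor's claimed percentage gain to a different baseline engine.
claimed_gain = 0.25        # "25% benefit" from the vendor's test
test_engine_hp = 155.0     # the industrial engine the vendor tested on
pickup_rated_hp = 185.0    # ASSUMED pickup rating; use your engine's real number

# Power the vendor's test engine made after the claimed gain
boosted_hp = test_engine_hp * (1 + claimed_gain)

# The same absolute output, expressed as a gain over the pickup-rated engine
effective_gain = boosted_hp / pickup_rated_hp - 1

print(f"{boosted_hp:.2f} hp is only a {effective_gain:.1%} gain "
      f"over a {pickup_rated_hp:.0f} hp pickup engine")
```

The point of the sketch is that a percentage claim is only meaningful relative to the baseline it was measured on; translating it to your baseline can shrink a headline 25% down to single digits.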
The bottom line for me is always the money: how much of a measurable effect am I getting for what I'm spending?