James Randi on audio equipment

Discussion in 'Audio Hardware' started by jdmack, Apr 28, 2006.

Thread Status:
Not open for further replies.
  1. LeeS

    LeeS Music Fan

    Location:
    Atlanta
    I think everyone here is missing perhaps the biggest assumption: that science can accurately measure an audio experience. In my experience, that is beyond current science.
     
  2. Roland Stone

    Roland Stone Offending Member

    A blind listening test does not claim to measure anything except the listener's ability to identify differences, regardless of the rationale behind them. It makes no claim to explain or measure those differences, only the ability to detect them.
     
  3. Black Elk

    Black Elk Music Lover

    Location:
    Bay Area, U.S.A.
    Agreed.

    I don't entirely agree. There is an attempt to measure the listener's grading of the perceived difference and, with sufficient sampling, to gain a measure of how big or small the perceived difference really is. For example, if the reference is 16-bit/44.1 kHz PCM and you are comparing it to a lossy compressed version of the same music, you would expect to see a pattern whereby differences become more noticeable, and receive lower grades relative to the reference, as the data rate of the compressed audio decreases.

    This is why in double-blind tests with a hidden stimulus there is usually a training period, during which it is relatively easy for the listener to distinguish between the reference and the compressed sample. You then gradually make the test more difficult by introducing better and better encoding until it is very difficult or impossible to distinguish a difference. You can also extend the grading system beyond a simple point scale to include comments describing the perceived difference (e.g., image smaller, noticeable HF distortion, etc.), since this can be useful to the designers/developers.
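    To make the grading idea concrete, here is a minimal sketch (my own illustration; the conditions and grades are made up, not data from any actual test) of how per-condition results might be aggregated, with lower bit rates expected to trend toward lower mean grades relative to the reference:

        # Sketch only: all conditions and grade values below are hypothetical.
        # Grade scale: 5 = indistinguishable from reference ... 1 = very annoying.
        from statistics import mean, stdev

        grades = {
            "reference (16/44.1 PCM)": [5, 5, 5, 5, 5],
            "lossy @ 256 kbps":        [5, 4, 5, 4, 5],
            "lossy @ 128 kbps":        [4, 4, 3, 4, 3],
            "lossy @ 64 kbps":         [2, 3, 2, 2, 1],
        }

        for condition, scores in grades.items():
            print(f"{condition:25s} mean={mean(scores):.2f}  sd={stdev(scores):.2f}")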

    It's no mystery why most reviewers will not subject themselves to double-blind testing.
     
  4. WVK

    WVK Forum Resident

    Location:
    Houston
    I would think that presenting both subjective and objective results could please everyone. If level-matched A/B/X tests resulted in "same" or a preference for the $800 amp, objectivists could say "told you so," and subjectivists could say "that proves A/B/X doesn't work with audio."

    WVK
     
  5. LeeS

    LeeS Music Fan

    Location:
    Atlanta
    I have participated in several DBTs and the only thing they really excel at is discovering who in the audience has critical listening skills.

    The other problem one faces is that it can be hard to hear subtle differences unless one has sufficient time to spend with the equipment. That's all but impossible to do under A/B/X switching.

    Many high-end companies design gear to meet objective requirements as best they can, but subjective listening is often given priority.
     
  6. Black Elk

    Black Elk Music Lover

    Location:
    Bay Area, U.S.A.
    Hmmmm, which causes me to reiterate what I wrote earlier: It's no mystery why most reviewers will not subject themselves to double-blind testing. :)

    Then the test is not being done properly. This is why formal DBT is both expensive and time-consuming. As I wrote in my earlier response, there should be a training period where the listener is given time to become acquainted with the listening room, listening system, the musical snippets, and the test procedure. Then, when prepared, the tests should take place in 20-30 minute periods, with 20-30 minute pauses between tests. Audio memory is very short, so samples should be restricted to 5-10 seconds.

    Of course, any test between two reference amplifiers, for example, will be more difficult, since you may not hear an obvious difference in the training period, but the goal of the test is to see whether a perceivable difference exists. Tests for identical CDs, green pens, cable lifters, etc. would all be difficult from a training perspective; however, the scoring in such tests often changes to also allow "no difference" as an answer.

    Any manufacturer of high-end/high-priced gear that delivers what they claim in terms of sonics should have nothing to fear from a properly run DBT!
     
  7. Metoo

    Metoo Forum Hall Of Fame

    Location:
    Spain (EU)
    Why do I feel that Randi has a lot more 'interesting' businesses to target than high-end audio? If hype is what he wants to target, there are a lot of other businesses to 'uncover' that have much more influence over people in general than the currently-on-life-support world of audiophiles. (Practically any 'added value' product out there today has a 'psychological' element added to it, usually deliberately, BTW; that's part of what makes them 'cool'. For example, if these products were not generally thought highly of, most of their users would not purchase them -- you can make your own list. BTW, I am not referring to audio products in this comment.)

    FYI, IMHO, there are crazy claims made about certain audio products, but there are also some 'real' products that truly deliver what they promise out there.
     
  8. Black Elk

    Black Elk Music Lover

    Location:
    Bay Area, U.S.A.
    And a properly run DBT will reveal them.
     
  9. Scott Wheeler

    Scott Wheeler Forum Resident

    Location:
    ---------------
    "It's no mystery why most reviewers will not subject themselves to double-blind testing."

    Actually I think the real reason is one that is not so well known: *good* DBTs are hard and costly things to do. When you consider that most reviewers are not making a living at it, because there just isn't enough revenue coming into these magazines, it should come as no surprise that none of them do it, not even the objectivist publications. And if any of them tried to do DBTs on the cheap, it would almost certainly be a train wreck. Who needs that? If you want to see a fine example of such a train wreck, just look up the DBTs Howard Ferstler did with some amplifiers for one of the objectivist rags. I think it was The Sensible Sound. It was, if nothing else, good for some belly laughs.
     
  10. Dreadnought

    Dreadnought I'm a live wire. Look at me burn.

    Location:
    Toronto, Canada
    DBT is inadequate for me, and not the truth bearer it is imagined to be. Extended sighted listens tell me more about a component than quick blind switching, which won't tell me whether I'll be fatigued by long listens. It's impressions versus understanding. Heck, sometimes it can take me six months or a year to fully know a component. I think DBT's value is overestimated by its advocates. At least that's been my experience. It's a pity that it's been weaponized: used as a threat by one side, feared by the other.
     
  11. Robin L

    Robin L Musical Omnivore

    Location:
    Fresno, California
    Of course, we all hear differently now, don't we? Maybe there are bigger differences between the hearing of two different people than there are between a Neumann U87 and a Shure 57. Or Bose 901's vs. WATTs. Randi's probably right about learning about acoustics, though. If nothing else, it'll come in handy if you ever find yourself engineering a recording. :)
     
  12. LeeS

    LeeS Music Fan

    Location:
    Atlanta
    This would still not account for the fact that different people have different levels of listening skill. You can't get fair test results with a diverse audience ranging from the casual listener to the trained professional or very experienced listener.
     
  13. Black Elk

    Black Elk Music Lover

    Location:
    Bay Area, U.S.A.
    As I posted earlier in this thread, I'm aware of how costly they are to undertake; I was referring to reviewers who refuse to participate even if someone else is footing the bill for the test. They have everything to fear from DBT, namely the loss of their reputation for being able to distinguish sonic differences between different components.
     
  14. Black Elk

    Black Elk Music Lover

    Location:
    Bay Area, U.S.A.
    Long-term listening is a valid test for fatigue, but if someone claims that something makes an audible improvement, or, in these days of DRM schemes, is completely inaudible, a DBT will confirm (or refute) those claims.
     
  15. Michael St. Clair

    Michael St. Clair Forum Resident

    Location:
    Funkytown
    I've actually read the threads at the Randi forums where they negotiate with the applicants over the testing terms, and I've found their conditions extremely reasonable. The applicants usually flake out: they refuse to be pinned down, repeatedly change their requirements, and in the end rarely go through with it.

    Have any of you who are criticizing how the JREF conducts these tests actually followed these discussions?

    Recently Mythbusters ran a test that was a lot tougher than anything Randi has asked for. A spirits expert correctly identified all of the following, in order!

    top shelf vodka
    cheap vodka run six times through a Brita filter
    cheap vodka run five times through a Brita filter
    cheap vodka run four times through a Brita filter
    cheap vodka run three times through a Brita filter
    cheap vodka run two times through a Brita filter
    cheap vodka run once through a Brita filter
    untreated cheap vodka

    (yep, the filter does actually improve the taste of the vodka...but it doesn't make it top shelf)

    The guy was totally risking his reputation!
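    For perspective, the odds of putting all eight samples in the correct order purely by guessing, assuming every ordering is equally likely, are 1 in 8! = 40,320; a quick check, purely for illustration:

        import math
        print(math.factorial(8))  # 40320 possible orderings of eight distinct samples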
     
  16. Black Elk

    Black Elk Music Lover

    Location:
    Bay Area, U.S.A.
    Of course you can. It depends on what you want to test. Let's take something like Verance, which was claimed to be inaudible. You do a test with trained listeners (all pro recording/mastering engineers), and you give them the choice between unencoded (A), Verance-encoded (B), and the unknown sample (X, which they have to identify as being A or B). If all the listeners can identify the samples with 100% accuracy, the encoding is clearly audible. That's one test result for a specific type of test: can pro listeners detect Verance?
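    As a rough illustration of how such ABX scores are usually judged against chance (the trial counts below are made up, not from any real Verance test), the probability of doing at least that well by guessing follows a simple binomial calculation:

        # Sketch: one-sided probability of getting at least `correct` answers
        # right out of `trials` ABX trials by pure guessing (p = 0.5 per trial).
        from math import comb

        def abx_p_value(correct, trials):
            return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

        print(abx_p_value(14, 16))  # ~0.002: very unlikely to be guessing
        print(abx_p_value(9, 16))   # ~0.40: entirely consistent with guessing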

    Now, assume you are in the music download business. You have a really good lossy compression algorithm. You also know that from a business standpoint you want to make the audio files as small as possible (so you can store more songs per megabyte), which also speeds up download times and makes the consumer happy. However, what you don't know is what level of quality to use. You can use a wide variety of listeners to determine their degree of satisfaction with various coders. You could then choose the lowest quality at which the majority are satisfied, or have different quality levels for different prices, or whatever. It all comes down to what you want to determine from the test. They are all valid test results.
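    That decision rule is simple enough to sketch (the bit rates and satisfaction counts below are hypothetical, purely to show the idea of picking the lowest rate a majority accepts):

        # Sketch: choose the lowest bit rate at which a majority of the
        # listeners polled report being satisfied. All numbers are made up.
        satisfied = {  # bit rate (kbps) -> (listeners satisfied, listeners polled)
            64:  (11, 40),
            96:  (19, 40),
            128: (27, 40),
            192: (36, 40),
        }

        acceptable = [rate for rate, (ok, total) in satisfied.items() if ok / total > 0.5]
        print(min(acceptable))  # 128 kbps: the smallest files a majority still accepts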
     
  17. Black Elk

    Black Elk Music Lover

    Location:
    Bay Area, U.S.A.
    I'm [hic] twying two [hic] dupelickate [hic] that [hic] test [hic] rite know [hic]! :laugh:
     
  18. AndrewS

    AndrewS Senior Member

    Location:
    S. Ontario, Canada
    I'll have to check that episode of Mythbusters out. Sounds very interesting.

    Regarding the topic of DBTs and the mention of long-term testing, etc., what I'm more interested in are the many devices/cables/treatments et al. where people make claims like "I immediately noticed X or Y or Z" and "I immediately heard A or B or C that I've never heard before" and so on. There's no reason why the people making those claims shouldn't be able to back them up via DBTs.
     
  19. Pinknik

    Pinknik Senior Member

    Perhaps, if the psychological effect of seeing a particular component in action, knowing what you think you know about it, is that you THINK it sounds better, then perhaps the psychological effect of DBTs is that you are under so much pressure that your listening acuity is actually lessened. After all, during a fight-or-flight situation, peripheral vision goes and the ability to hear is decreased as well. I don't KNOW, I'm just sayin' is all. :D
     
  20. fjhuerta

    fjhuerta New Member

    Location:
    México City
    I fully agree. The room has an incredible impact on sound reproduction.

    As to why Stereophile won't take Randi's challenge, that's easy - they already failed the Carver challenge, so why should they risk their reputation any more? It's way easier just to use words like "liquid", "smooth", and "effortless" without having to back them up with hard data.
     
  21. fjhuerta

    fjhuerta New Member

    Location:
    México City
    Perhaps extended listening tests let you get used to the price you paid for your new system, and imagine all sorts of things. Why shouldn't a DBT reveal new, different things? It always does when comparing speakers - it should be the same way when comparing amps, preamps and CD players.

    "
     
  22. Dreadnought

    Dreadnought I'm a live wire. Look at me burn.

    Location:
    Toronto, Canada
    Stereophile is a favorite piñata for the blindfolded advocates, and it's true that they are mainly subjective in their reviews, but I also think they don't get enough credit for their use of scientific measurements. Yesterday I was flipping through a back issue (Nov 2004, vol. 27 no. 11) and found their accelerometer measurements of three different isolation platforms (Ginko, S.A.P. Relaxa, Vibraplane) under a VPI Scoutmaster interesting. They always measure amps and the like, but I thought it cool that they did this with items of fuzzier reputation, especially in relation to their price tags. Certainly any mag that relied exclusively on measurements would also be doing a disservice.


    Besides, knowing price isn't always an indication of quality or value: the items in question were two similarly priced power conditioners. I wanted the one with the greater number of outlets and preferred it sonically in every comparison over a number of days. I then gave it one final assessment: an extended listen at good volume to the gamut of recorded material, good, bad, and ugly. In other words, I listened as normal rather than as a psycho-acoustic-proctologist, and came to hate that previously favoured box. It was torture after two straight hours.

    So I don't condemn blind listening; I do use it, but keep in mind its limitations. It's one tool.

    The bottom line for me is that music is a subjective experience, and so the gear dwells inescapably in this region.
     
  23. fjhuerta

    fjhuerta New Member

    Location:
    México City
    Actually, I buy Stereophile because of Atkinson's measurements. It's funny to see the reviewers saying "I loved this piece of gear, it's the best I've ever heard", and then Atkinson saying "this equipment measured so poorly, it must be broken!". :D

    At least it reminds me that maybe flat and accurate is sometimes boring to our ears.
     
  24. WVK

    WVK Forum Resident

    Location:
    Houston
    From Randi's website:

    A FIRM OFFER

    A reader has a few words about "hi-end" audio matters:


    The first person who told me that people who claim supernatural powers never seem to be able to make them work in the presence of magicians, was an old friend named Paul Ierymenko. He worked for me designing and building various electronic products in the mid 70's. He is now the head of R&D at QSC Audio. They're one of the makers of the ABX Comparator. I remember talking with him, back then, about the differences between the sound quality of various audio devices, especially amplifiers. He maintained that any reasonable quality amplifier, operating within its specified limits, is acoustically indistinguishable from any other. Ditto for many other devices as well. He had nothing but contempt for the claims of manufacturers of high end speaker cables and other magical crap like the stuff described in your recent commentary.
    This ABX Comparator is the ideal setup to test audio devices and systems. It generates a random "A or B" switching signal, so that the user does not know whether the item or variable being examined is in or out of the circuit, and it accepts the user's decisions and stores them. When the Moment of Truth arrives, the user sees the results of a proper double-blind test. This is a setup that the audio quacks strenuously avoid, in fear that their fakery will be exposed.

    Today I sent out the following e-mail letter to eleven audio reviewers who showed up on the web pages of the Shakti Stones and P.W.B. Electronics, as endorsers of some audio nonsense mentioned here last week, and to both manufacturers of the devices as well. The letter explains itself:


    My name is James Randi. I am the president of the James Randi Educational Foundation (address and contacts listed below) and I am an investigator of unusual claims. This Foundation has a prize of one million dollars that we offer, details of which are to be found at www.randi.org/research/index.html and www.randi.org/research/challenge.html.
    As a reviewer for a major audio publication, I'm sure that you will find the following offer of great interest, both from the point of view of validating your expert judgment, and adding substantially to your net worth.

    Please refer to www.randi.org/jr/073004an.html#3 and go to the item "THE JREF MILLION IS SURELY WON" to learn of the items — the "Shakti Stones" and P.W.B. Electronics' "Electret Foil" and "Red X Pen" — that I am referring to here. In my opinion — and I have none of your expertise, I freely admit — these are farcical in nature. Yet experts such as yourself have endorsed these products, and that support indicates that the JREF million-dollar prize should surely be offered, either to you personally, or to the manufacturers of these products — who have been similarly informed on this date.

    If you require further information concerning details of this endeavor, please contact me at [email protected] and inquire. This is a valid offer, a serious offer, and a sincere offer. Should any of these products prove to work as advertised, the first person who is able to demonstrate the efficacy of any of them, will be the winner of the JREF prize as described in the rules and details to be found at the above references.

    I await your response with great interest.

    The above e-mail message was sent to:

    Frank Doris, at The Absolute Sound: [email protected]

    Clay Swartz, Clark Johnson, and David Robinson at Positive Feedback: [email protected], [email protected], and [email protected]

    Larry Kaye, Wayne Donnelly, and Bill Brassington at fi: [email protected], [email protected], and [email protected]

    Bascom King at Audio: [email protected]

    Wes Phillips at SoundStage: [email protected]

    Jim Merod at Jazz Times: [email protected]

    Dick Olsher at Enjoy The Music: [email protected]

    Peter and May Belt at "P.W.B. Electronics": [email protected]

    Benjamin Piazza at "Shakti Innovations": [email protected]

    Let's see what reaction is received — if any — to this clearly-outlined challenge. Remember, all we're doing here is asking the reviewers — the trained, experienced experts, the responsible endorsers of these products — to repeat their tests of the items, but this time under double-blind, secure, conditions. And we're making the same offer to the manufacturers, who we would expect to be even more sensitive and capable of performing such tests.

    WE ARE OFFERING ONE MILLION DOLLARS IF THEY CAN DO WHAT THEY CLAIM THEY CAN DO, WHAT THEY DO PROFESSIONALLY, IN A FIELD WHERE THEY CLAIM EXPERTISE FAR BEYOND THAT OF MERE MORTALS. WE ASK FOR NO INVESTMENT FROM THEM, WE DO NOT CHARGE THEM FOR PARTICIPATING — AND WE STAND TO GAIN NOTHING BUT WE DO RISK THE LOSS OF THE MILLION DOLLARS PRIZE MONEY.

    I am a mere mortal, unencumbered by academic degrees or claims of audio expertise. Show me, and win a million.
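    The comparator logic in the quoted description above boils down to a short loop; here is a minimal software sketch of that idea (my own illustration, with a made-up command-line prompt, not anything from the actual QSC hardware or Randi's text):

        # Sketch of the ABX comparator idea: X is randomly assigned to A or B on
        # each trial, the listener's answer is recorded but not revealed, and the
        # tally is shown only at the "Moment of Truth". Hypothetical illustration.
        import random

        def run_abx(trials=16):
            correct = 0
            for _ in range(trials):
                x = random.choice("AB")  # hidden assignment for this trial
                answer = input("Is X device A or device B? ").strip().upper()
                correct += (answer == x)  # stored, not shown until the end
            print(f"Moment of Truth: {correct}/{trials} correct")
            return correct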
     
  25. Tullman

    Tullman Senior Member

    Location:
    Boston MA
    I think system matching is more important than DBT. Of course anyone spending thousands on playback equipment should put some of that money and time into room acoustics.
     